ATM transactions involve several different participants, including the customer at the terminal (the “consumer”); financial institutions such as banks, thrifts, and credit unions; and other entities involved in electronically processing transactions. Financial institutions typically issue account holders debit cards that can be used for purchases at the point of sale or to conduct ATM transactions. ATM owners can be depository institutions, such as banks and credit unions, or public or private companies, such as merchants or independent operators that specialize in offering ATMs and related processing and support services. Independent operators may own their ATMs as well as provide ATMs under contract to merchants (for which the independent operators provide processing and other support services). Some independently owned ATMs are “branded” ATMs, where an independent firm owns and operates the machine but a financial institution pays for the right to display its logo on the terminal and to allow its customers to access the machine free of charge. Independent ATM operators must have a depository institution that sponsors their membership in the EFT networks that process ATM transactions. Financial institutions have a relationship with account holders outside of ATM transactions, while independent ATM operators’ sole relationship with the consumer is through use of the ATM. The EFT networks provide the infrastructure that allows funds to be transferred electronically and provide a means for an ATM card from one financial institution to be used at another financial institution’s or independent operator’s ATM. EFT networks route transactions between the ATMs and the card-issuing financial institutions and act as a clearinghouse to settle those transactions. They establish the rules and requirements for any financial institution that chooses to participate in the EFT network. These networks perform millions of transactions monthly.
The steps involved in an ATM transaction depend on whether a consumer uses an ATM owned by his or her financial institution (typically referred to as an “on-us” transaction) or an ATM owned by another financial institution or firm (typically referred to as an “off-us” transaction). On-us transactions are not routed through the EFT networks but instead are processed internally by the consumer’s financial institution. In both cases, the ATM transaction begins when the consumer inserts the ATM or debit card into the terminal, enters the personal identification number, selects the transaction to be performed—such as a cash withdrawal from the consumer’s checking account—and enters the amount of the transaction. For off-us transactions, the terminal sends this information to the sponsoring financial institution to identify the card-issuing financial institution, which determines the EFT network used to route the transaction. The EFT network passes the request for authorization to the consumer’s financial institution, which approves or denies the transaction based on the terms and conditions of the consumer’s account and the availability of funds. The approval or denial message is sent back to the ATM terminal, first through the EFT network and then through the sponsoring bank. If the transaction is authorized, the consumer receives the requested cash, and the transaction is posted to the consumer’s account, deducting the amount of money the consumer received at the ATM plus any fees assessed. Using the EFT network, the consumer’s financial institution pays the ATM owner the withdrawal amount plus any assessed surcharges. Figure 1 depicts the transaction flow among the parties involved in an off-us ATM transaction. Several fees can be paid by the consumer and the other participants in order to process an ATM transaction. Consumers may be charged two types of fees. First, the ATM owner may assess a surcharge fee on the consumer for conducting a transaction at the ATM.
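The approve-or-deny step described above can be sketched in a few lines. This is an illustrative simulation only; the function and field names are hypothetical and do not correspond to any real EFT network API, and the dollar amounts are invented for the example.

```python
# Illustrative sketch of the off-us authorization flow described above.
# Names and amounts are hypothetical; no real EFT network API is modeled.

def authorize_off_us_withdrawal(account, amount, surcharge):
    """ATM -> sponsoring bank -> EFT network -> card issuer, then back."""
    total = amount + surcharge
    # The card-issuing institution approves or denies based on available funds.
    if account["balance"] >= total:
        account["balance"] -= total  # withdrawal plus fees posted to the account
        return {"approved": True, "dispensed": amount}
    return {"approved": False, "dispensed": 0}

account = {"balance": 100.00}
result = authorize_off_us_withdrawal(account, 60.00, 3.00)
print(result["approved"], account["balance"])  # True 37.0
```

A second $60.00 request against the same account would be denied, since the remaining $37.00 no longer covers the withdrawal plus the surcharge.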
Federal regulations require ATM operators to provide notice that the surcharge fee will be imposed and disclose the amount of the fee on the ATM screen before the consumer commits to paying the fee. Surcharge fees also appear on the transaction receipt and again on the consumer’s account statement—sometimes combined with the cash withdrawal amount. Second, the consumer’s financial institution may assess a foreign ATM fee when the consumer uses an ATM owned by another ATM operator. The foreign ATM fee is not disclosed at the ATM. Rather, it is provided to consumers in information they receive when they open their account, in fee disclosures, and on their periodic statements when they incur the fee. The consumer’s financial institution and the ATM operator also pay certain fees to process an ATM transaction. Specifically, the interchange fee is set by the EFT networks and paid by the consumer’s financial institution to the ATM owner for the costs of placing and maintaining ATMs. The switch fee is assessed by the EFT networks on the consumer’s financial institution to pay for processing each of its network transactions. Finally, the acquiring fee is paid by the ATM owner to the EFT networks for use of the networks to conduct the ATM transaction. Table 1 provides a summary of the fees paid by consumers, financial institutions, and ATM operators during an ATM transaction. Financial institutions and independent ATM operators have different business models and, as a result, set ATM surcharge fees differently. Financial institutions operate ATMs as a convenience to their own account holders, who generally do not pay fees to use these ATMs. However, financial institutions do assess a surcharge fee when a transaction is conducted by nonaccount-holding consumers. Independent ATM operators charge surcharge fees to most customers, and in many cases operators work with merchants to determine those fees.
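As a rough illustration of the fee types just described, one off-us withdrawal can be tallied by payer. The amounts below are invented placeholders, not figures from this review, and are kept in integer cents to avoid floating-point rounding.

```python
# Hypothetical fee amounts, in cents, for a single off-us withdrawal.
fees = {
    "surcharge":   300,  # consumer -> ATM owner
    "foreign":     150,  # consumer -> own financial institution
    "interchange":  50,  # consumer's institution -> ATM owner
    "switch":        5,  # consumer's institution -> EFT network
    "acquiring":     5,  # ATM owner -> EFT network
}

consumer_pays = fees["surcharge"] + fees["foreign"]
issuer_pays = fees["interchange"] + fees["switch"]
atm_owner_nets = fees["surcharge"] + fees["interchange"] - fees["acquiring"]
print(consumer_pays, issuer_pays, atm_owner_nets)  # 450 55 345
```

The tally makes the two-sided structure visible: the consumer pays the surcharge and foreign fee, the card issuer pays interchange and switch fees, and the ATM owner collects the surcharge and interchange but remits the acquiring fee.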
According to industry estimates, there are approximately 420,000 ATMs currently operating in the United States, and financial institutions operate just under half of those machines, either in their own facilities or at off-site locations, such as shopping centers, drug stores, and grocery stores. Those financial institutions that responded to our survey—representing 81,833 ATMs—reported they placed 57 percent of their total ATM fleet at bank facilities, while 43 percent were located off site. Financial institutions typically own or operate their ATMs. In some cases, financial institutions partner with independent firms to operate branded ATMs that carry the financial institution’s logo and look and function as if they belong to the institution but are owned by an independent ATM operator. As previously discussed, under these arrangements, the account holders for that financial institution are allowed to use the branded ATMs without paying a fee. Financial institution representatives told us they view access to ATMs as a key service they provide to account holders. A representative from a large national bank said that its personal banking business is driven by customer convenience. The bank therefore views access to ATMs as an important service to its account holders and has invested in a large fleet of ATMs for customer use. Likewise, a community banker we interviewed said that they view their ATMs as a way of extending the bank’s hours for customers to receive cash and make deposits. Representatives from two large national banks said that they consider where their customer base is located when determining where to place an ATM. One large bank representative noted that the bulk of the bank’s ATM business is its own customers, so it invests in ATMs and places them in locations near the greatest numbers of account holders.
Financial institutions that have smaller fleets of ATMs, such as some credit unions and community banks, may offer their account holders access to ATMs by participating in a surcharge-free network. When a financial institution enrolls in a surcharge-free network, all ATMs in that network are available to their account holders surcharge free. Financial institutions generally do not charge their own account holders for transactions conducted within their own ATM network. When establishing surcharge fees charged to nonaccount-holding customers, the financial institutions we surveyed most frequently cited three factors that they consider, the first being competition—the fees being charged by nearby ATMs. Likewise, one bank representative we interviewed said that they do not want the fee to be so high that they turn away potential customers; instead, they want account holders from other financial institutions to use their ATMs and, based on that experience, open accounts at their bank. Similarly, one of the community bankers we spoke to said that they try to set their fee just below those of other ATMs in the area so that they can increase their own transaction volume. In contrast, a few industry representatives told us that fees are generally set higher in areas where there is more limited ATM competition, such as airports and amusement parks. The second most frequently cited factor was the cost of operating the ATM. However, several of the financial institution officials we interviewed noted that the surcharge fees do not cover the costs of operating their ATMs, and the institution takes a loss on the ATM to provide the service to its account holders. The third most frequently cited factor was anticipated usage, or transaction levels.
For example, representatives from two large banks noted that surcharge fees help ensure ATM availability for their account holders while also making the service available as a convenience for nonaccount-holding customers. Furthermore, the surcharge revenues at some locations can subsidize expensive or unprofitable ATM locations such as airports, colleges, and business districts. Independent ATM firms—those not part of a financial institution—own, operate, or service just over half of the nation’s ATMs in a variety of locations, such as gas stations and convenience stores, bars, restaurants, and small businesses, according to industry sources. The independent ATM industry is very diverse, with firms ranging in size from fewer than five ATMs to tens of thousands. According to information we have gathered, the two largest independent firms operate an estimated 47 percent of the independent ATM market. In addition to owning and operating their own ATMs, these independent firms offer a wide range of ATM-related services to merchants and other entities that own ATMs, such as monitoring and maintaining appropriate cash levels in terminals and processing transactions. There are four primary business models for independent ATM operators, shown in table 2. According to the ATM Industry Association, approximately 20 percent of the independent ATMs in the United States are owned by an independent ATM firm. The other 80 percent of independent ATMs are owned by merchants and retailers. In these situations, independent ATM firms provide varying levels of nonownership services and support to merchants, depending on the business model established. We found that the independent ATM firms included in our study had similarly diversified portfolios, where they both owned and operated ATMs while also providing services to merchants who owned the ATMs in their stores. 
We also found that, among the firms in our study, there was great variability in the percentage of ATMs owned by the firm versus the merchant. For example, one smaller independent firm we spoke with owned 79 percent of the approximately 500 ATMs in its fleet, while the remaining 21 percent were merchant owned. In contrast, the two independent firms that participated in our survey—representing approximately 66,000 ATMs—owned 2 percent of their ATMs, while merchants owned the other 98 percent. Independent ATM operators generally levy a surcharge on consumers, although there are exceptions. Unlike financial institutions, which have a relationship with consumers who are also account holders through which they can gain revenue from other account fees, independent ATM operators’ only relationship with the consumer is through their use of the ATM. Most independent ATM operators charge a surcharge fee to consumers for the convenience of accessing their account from a machine outside their bank’s ATM network. In addition, as previously discussed, those consumers may be charged a foreign fee by their own bank for using these independent ATMs (or ATMs run by financial institutions other than their own). However, some transactions at independent ATMs are surcharge free because the ATMs may be bank-branded or may be part of one or more surcharge-free networks. When placing, operating, and servicing ATMs in retail space, independent ATM firms establish contracts with the merchants that specify which party will set the surcharge fee and how, if at all, those and other fee revenues will be shared. For turnkey and merchant-assisted ATMs, the surcharge fees are generally set by the independent ATM firms because they own the machines. For ATMs that are owned by merchants or retailers in the United States, the fees are set by either the merchant or a combination of the merchant and the independent ATM firm.
A representative from one independent firm said that if the ATM is owned by the merchant, the firm is not involved in setting the surcharge fee, except to make suggestions to the merchant or to refuse to process the transaction if it determines the fee is exorbitant. Like financial institutions, when setting surcharge fees, independent operators typically consider fees at the nearby ATMs, the location, and operating costs. A representative from one independent ATM firm said that the market dictates the surcharge fee and he can only charge what his competitors are charging. The location of the ATM is also considered when setting the surcharge fee. Another independent operator said he considers the type of location where the ATM will be placed and the resulting demographic that will frequent that area. He evaluates what fee is competitive for the region or neighborhood and the type of location. For example, some ATMs have lower fees because they are placed in lower income areas. Another firm representative said that the surcharge fee in bars, nightclubs, and casinos is typically higher than the surcharge fee in a grocery store. The third factor is the cost of running the ATM terminal. An independent ATM operator we spoke with said that the firm establishes fees that, when combined with other revenues such as interchange fees, will provide sufficient revenue to cover the variable cost of processing transactions and the fixed cost of installing the ATM. Our review found that since 2007, surcharge fees assessed by financial institutions have generally increased. We also found that the percentage of financial institutions charging foreign fees and the amount ATM users pay in foreign fees has remained constant. However, consumers can obtain cash without paying ATM fees in a number of ways, such as using their own banks’ ATMs or requesting cash back at the point of sale. 
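The breakeven logic that operator described, surcharge plus interchange revenue covering the variable cost of processing and the fixed cost of installation, reduces to a one-line calculation. Every figure below is an assumed placeholder, not data from this review.

```python
import math

# Assumed placeholder figures, not data from the report.
fixed_monthly = 250.00         # rent, installation amortization, insurance
variable_per_txn = 0.20        # processing and cash-service cost per withdrawal
revenue_per_txn = 2.50 + 0.50  # surcharge plus interchange received per withdrawal

# Withdrawals needed each month before the ATM covers its costs:
breakeven = math.ceil(fixed_monthly / (revenue_per_txn - variable_per_txn))
print(breakeven)  # 90
```

Under these assumptions the terminal needs about 90 surcharged withdrawals a month, roughly three a day, which is consistent with why low-traffic or low-fee locations may not be worth serving without revenue sharing or branding payments.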
Our analysis shows that the prevalence and amount of ATM surcharge fees levied by financial institutions have generally increased since 2007, while foreign fees have generally remained constant in prevalence and amount, as seen in figure 2. We analyzed data obtained from a private vendor—based on annual surveys of hundreds of banks, thrifts, and credit unions on selected banking fees—and found that the percentage of financial institutions charging surcharge fees rose from an estimated 87 percent to 96 percent from 2007 through 2012. Of those institutions that charged a surcharge fee, the estimated average ATM surcharge fee increased from $1.75 in 2007 to $2.10 in 2012. The estimated median ATM surcharge fee rose from $1.56 in 2007 to $2.00 in 2012. Surcharge fees charged by financial institutions in our sample ranged from $0.28 to $5.52 in 2007, and the range was $0.45 to $5.00 in 2012. Meanwhile, foreign fees did not significantly change in prevalence and amount from 2007 to 2012. Our analysis of the data shows that the estimated percentage of financial institutions charging their customers a foreign fee between 2007 and 2012 has remained fairly constant at about 55 percent. In addition, our analysis shows that for institutions that charge a foreign fee, the estimated average fee did not significantly change between 2007 ($1.36) and 2012 ($1.42). The estimated median foreign fee was $1.09 in 2007 and $1.00 in 2012. Foreign fees charged by financial institutions in the sample ranged from $0.28 to $5.52 in 2007, and the range was $0.25 to $5.00 in 2012. We estimate that there were no statistically significant differences in 2012 in the prevalence of the surcharge fee based on the type and size of institution, or on geographic region or location—such as rural, urban, or suburban—in which the financial institution was located. The estimated average surcharge fee amount, for institutions that charged a fee, differed slightly by size and type of financial institution. 
In 2012, the estimated average surcharge fee for larger financial institutions was approximately $0.24 higher than the estimated average surcharge fee for smaller financial institutions, and banks’ estimated average surcharge fees were also $0.17 higher than the estimated average surcharge fees charged by credit unions. For example, in 2012, the estimated average surcharge fee for using a large financial institution’s ATM was $2.25, while the estimated average surcharge fee at a small financial institution’s ATM was $2.01. In contrast, there were no statistically significant differences in estimated average ATM fees in 2012 based on the type of location and geographic region in which the financial institution was located. See figure 3 for information on the estimated average surcharge fee amount based on several factors for 2012. We estimate that a higher percentage of large financial institutions charged a foreign fee in 2012 than small financial institutions. Specifically, an estimated 73 percent of large financial institutions charged a foreign fee in 2012 compared to an estimated 50 percent of small financial institutions. For those institutions charging the foreign fee, the estimated average fee amount was greater for large institutions and for banks. In 2012, large financial institutions had an estimated average foreign fee of $1.62, which is $0.26 more than the estimated average foreign fee charged by small financial institutions, which had an estimated average foreign fee of $1.36. Additionally, the average foreign fee was an estimated $0.23 higher at banks than credit unions in 2012. See figure 4 for more information on the estimated average amount of foreign ATM fees based on various factors for 2012. Historical or trend data for independent ATM operators’ fees are not available. However, we analyzed data from Informa Research Services on the fees charged by a judgmentally selected sample of 100 ATMs run by independent ATM operators in 2012. 
These data are not generalizable to the independent ATM population at large. Our analysis of the Informa data shows that the average surcharge fee for the 100 independent ATMs surveyed was $2.24 in 2012. The median surcharge fee for independent ATMs included in the sample was $2.00. The surcharge fee ranged from $1.50 to $3.00. However, some independent ATMs may have surcharge fees that are higher or lower than those in our sample. While we do not have historical data on independent ATM surcharges, representatives from a large independent ATM firm told us that their average surcharge fees rose from $1.77 in 2002 to $2.46 in 2011. Aggregate data are also not available on the prevalence of surcharge fees among independent ATM operators. However, data obtained by mystery shoppers from a sample of 100 judgmentally selected independent ATMs in the top 10 metropolitan statistical areas show that most of these ATMs charged a surcharge fee to the mystery shopper, but some transactions were conducted surcharge free. Specifically, in our sample of 100 independent ATMs, six mystery shoppers conducted a transaction for free using the selected ATM. In four of the six cases, the shoppers were able to conduct a transaction surcharge free since both the ATM and the shopper’s debit card displayed the logo of a surcharge-free network. In the other two cases where no surcharge was incurred, we sent a second mystery shopper to use the terminal, and the second shopper was charged a fee. We were unable to determine why the first mystery shoppers were not charged surcharge fees, but we did note that the ATMs had surcharge-free network logos on them. The independent ATM firms we surveyed reported similar results to the mystery shopping data. The two independent firms reported in our survey that out of 140,634,638 cash withdrawals at their ATMs in calendar year 2011, customers incurred a surcharge fee 97 percent of the time.
However, the percentage of transactions that are surcharge free may vary depending on the extent to which the ATM operator is involved in surcharge-free networks or branding agreements. For example, one large independent ATM operator estimates that more than half of the transactions that occur on its ATMs do not generate a surcharge fee due to either a surcharge-free network or branding agreements. To obtain cash without incurring fees, consumers can generally withdraw cash at a bank branch during banking hours or use ATMs in their bank’s network. Our analysis indicates that a majority of transactions at financial institution ATMs may occur in this way. Specifically, according to our survey data, approximately 92 percent of the 3.3 billion reported transactions in calendar year 2011 at the financial institutions that responded did not incur a surcharge fee. At midsize and large banks we surveyed, ATM cash withdrawals did not incur a surcharge fee about 85 percent of the time. Additionally, our survey results showed that cash withdrawals at credit unions were surcharge free 95 percent of the time. Further, some industry representatives told us that financial institutions’ ATMs typically have a higher volume of transactions than independent ATMs. While our survey results are not generalizable to the total population of ATM operators, they did reveal that per-ATM transaction levels were much higher at financial institution ATMs than at independent ATMs for operators responding to the survey. As previously discussed, bank representatives we spoke to said that transactions at their ATMs are primarily from their own account holders, who do not incur fees for the transaction, and a large EFT network estimated that 80 percent of ATM transactions are on-us transactions and do not incur a fee. Consumers also avoid paying foreign fees in on-us transactions, because they are using machines within their financial institution’s ATM network. 
However, not all consumers may have convenient access to their own financial institution’s ATMs to obtain cash. Some account holders may live in areas with limited access to a financial institution facility, or need cash at a time when they are unable to go to their own financial institution—for example, when attending a sporting event or while traveling. Additionally, some financial institutions participate in surcharge-free networks that allow their customers free access to ATMs outside their bank’s network of ATMs. In this way, a financial institution can expand the number and location of ATMs available to its customers. Three of the largest surcharge-free networks in the United States each offered more than 20,000 ATMs, and some customers whose financial institutions have enrolled in those networks can use those ATMs without incurring a surcharge fee. Four banks and eight credit unions in our survey reported that they participated in at least one surcharge-free network, expanding the number of surcharge-free ATMs available to their customers. For example, one credit union in the survey owned and operated 251 ATMs and enrolled in a surcharge-free network that gave its customers access to an additional 30,000 ATMs free of charge. However, ATMs in surcharge-free networks may not be available to all customers. One of the largest surcharge-free networks in the country states that 80 percent of the ATMs in its network are in metropolitan areas. Also, many financial institutions use branding agreements to expand their network of ATMs, which allows their consumers to withdraw funds from these ATMs without incurring fees. For example, the number of branded ATMs with financial institution logos increased from 11,900 in 2010 to 15,400 in 2011 for one large independent ATM operator, according to the operator’s annual reports. Among our surveyed financial institutions, branded ATMs accounted for approximately 16 percent of the total number of reported ATMs.
One small independent ATM operator estimated that 55 percent of customers who use its ATMs pay a fee, and that percentage has decreased over the past 5 years due to an increase in financial institution branding and access to surcharge-free networks. Some financial institutions offer to refund ATM fees to account holders when they use an ATM. One community banker we spoke to said the bank gives consumers rebates on ATM fees up to $20 each month and that this approach is more cost effective than owning and maintaining a fleet of ATMs. Finally, industry participants we spoke with said that consumers are increasingly obtaining cash when making debit card purchases, which also allows them to avoid fees. One community banker we spoke to said that the bank educates customers and encourages them to obtain cash at the point of sale so that they do not incur ATM fees and the bank can have a smaller ATM fleet. However, as previously discussed, these options may not be available to all consumers, and we do not know the extent to which consumers obtain cash at the point of sale or receive ATM fee refunds. ATM operators incur a variety of costs—including rent to place the machines in retail locations and security costs to keep the machines safe, among others—and the amount of these costs varies widely among operators. ATM operators report taking a number of steps to respond to changing operating costs, such as increasing surcharge fees and investing less money in ATMs. However, operators also anticipate that many of these costs will rise in the future. ATM operators incur a wide variety of costs in providing ATM services. 
Our survey of a judgmental sample of the 10 largest banks and credit unions, 10 randomly selected midsize banks (“financial institution” operators), and 4 large independent ATM firms (“independent” operators) collected information on the following cost categories, all of which—except bank sponsorship—are typically borne by both financial institution and independent ATM operators.

Rent. Financial institution operators pay rent for ATM facilities at locations not in an institution facility, and independent operators pay rent for retail or other locations, such as grocery stores or gas stations.

Hardware and software investments. Operators purchase, install, and upgrade ATM software and equipment, including the ATM terminals and physical security equipment, such as bolting devices that secure the machines to walls or the floor.

Cash services. Operators must ensure that ATMs are adequately stocked with cash and therefore spend time and resources monitoring transaction levels in order to accurately forecast future cash needs. Cash is delivered to the ATM via armored carrier. Independent operators also pay to access a supply of cash from a bank vault.

Maintenance and repairs. Maintenance includes cleaning the machinery, making routine repairs, and restocking supplies (such as receipt paper), as well as more significant repairs, which can incur higher costs for tools, parts, and labor.

Physical security and insurance. Physical security costs are those incurred to keep the ATM and the surrounding lobby or area safe and include items such as lighting and cameras. Insurance costs include policies that cover the cash in the machines.

Infrastructure and processing. Operators need to install and maintain the telecommunications infrastructure necessary for ATM operations and transaction processing. Processing costs include fees associated with transaction processing (switch fees) and costs associated with interbank settlement and account posting.

Network fees. ATM operators pay membership or license fees to the EFT networks in order to route transactions on the networks. Network fees also include any fees the ATM operator pays for membership in one or more surcharge-free networks.

Taxes and licenses. In addition to property and sales taxes, ATM operators are sometimes required to pay for state and local licenses, on either a one-time or recurring basis.

Regulatory and compliance costs. Regulatory and compliance costs include paying for the required ATM signage alerting customers to any fees, as well as the costs of regulatory inspections and reviews.

Fraud prevention and fraud losses. Fraud prevention costs are those related to activities aimed at detecting and preventing ATM fraud. Fraud losses are those incurred by the ATM operator when fraud occurs, including cash theft and ATM robberies.

Bank sponsorship. Bank sponsorship is a cost borne only by independent ATM operators, which, as previously discussed, must have a financial institution that sponsors their membership in the EFT networks.

ATM operators we surveyed and spoke with indicated that key drivers of operating costs varied. For example, in our survey, large banks reported much higher costs for some categories, as a percentage of total costs, than did midsize banks and credit unions. Likewise, in some cost categories there was a wide range of reported per-ATM costs, while in other categories, the per-ATM costs were fairly consistent. None of the data we collected on costs are generalizable to either the financial institution or independent ATM operator populations at large, although some costs, such as hardware and software investments, were mentioned as key drivers by many of the operators included in our work.
Our analysis of the data collected from a sample of 30 institutions from three financial institution types—large bank, medium bank, and credit union—revealed some key differences in the biggest cost drivers for these ATM operators. The large banks’ costs for hardware and software investments were much higher as a percentage of their total costs than for midsize banks and credit unions in our survey. As shown in figure 5, the majority (63 percent) of the larger banks’ costs were for hardware and software investments and upgrades. The second most prominent cost for the large banks was rent (15 percent of overall costs), followed by maintenance and repair (9 percent of overall costs) and cash services (6 percent of overall costs). In contrast, the hardware and software investments were a much smaller percentage of reported total costs for the midsize banks (18 percent) and credit unions (23 percent) in our survey. In addition, midsize banks and credit unions that participated in our survey had much more even proportions of spending across the various cost categories. Midsize banks and credit unions also reported that a greater percentage of total costs were dedicated to infrastructure and processing (17 and 23 percent respectively), compared to the large banks (2 percent). In some cost categories institutions reported a wide range of per-ATM costs across the three financial institution types, while in other categories, the per-ATM costs were fairly consistent. For example, the financial institutions we surveyed reported that per-ATM costs in calendar year 2011 for rent and hardware and software were much higher for the large banks than for the credit unions and midsize banks. The average rent cost, on a per-ATM basis, was $27,173 for large banks, $4,935 for midsize banks, and $4,032 for credit unions. 
We saw similar results for hardware and software investments, with large banks reporting, on average, $28,607 in costs per ATM, while midsize banks’ and credit unions’ average costs were, respectively, $3,642 and $7,422. These hardware and software costs include both capitalized and noncapitalized items, although not all institutions reported noncapitalized costs. In contrast, the costs for maintenance and repairs were much closer in range for the three financial institution types. The average maintenance and repair cost, on a per-ATM basis, was $5,444 for large banks, $3,485 for midsize banks, and $5,827 for credit unions. In some cost categories—cash services, and infrastructure and processing—credit unions reported higher costs than their bank counterparts. The average cost for cash services, on a per-ATM basis, was $6,847 for credit unions, $3,765 for midsize banks, and $3,495 for large banks. Similarly, the average cost for infrastructure and processing, on a per-ATM basis, was $7,958 for credit unions, $3,494 for midsize banks, and $1,191 for large banks. In contrast, credit unions’ costs for network fees were significantly lower than those of the banks, with an average cost, on a per-ATM basis, of $150 for credit unions, $331 for large banks, and $340 for midsize banks. For more information on average per-ATM costs across the three types of operators in all cost categories, see appendix II. Community bankers we interviewed reported that their leading costs were investments in hardware and software, largely to upgrade older machines for compliance with Americans with Disabilities Act (ADA) requirements, or to purchase new machines altogether. Specifically, one representative told us his bank spent more than $52,000 to upgrade the software in the bank’s 26 ATMs, a cost of approximately $2,000 per machine. 
The community bankers noted that while the newer machines are more expensive, they offer the customer more functionality and value due to their enhanced capabilities, such as being able to scan and deposit checks. The other prevalent costs for the community bankers were fraud (prevention efforts and losses), repairs, processing, and network fees. Cost data are more limited for independent ATM operators, making it difficult to identify key costs. One large independent ATM firm that we surveyed reported that its top five calendar year 2011 per-ATM costs (as a percentage of all costs) were (1) rent, (2) infrastructure and processing, (3) cash services, (4) hardware and software investments, and (5) bank sponsorship costs. Meanwhile, the smaller firm that we surveyed reported that its top five calendar year 2011 costs (as a percentage of total costs) were (1) hardware and software investments, (2) maintenance and repairs, (3) cash services, (4) infrastructure and processing, and (5) rent. In addition to the cost information gathered through the survey, we interviewed two small independent ATM firms, and—like the two firms that participated in the survey—they told us their key costs included cash services and processing, among others. In addition, we interviewed officials from another large independent firm who provided us with cost data that indicated that the firm’s top costs for calendar year 2011 were rent, cash services, and maintenance and repair. Most operators included in our study—financial institutions and independent firms—reported that overall ATM costs have increased over the past 5 years and they expect that they will continue to do so. Ten financial institutions we surveyed reported that per-ATM costs had significantly increased in the past 5 years, ten reported costs had slightly increased, and five reported costs remained about the same. 
The cost drivers most frequently cited by survey respondents were upgrading to more versatile ATMs—with functions such as check-imaging—complying with ADA requirements, and upgrading software. Other cost drivers the financial institution representatives reported were increasing fraud prevention efforts, adding ATMs to their fleets, and paying more in network fees. As previously discussed, the community bankers we interviewed also indicated that ATM upgrades to comply with ADA requirements were a leading cost driver in calendar year 2011, and some said network costs had increased as well over the past 5 years. The two independent operators in our survey reported their costs had increased slightly over the past 5 years. One operator noted higher manufacturing costs as a driver and the other reported an increase in fuel prices (which increases the costs to transport cash to the terminals via armored carrier). The two independent operators we interviewed also said their overall costs have risen in the past 5 years, for armored carrier services and network fees. They also reported increased rent costs, among others. Some of the operators noted that the costs associated with certain actions could yield future savings. For example, while the cost of purchasing the newer machines has increased, those ATMs have improved technology, so service costs have declined. Similarly, one operator reported that with the expanded function of deposit-imaging, the newer ATMs have reduced paper costs. Many of the operators told us that during that same 5-year period in which costs have increased, ATM revenues have decreased. As previously discussed, ATM operators collect revenues from fees charged during some ATM transactions. In our survey, 14 financial institution operators reported that their per-ATM revenues had decreased—six said they had done so significantly, and eight said slightly. 
In contrast, six operators reported that revenues remained about the same, and four reported a slight increase during that same period. The most frequently cited reason the operators gave for the decreased revenues was declining transaction volumes—both overall and among non-account holders, who typically would generate income for the ATM operator through surcharge fees. Some financial institution operators stated that transaction levels are down due to the greater availability of surcharge-free networks and consumers obtaining cash at point-of-sale transactions, among other things. Several of the community bankers we spoke with expressed similar views that ATM revenues, along with transaction levels, have decreased generally over the past 5 years. The two independent operators in our survey reported their revenues had significantly decreased over the past 5 years, and cited reductions in their interchange fee revenues as a primary factor. The two smaller independent firms we interviewed reported similar issues with reduced interchange revenues. Looking forward, many of the operators indicated they expect that costs would continue to rise and that revenues would remain flat or decline in the future. For example, our survey results showed that 18 out of 25 financial institution operators anticipate ATM costs will increase in the future. The main drivers cited for these future increases are further investments in new and enhanced ATM terminals or upgrades that will be needed in order to comply with new or enhanced industrywide data security standards. The community bankers and one independent firm we interviewed expressed similar views that ATM costs will continue to rise. Similarly, of the 10 financial operators that addressed future revenue trends in our survey, seven indicated they anticipate a decline, due primarily to fewer transactions. 
Likewise, one of the two independent firms that participated in our survey said it anticipates future revenues will be flat. ATM operators reported taking various steps to adapt to the rise in costs and decline in revenues. For example, two credit unions reported in our survey that they recently raised their surcharge fee amounts in response to rising costs. Another credit union reported it was being more prudent in placing ATMs in new locations by first performing extensive evaluation of traffic flows, surrounding competition, and associated costs before committing to a new location. One of the large banks noted that because average revenues per ATM will likely continue to decline as consumers continue to avoid incurring surcharges, the bank would focus more on serving its own customers with ATM placements. Finally, some of the community bankers we interviewed said that as ATM transaction levels decline, so will their investments in ATMs. The owner of one independent ATM firm we interviewed told us that independent operators need to seek opportunities to diversify their portfolio of ATM services, such as establishing or expanding branding partnerships in order to increase revenues. He also noted that maximizing the number of transactions on a per-ATM basis will be important. One large independent firm reported in the survey that as interchange fees and resulting revenues decrease, merchants—in those cases where they set fees—will increase the surcharge amounts to make up the difference. However, as we previously discussed, there are many factors taken into account when setting fees, and several of the operators told us that setting fees too high, above neighboring competitors, could discourage consumers from using their ATMs. We provided a draft of this report to CFPB, FDIC, the Federal Reserve, NCUA, and OCC for their review and comment. CFPB, the Federal Reserve, NCUA, and OCC submitted technical comments, which were incorporated where appropriate. 
We are sending copies of this report to CFPB, FDIC, the Federal Reserve, NCUA, and OCC, interested congressional committees, members, and others. In addition, this report will be available at no charge on our website at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. This report reviews the fees paid by consumers when conducting automated teller machine (ATM) transactions, as well as the costs borne by ATM operators in providing those services. Specifically, the objectives of this report are to discuss (1) the business models for ATM operators— financial institution and independent firms—and how they set ATM fees, (2) the amounts of fees that consumers incur to conduct ATM transactions and how these fees changed over time, and (3) the reported costs of ATM operations for financial institution and independent ATM operators and how costs and revenues are expected to change. To understand the history of ATMs, how ATM transactions are processed, and requirements for ATM operators, we reviewed prior GAO, regulatory, and industry reports on ATM fees and operations, and we interviewed relevant officials from the Board of Governors of the Federal Reserve System (the Federal Reserve), the Federal Deposit Insurance Corporation (FDIC), the Office of the Comptroller of the Currency (OCC), the National Credit Union Administration (NCUA), and the Bureau of Consumer Financial Protection, commonly known as CFPB. 
In addition, we interviewed officials from five associations: American Bankers Association (ABA) and the Independent Community Bankers of America (ICBA), which represent various sectors of the banking industry; the National ATM Council and the ATM Industry Association (ATMIA), which represent independent ATM operators; and U.S. PIRG, a federation of independent, state-based, citizen-funded organizations that advocate for consumer interests. We also interviewed representatives from two national banks, a credit union, one large independent ATM firm, three electronic funds transfer (EFT) networks, and a financial institution that sponsors independent ATM operators. In order to gather information on ATM costs and operations for smaller financial institutions and independent ATM firms, we conducted two group interviews with representatives from nine community banks and two interviews with smaller independent ATM firms. We identified these firms with assistance from ABA, ICBA, and ATMIA. In addition, we relied on information provided to us by ATMIA that described the composition of the independent operator market—specifically, the percentage of ATMs owned by operators versus those operated by merchants. This information could not be corroborated because no comparable data were available, either publicly or from the financial regulators and other industry sources we asked. However, we determined that our use of the information from ATMIA was appropriate because we used it only to describe the independent ATM market and provide context. 
To discuss the operations and costs for ATM operators, in addition to the interviews discussed above, we surveyed a nonprobability sample of financial institutions and independent firms that operate ATMs to collect information on ATM operations and business models, ATM transaction levels for calendar year 2011, ATM costs for calendar year 2011 in 12 specific cost categories, overall ATM cost and revenue trends for the past 5 years and in the future, and factors ATM operators consider when setting ATM fees. In order to gain cost and operational information from ATM operators of various sizes, we deployed a survey to the 10 largest banks and 10 largest credit unions (by asset size), 10 randomly selected midsize banks (with assets between $10 billion and $50 billion), and 4 large independent ATM firms (with 10,000 or more ATMs in their portfolios). To select the banks, we used data from SNL Financial—a private financial database that contains publicly filed regulatory and financial reports. We eliminated those that did not offer personal checking account services, as well as any online banks, since they generally do not maintain substantial numbers of ATMs. To select the credit unions, we obtained a list of the largest credit unions, by asset size, from NCUA. Because none of the regulators and business associations we spoke with were able to provide data on the total population of independent ATM operators, and no data are publicly available, we relied on estimates provided to us by one of the largest independent operators as to the size and geographic locations of the independent firms in the industry. We used that list to select the firms and were able to verify the information provided only in the cases where the firm responded to the survey. For the survey questionnaire, we developed 12 categories (11 applied to all operators, and 1 was a cost incurred only by independent ATM operators) in order to capture information on a broad range of ATM operational costs. 
After we drafted our initial cost categories, we asked for comments from knowledgeable officials at the Federal Reserve, CFPB, ABA, and a consulting firm that works extensively with the independent ATM industry. We conducted six pretests to verify that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on respondents, (4) the information could feasibly be obtained, and (5) the survey was comprehensive and unbiased. We chose the six pretest institutions to include various sizes of ATM operators: two large banks, one large credit union, one midsize bank, and one large and one small independent ATM firm. We conducted the pretests over the telephone. We made changes to the content and format of the questionnaire after each of the six pretests, based on the feedback we received. For additional quality control, an independent evaluator within GAO also reviewed a draft of the questionnaire prior to its administration. Furthermore, we determined—based on the pretest with the smaller independent ATM firm—that the questionnaire would be overly burdensome for smaller firms to complete, potentially leading to minimal participation. For this reason, we limited our independent ATM firm sample population to firms with 10,000 or more ATMs. We sent the questionnaire by e-mail in an attached PDF form that respondents could return electronically after marking check-boxes or entering responses into open answer boxes. Alternatively, respondents could return the questionnaire by mail after printing the completed form. Through e-mails and phone calls in advance of the questionnaire, we determined the best contact at each financial institution or independent firm. We e-mailed the questionnaire with a cover letter to financial institutions between July 31 and August 1, 2012, and independent ATM firms between September 5 and September 11, 2012. 
Three weeks later, we sent a reminder e-mail to everyone who had not responded. We telephoned all respondents who had not returned the questionnaire after 4 weeks and asked them to participate. Completed questionnaires were accepted until September 28, 2012, for financial institutions and October 31, 2012, for the independent ATM firms. Questionnaires were completed by 9 out of 10 large banks, 9 out of 10 credit unions, 8 out of 10 midsize banks, and 2 out of 4 independent ATM firms. However, the number of respondents varied by question. Specifically, three large banks and one credit union completed sections of the questionnaire on ATM transaction levels and overall cost and revenue trends, but did not submit dollar amounts for the cost categories. As previously discussed, we made multiple contacts by telephone and e-mail to nonresponding institutions, but one large bank, one credit union, two midsize banks, and two of the independent ATM firms declined to participate. The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question, sources of information available to respondents, or data analysis can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, and in collecting and analyzing the data, to minimize such nonsampling error. Almost all responses from the PDF forms were directly read into a data file, and two analysts independently verified that all information provided in the forms was read in correctly. For the two forms that could not be read into the file, one analyst keypunched the responses and another verified the entries. All data analysis programs were independently verified for accuracy. We were not able to independently verify the cost information submitted by survey respondents. 
However, during the pretests and in the survey questionnaires we asked the respondents to tell us what sources they would or did use in calculating the costs they reported. Commonly cited data sources for the costs included internal accounting reports and billing statements from external third parties, such as processors. Based on the information provided on the cost data sources and follow-up calls with survey respondents, we determined the data they reported were sufficiently reliable for our purposes. Using the data provided by the survey respondents, we calculated the number and type of ATM transactions in calendar year 2011, the percentage of total costs represented by each cost category, and the average per-ATM costs for each category. Due to the sensitive and possibly proprietary nature of the information we collected with the survey, we aggregated the cost data at a high level and presented it in a way that prevents individual organizations from being identified. For the four questions that asked about past and future cost and revenue trends, as well as the factors the operators take into account when setting fees, we performed a content analysis. Specifically, we analyzed the responses for each question and then grouped them into like categories. A second evaluator reviewed the categories to ensure that we were consistent in our coding. In any instance where the second reviewer disagreed with a categorization, team members met to discuss the categories and reached consensus on the final category assignment for each response. The numbers of responses in each content category were then summarized and tallied. For more detailed information on the survey results, see appendix II. 
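The cost calculations described above can be sketched roughly as follows. This is a hypothetical illustration with invented figures, not the survey data, and averaging each respondent's per-ATM cost is an assumed approach, since the report does not spell out the exact formula used:

```python
# Hypothetical illustration (invented figures, not GAO's survey data) of the
# calculations described above: each cost category's share of total reported
# costs, and the average per-ATM cost for each category.

# Per-respondent survey data: (category total cost in dollars, ATMs operated)
responses = {
    "rent": [(250_000, 50), (40_000, 10)],
    "maintenance_and_repair": [(120_000, 50), (35_000, 10)],
    "cash_services": [(90_000, 50), (30_000, 10)],
}

# Each category's percentage of all costs reported across respondents
grand_total = sum(cost for items in responses.values() for cost, _ in items)
category_share = {
    cat: 100 * sum(cost for cost, _ in items) / grand_total
    for cat, items in responses.items()
}

# Average per-ATM cost: divide each respondent's category cost by its ATM
# count, then average across respondents (an assumed averaging approach)
per_atm_average = {
    cat: sum(cost / atms for cost, atms in items) / len(items)
    for cat, items in responses.items()
}
```

With these invented figures, the rent category's average per-ATM cost works out to the mean of $5,000 and $4,000 per ATM, and the category shares necessarily sum to 100 percent.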
To report on the amounts of fees that consumers pay to conduct transactions at financial institution ATMs, and how these fees changed over time, we purchased and analyzed data on surcharge and foreign ATM fees charged by banks and credit unions from 2007 through 2012 from Moebs $ervices, Inc. (Moebs), a market research firm that specializes in the financial services industry. Moebs collected its data through telephone surveys with financial service personnel at each sampled institution. In the surveys, callers used a “mystery shopping” approach and requested rates and fees while posing as potential customers. The statistical design of the survey for each year consisted of a stratified random sample by (1) institution type, (2) institution size, and (3) regions of the country defined by metropolitan statistical area and state. The surveys were completed in June for each of the years we requested, except for 2010, when the survey was completed in July. Table 3 shows the number of financial institutions for which we obtained data. Using the Moebs data, we computed weighted estimates and 95 percent confidence intervals of the percentage of institutions charging surcharge and foreign fees and weighted averages and medians of these fees. All percentage estimates presented in this report have a margin of error of +/- 5 percentage points or fewer, and all average and median estimates have a relative margin of error of +/-5 percent or less, unless otherwise noted. All differences between estimated values identified in this report are statistically significant at the 95 percent confidence level (p-value <= 0.05), unless otherwise noted. We also examined the differences between the estimated prevalence and average fees for type and size of financial institution, as well as geographic region and type of location of the financial institution separately. 
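The weighted estimation just described can be sketched as follows. The data and weights are illustrative assumptions, and the simple confidence interval formula shown is design-naive; the actual estimates came from the stratified Moebs sample, which calls for stratum-level variance formulas. The CPI figures are approximate CPI-U annual averages, included only to illustrate the report's inflation adjustment to 2012 dollars:

```python
import math

# Illustrative sketch (hypothetical institutions and weights, not the Moebs
# survey data): weighted percentage of institutions charging a surcharge fee,
# a 95 percent confidence interval, and a weighted average fee.

# (sampling_weight, charges_fee, fee_amount) per sampled institution; real
# weights would come from the stratified design (type, size, region).
sample = [
    (120.0, True, 2.50),
    (80.0, True, 3.00),
    (150.0, False, 0.0),
    (100.0, True, 2.75),
    (50.0, False, 0.0),
]

total_w = sum(w for w, _, _ in sample)
p_hat = sum(w for w, charges, _ in sample if charges) / total_w

# Average fee among institutions that charge one (non-chargers excluded,
# as in the report's average-fee estimates)
chargers = [(w, fee) for w, charges, fee in sample if charges]
avg_fee = sum(w * fee for w, fee in chargers) / sum(w for w, _ in chargers)

# Design-naive 95 percent confidence interval for the proportion; a real
# stratified design would use stratum-level variance formulas instead.
n = len(sample)
se = math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Inflation adjustment to 2012 dollars using CPI calendar year values
# (approximate CPI-U annual averages, for illustration only)
cpi = {2007: 207.3, 2012: 229.6}
fee_2007_in_2012_dollars = 3.00 * cpi[2012] / cpi[2007]
```

With these assumed weights, 300 of 500 weight units charge a fee, giving an estimated prevalence of 60 percent, and the nominal 2007 fee of $3.00 corresponds to roughly $3.32 in 2012 dollars.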
We did not conduct a multivariate analysis using all of these factors, control for all factors at once, or control for additional factors in our analysis. To evaluate trends in ATM fees, we adjusted the numbers for inflation to remove the effect of changes in prices. The inflation-adjusted estimates used a base year of 2012 and Consumer Price Index calendar year values as the deflator. We reviewed interviews and analysis from our previous work on bank fees to understand Moebs’ methodology for collecting the data and ensuring its integrity. In addition, we conducted reasonableness checks on the data we received and identified any missing, erroneous, or outlying data. We also worked with Moebs representatives to ensure our analysis of their data was correct. We determined that the Moebs data were reliable for the purposes of this report. Since data on ATM fees charged by independent operators were not available, we engaged the services of another market research firm— Informa Research Services (Informa)—to conduct “mystery shops” at 100 judgmentally selected independent ATMs. We selected 10 ATMs in each of the top 10 metropolitan statistical areas. In order to ensure we captured fee information from a wide variety of locations frequented by consumers on a regular basis, we directed Informa to choose ATM locations that covered the following types of stores: drug stores, grocery stores, gas stations/convenience stores, and liquor stores. We excluded ATMs at airports or casinos—locations where most consumers would not go on a regular basis and for which, according to market research, there is typically a higher fee. We also excluded ATMs at supermarket chains that were likely to have a bank branch or an ATM operated by a bank or credit union on the premises because the focus of this part of our study was ATM fees charged by independent operators. 
Prior to making the final selections, Informa contacted the locations and verified that the ATMs were nonbank operated and that they were in working order. Locations having bank-operated or branded ATMs or machines that were not working were replaced with locations having independent and functioning ATMs in the same neighborhood, or as close as possible. The mystery shop process involved having the shoppers go to each selected ATM and use their own debit card to conduct a transaction. The shopper documented (1) the surcharge fee amount that was posted on the ATM, (2) the surcharge fee amount that appeared on the screen after the transaction was begun, and (3) the surcharge fee amount printed on his or her receipt. The mystery shoppers entered this information into an online database. In addition, the shoppers took pictures of the ATM, screen, and their receipt, which Informa staff used to verify that shoppers correctly recorded the fee information and that the correct images were attached to the correct responses. Finally, the receipts were double checked against the location addresses to ensure that the shoppers visited the correct ATM. We then analyzed the data we obtained from Informa and computed the average surcharge fees charged to mystery shoppers for ATMs included in the sample. These data indicate what independent ATM fees were on a particular day in 2012 at those 100 ATMs and are not generalizable to the population of independent ATMs in the United States. We reviewed documentation submitted by Informa to understand their methodology for collecting the data and ensuring their integrity. We conducted reasonableness checks on the data we received and identified 10 mystery shoppers who did not report a fee printed on their ATM receipt. 
We instructed Informa to conduct follow-up on these cases, which included checking to see if the mystery shopper used a card that was part of a surcharge-free network, and in certain cases, to send an additional mystery shopper to the ATM. We also worked with Informa representatives to ensure our analysis of their data was correct. We determined that Informa’s data were reliable for the purposes of this report. We conducted this performance audit from November 2011 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix presents selected results from GAO’s survey on ATM operator costs and operations. We surveyed financial institution and independent ATM operators to collect information on their business operations and models, costs, and transaction levels for calendar year 2011, and what factors they take into account when setting fees. The survey was deployed to the 10 largest banks and credit unions (by asset size) and 10 randomly selected midsize banks (with assets between $10 billion and $50 billion). We designed and deployed a separate survey to four independent ATM operators that operate 10,000 or more ATMs, and we received responses from two. We were not able to independently verify the cost information submitted by survey respondents. However, we asked the respondents to tell us what sources they used in calculating the costs they reported. They relied on sources such as internal accounting reports and third-party bills. Based on the source information provided and follow-up calls with survey respondents, we determined the data they reported were sufficiently reliable for our purposes. 
None of the costs or operations data we collected are generalizable to ATM operators at large. Table 4 summarizes the number and size of financial institution ATM operators that participated in our survey, as well as the number of ATMs they were operating as of December 31, 2011. We collected similar information from independent ATM operators. In addition to owning and operating their own ATMs, independent ATM firms offer a wide range of ATM-related services to merchants and other entities that own ATMs, such as monitoring and maintaining appropriate cash levels in terminals and processing transactions. There are four primary business models for independent ATM operators; therefore our survey asked for information on the number of ATMs for each model. In both the “turnkey” and “merchant-assisted” business models, the independent operator owns the ATM. In the turnkey model, the operator is responsible for most aspects of the ATM’s operations, while the merchant is responsible only for providing a place to locate the ATM and the electricity to operate it. The merchant-assisted model is similar to turnkey, but the merchant provides and loads cash into the machines, as well as provides basic maintenance. In the “merchant-owned and loaded” and “merchant cash-assisted” models, the merchant owns the ATM and is responsible for many of the operations. However, in the merchant-owned and loaded model, the merchant manages and loads cash into the ATM, while in the merchant cash-assisted model, the independent operator handles those tasks. Table 5 summarizes the number of independent ATM operators that participated in our survey, as well as the number of ATMs they were operating under each business model as of December 31, 2011. Tables 6 through 10 summarize the reported number and types of calendar year 2011 ATM transactions for the financial institutions and independent ATM operators that participated in our survey. 
Table 11 shows the average reported per-ATM cost for financial institutions for each of the cost categories in our survey questionnaire. We are not able to present similar results from the independent operator survey since only two out of four firms responded to our questionnaire. As shown below, among the data reported by the financial institutions, there was a high level of variability across the financial institution types. For example, there are several instances in which the per-ATM average cost for one financial institution type is much higher or lower in a given category than the average of the other financial institution types. There was also variability in the number of responses for each cost category, also shown in table 11. Finally, due to the small size and nature of the sample, these results are not generalizable to the larger population of U.S. financial institutions. The descriptions of the cost categories listed below are reproduced verbatim from the survey questionnaire, and all data reported are for calendar year 2011. In addition to the analysis of estimated average ATM fees and their prevalence for 2012 that we presented in the report, we also conducted this analysis for 2007 through 2011. This appendix shows the estimated prevalence and average amounts of ATM fees based on four factors discussed in the report: type, size, location, and geographic region of the financial institution. We analyzed data from Moebs $ervices, Inc. (Moebs), a market research firm specializing in financial services data, to assess ATM fees charged by financial institution ATM operators. Moebs provided data gathered through telephone surveys for each of the years 2007 through 2012, based on statistically representative samples of financial institutions. See appendix I for more detailed information on the characteristics of the data. 
We examined the differences between the estimated prevalence and average fees for type and size of financial institution, as well as geographic region and type of location of the financial institution, separately. We did not conduct a multivariate analysis using all of these factors or control for any additional factors in our analysis. Dollar amounts for ATM surcharge and foreign fees in this appendix are in 2012 dollars, calculated using the Consumer Price Index calendar year values. We analyzed the prevalence of charging a surcharge and foreign fee, and then we excluded financial institutions that did not charge a fee from our calculation of the average fees. We computed weighted estimates and 95 percent confidence intervals of the percentage of institutions charging surcharge and foreign fees and weighted averages of these fees. We evaluated two types of financial institutions: banks and credit unions. Tables 12 through 15 show the variation in estimated prevalence of surcharge and foreign fees and estimated average surcharge and foreign fees for banks and credit unions from 2007 through 2012. We evaluated three sizes of financial institutions: small financial institutions with assets less than $10 million, medium financial institutions with assets between $10 million and $999 million, and large financial institutions with assets of $1 billion and more. Tables 16 through 19 show the estimated prevalence of surcharge and foreign fees and estimated average surcharge and foreign fees for small, medium, and large financial institutions from 2007 through 2012. We evaluated four locations of financial institutions: large city, rural, small city, and suburban. Tables 20 through 23 show the estimated prevalence of surcharge and foreign fees and estimated average surcharge and foreign fees for financial institutions located in large city, rural, small city, and suburban locations from 2007 through 2012. 
We evaluated four geographic regions of financial institutions: East, Midwest, South, and West. Tables 24 through 27 show the estimated prevalence of surcharge and foreign fees and estimated average surcharge and foreign fees for financial institutions located in the East, Midwest, South, and West from 2007 through 2012.

In addition to the contact named above, Paul Schmidt (Assistant Director), Jim Ashley, Bethany M. Benitez, Katie Boggs, John Karikari, Jill Lacey, Kristeen McLain, Marc Molino, Christine San, Jennifer Schwartz, and Andrew Stavisky made significant contributions to this report.
Since the 1960s, consumers have increasingly used ATMs to easily access their accounts and conduct transactions such as cash withdrawals. Consumers may incur fees to use ATMs, such as a “surcharge” fee, which is paid to the ATM operator for transactions conducted at ATMs outside their financial institution’s network. In 2008, GAO reported that ATM surcharge fees had increased since 2000. GAO was asked to review issues related to continued increases in these fees. This report discusses (1) the business models for ATM operators and how they set ATM fees, (2) the amounts of fees that consumers incur to conduct ATM transactions and how these fees have changed over time, and (3) the reported costs of ATM operations for ATM operators and how these costs and revenues are expected to change. For this work, GAO surveyed a nongeneralizable sample of 30 financial institutions and 4 independent ATM operators to collect information on their ATM operations and costs in calendar year 2011. In addition, GAO analyzed two types of ATM fee data obtained from firms specializing in the financial services industry: (1) data on fees charged by financial institutions from 2007 to 2012 that are generalizable to all financial institutions in the United States, and (2) nongeneralizable data on fees charged by independent ATM operators that were collected by “mystery shoppers” at 100 judgmentally selected independent ATMs in 2012. GAO also interviewed industry representatives and federal regulators to understand ATM operations and requirements. Automated teller machine (ATM) operators include financial institutions--banks and credit unions--as well as independent firms. Industry representatives GAO spoke with estimate there are approximately 420,000 ATMs in the United States.
They estimate that financial institutions operate and set the fees for about half of the market, and independent operators work together with merchants to operate the remainder and to determine the fees incurred by consumers. ATM operators have differing business models that affect the way they set ATM fees for consumers. Financial institutions operate ATMs as a convenience to their own account holders, who generally do not pay fees to use these ATMs, while non-account-holding customers do. At independent ATMs, most consumers incur a surcharge fee, although there are some exceptions, such as when the ATM is part of a surcharge-free ATM network. GAO estimates that the prevalence and amount of ATM surcharge fees charged by financial institutions have increased since 2007, and that the estimated average surcharge fee for financial institutions that charged a fee increased from $1.75 in 2007 to $2.10 in 2012, in 2012 dollars. In 2012, surcharge fees charged by financial institutions ranged from $0.45 to $5.00. GAO's analysis of a nongeneralizable sample of 100 ATMs run by independent operators found that the average surcharge fee was $2.24 and ranged from $1.50 to $3.00 in 2012. However, some independent ATMs may have surcharge fees that are higher or lower than those in GAO's sample. In contrast, GAO estimates that the foreign fee--the fee assessed by financial institutions for using an ATM outside the institution's network--generally stayed constant in dollar amount over this period. Consumers have many ways to obtain cash without incurring fees, such as using ATMs within their financial institution's network. Additionally, some financial institutions participate in surcharge-free networks that allow their customers free access to ATMs outside their institution's ATM network. These networks can greatly expand the number and location of ATMs available to consumers free of charge. 
GAO's analysis of the ATM cost data reported by a nongeneralizable sample of financial institutions it surveyed revealed some differences in the biggest cost drivers for ATM operations. For example, large banks' reported costs for hardware and software investments were higher as a percentage of their reported total ATM costs than for the midsize banks and credit unions. Key cost drivers reported by the nongeneralizable sample of independent ATM operators varied, but commonly reported costs were rent, infrastructure, and transaction processing. In addition, most of the surveyed ATM operators reported that overall per-ATM costs have increased over the past 5 years, while per-ATM revenues have declined. Many of the operators GAO contacted believe that ATM operation costs will continue to rise in the future and that revenues will be flat or decline.
Studies published by the Institute of Medicine and other organizations have indicated that fragmented, disorganized, and inaccessible clinical information adversely affects the quality of health care and compromises patient safety. In addition, long-standing problems with medical errors and inefficiencies increase costs for health care delivery in the United States. With health care spending in 2004 reaching almost $1.9 trillion, or 16 percent of the gross domestic product, concerns about the costs of health care continue. As we reported last year, many policy makers, industry experts, and medical practitioners contend that the U.S. health care system is in a crisis. Health IT provides a promising solution to help improve patient safety and reduce inefficiencies. The expanded use of health IT has great potential to improve the quality of care, bolster the preparedness of our public health infrastructure, and save money on administrative costs. As we reported in 2003, technologies such as electronic health records and bar coding of certain human drug and biological product labels have been shown to save money and reduce medical errors. Health care organizations reported that IT contributed other benefits, such as shorter hospital stays, faster communication of test results, improved management of chronic diseases, and improved accuracy in capturing charges associated with diagnostic and procedure codes. Over the past several years, a growing number of communities have established health information exchange organizations that allow multiple health care providers, such as physicians, clinical laboratories, and emergency rooms, to share patients’ electronic health information. Most of these organizations are in either the planning or early implementation phases of establishing electronic health information exchange.
According to the Institute of Medicine, the federal government has a central role in shaping nearly all aspects of the health care industry as a regulator, purchaser, health care provider, and sponsor of research, education, and training. Seven major federal health care programs, such as the Centers for Medicare and Medicaid Services (CMS), DOD’s TRICARE, VA’s Veterans Health Administration, and HHS’s Indian Health Service, provide or fund health care services to approximately 115 million Americans. According to HHS, federal agencies fund more than a third of the nation’s total health care costs. Given the level of the federal government’s participation in providing health care, it has been urged to take a leadership role in driving change to improve the quality and effectiveness of medical care in the United States, including expanded adoption of IT. The programs and number of citizens who receive health care services from the federal government and the cost of these services are summarized in appendix II. In April 2004, President Bush called for the widespread adoption of interoperable electronic health records within 10 years and issued an executive order that established the position of the National Coordinator for Health Information Technology within HHS as the government official responsible for the development and execution of a strategic plan to guide the nationwide implementation of interoperable health IT in both the public and private sectors. In July 2004, HHS released The Decade of Health Information Technology: Delivering Consumer-centric and Information-rich Health Care—Framework for Strategic Action. This framework described goals for achieving nationwide interoperability of health IT and actions to be taken by both the public and private sectors in implementing a strategy. HHS’s Office of the National Coordinator for Health IT updated the framework’s goals in June 2006 and included an objective for protecting consumer privacy. 
It identified two specific strategies for meeting this objective—(1) support the development and implementation of appropriate privacy and security policies, practices, and standards for electronic health information exchange and (2) develop and support policies to protect against discrimination based on personal health information such as denial of medical insurance or employment. In July 2004, we testified on the benefits that effective implementation of IT can bring to the health care industry and the need for HHS to provide continued leadership, clear direction, and mechanisms to monitor progress in order to bring about measurable improvements. Since then, we have reported or testified on several occasions on HHS’s efforts to define its national strategy for health IT. We recommended that HHS develop the detailed plans and milestones needed to ensure that its goals are met, and HHS agreed with our recommendation. In our report and testimonies, we have described a number of actions that HHS, through the Office of the National Coordinator for Health IT, has taken toward accelerating the use of IT to transform the health care industry, including the development of the framework for strategic action. We described the formation of a public-private advisory body—the American Health Information Community—to advise HHS on achieving interoperability for health information exchange and four breakthrough areas the community identified—consumer empowerment, chronic care, biosurveillance, and electronic health records. Additionally, we reported that, in late 2005, HHS’s Office of the National Coordinator for Health IT awarded $42 million in contracts to address a range of issues important for developing a robust health IT infrastructure. 
In October 2006, HHS’s Office of the National Coordinator for Health IT awarded an additional contract to form a state-level electronic health alliance and address challenges to health information exchange, including privacy and security issues. HHS intends to use the results of the contracts and recommendations from the National Committee on Vital and Health Statistics and the American Health Information Community proceedings to define the future direction of a national strategy. The contracts are described in appendix III. We have also described the Office of the National Coordinator’s continuing efforts to work with other federal agencies to revise and refine the goals and strategies identified in its initial framework. The current draft framework—The Office of the National Coordinator: Goals, Objectives, and Strategies—identifies objectives for accomplishing each of four goals, along with 32 high-level strategies for meeting the objectives. It includes a specific objective for safeguarding consumer privacy and protecting against risks along with two strategies for meeting this objective: (1) support the development and implementation of appropriate privacy and security policies, practices, and standards for electronic health information exchange and (2) develop and support policies to protect against discrimination based on personal health information, such as denial of medical insurance or employment. According to officials with the Office of the National Coordinator, the framework will continue to evolve as the office works with other federal agencies to further refine its goals, objectives, and strategies, which are described in appendix IV. While HHS continues to refine the goals and strategies of its framework for a national health IT strategy, it has not yet defined the detailed plans and milestones needed to ensure that its goals are met, as we previously recommended. 
As the use of electronic health information exchange increases, so does the need to protect personal health information from inappropriate disclosure. The capacity of health information exchange organizations to store and manage a large amount of electronic health information increases the risk that a breach in security could expose the personal health information of numerous individuals. According to results of a study conducted for AARP in February 2006, Americans are concerned about the risks introduced by the use of electronic health information systems but also support the creation of a nationwide health information network. A 2005 Harris survey showed that 70 percent of Americans are concerned that an electronic medical record system could lead to sensitive medical information being exposed because of weak security, and 69 percent are concerned that such a system would lead to more personal health information being shared without patients’ knowledge. While information technology can provide the means to protect the privacy of electronically stored and exchanged health information, the increased risk of inappropriate access and disclosure makes it all the more important that adequate privacy protections and security mechanisms be implemented in health information exchange systems. A number of federal statutes were enacted between 1970 and the early 1990s to protect individual privacy. For the most part, the inclusion of medical records in these laws was incidental to a more general purpose of protecting individual privacy in certain specified contexts. For example, the Privacy Act of 1974 was enacted to regulate the collection, maintenance, use, and dissemination of personal information by federal government agencies.
It prohibits disclosure of records held by a federal agency or its contractors in a system of records without the consent or request of the individual to whom the information pertains unless the disclosure is permitted by the Privacy Act or its regulations. The Privacy Act specifically includes medical history in its definition of a record. Likewise, the Social Security Act requires the Secretary of HHS to protect beneficiaries’ records and information transmitted to or obtained by or from HHS or the Social Security Administration. Descriptions of these and other federal laws that protect health information are provided in appendix V. Federal health care reform initiatives of the early- to mid-1990s were, in part, inspired by public concern about the privacy of personal medical information as the use of health IT increased. Congress, recognizing that benefits and efficiencies could be gained by the use of information technology in health care, also recognized the need for comprehensive federal medical privacy protections and consequently passed the Health Insurance Portability and Accountability Act of 1996. This law provided for the Secretary of HHS to establish the first broadly applicable federal privacy and security protections designed to protect individual health care information. HIPAA provides for the protection of certain health information held by covered entities, defined under regulations implementing HIPAA as health plans that provide or pay for the medical care of individuals, health care providers that electronically transmit health information in connection with any of the specific transactions regulated by the statute, and health care clearinghouses that receive health information from other entities and process or facilitate the processing of that information into standard or nonstandard format for those entities. HIPAA requires the Secretary of HHS to promulgate regulatory standards to protect the privacy of certain personal health information. 
“Health information” is defined by the statute as any information in any medium that is created or received by a health care provider, health plan, public health authority, employer, life insurer, school or university, or health care clearinghouse and relates to the past, present, or future physical or mental health condition of an individual, provision of health care of an individual, or payment for the provision of health care of an individual. HIPAA also requires the Secretary of HHS to adopt security standards for covered entities that maintain or transmit health information to maintain reasonable and appropriate safeguards. The law requires that covered entities take certain measures to ensure the confidentiality and integrity of the information and to protect it against reasonably anticipated unauthorized use or disclosure and threats or hazards to its security. HIPAA provides authority to the Secretary to enforce these standards. The Secretary has delegated administration and enforcement of privacy standards to the department’s Office for Civil Rights and enforcement of the security standards to the department’s Centers for Medicare and Medicaid Services. Finally, most, if not all, states have statutes that in varying degrees protect the privacy of personal health information. HIPAA recognizes this and specifically provides that regulations implementing HIPAA do not preempt contrary provisions of state law if the state laws impose more stringent requirements, standards, or specifications than the federal privacy rule. In this way, HIPAA and its implementing rules establish a baseline of mandatory minimum privacy protections and define basic principles for protecting personal health information. The Secretary of HHS first issued HIPAA’s Privacy Rule in December 2000, following public notice and comment, but later modified the rule in August 2002. 
The Privacy Rule governs the use and disclosure of protected health information, which is generally defined as individually identifiable health information that is held or transmitted in any form or medium by a covered entity. The Privacy Rule regulates covered entities’ use and disclosure of protected health information. In general, a covered entity may not use or disclose an individual’s protected health information without the individual’s authorization. However, uses and disclosures without an individual’s authorization are permitted in specified situations, such as for treatment, payment, and health care operations and public health purposes. In addition, the Privacy Rule requires that a covered entity make reasonable efforts to use, disclose, or request only the minimum necessary protected health information to accomplish the intended purpose, with certain exceptions such as for disclosures for treatment and uses and disclosures required by law. Most covered entities must provide notice of their privacy practices. Such notice is required to contain specific elements that are set out in the regulations. Those elements include (1) a description of the uses and disclosures of protected health information the covered entity may make; (2) a statement of the covered entity’s duty with regard to the information, including protecting the individual’s privacy; (3) the individual’s rights with respect to the information, including, for example, the right to complain to HHS if he or she believes the information has been handled in violation of the law; and (4) a contact from whom individuals may obtain further information about the covered entity’s privacy policies. A covered entity is also required to account for certain disclosures of an individual’s protected health information and to provide such an accounting to those individuals on request. 
In general, a covered entity must account for disclosures of protected health information made for purposes other than for treatment, payment, and health care operations, such as for public health or law enforcement purposes. HIPAA’s Privacy Rule reflects basic privacy principles for ensuring the protection of personal health information. Table 1 summarizes these principles. Subsequent to the issuance of the Privacy Rule, the Secretary issued the HIPAA Security Rule in February 2003 to safeguard electronic protected health information and help ensure that covered entities have proper security controls in place to provide assurance that the information is protected from unwarranted or unintentional disclosure. The Security Rule includes administrative, physical, and technical safeguards and specific implementation instructions, some of which are required and, therefore, must be implemented by covered entities. Other implementation specifications are “addressable” and under certain conditions permit covered entities to use reasonable and appropriate alternative steps. Covered entities are required to develop policies and procedures for both required and addressable specifications. The privacy and security rules require covered entities to include provisions in contracts with business associates that mandate that business associates implement appropriate privacy and security protections. A business associate is any person or entity that performs on behalf of a covered entity any function or activity involving the use or disclosure of protected health information. The rules require covered entities to obtain through formal agreement satisfactory assurances that their business associates will appropriately safeguard protected health information. The Security Rule also contains specific requirements for business associate contracts and requires that covered entities maintain compliance policies and procedures in written form. 
However, covered entities are generally not liable for privacy violations of their business associates, and the Secretary of HHS does not have direct enforcement authority over business associates. HHS and its Office of the National Coordinator for Health IT have initiated actions to identify solutions for protecting health information. Specifically, HHS awarded several health IT contracts that include requirements for developing solutions that comply with federal privacy and security requirements, consulted with the National Committee on Vital and Health Statistics (NCVHS) to develop recommendations regarding privacy and confidentiality in the Nationwide Health Information Network, and formed the American Health Information Community (AHIC) Confidentiality, Privacy, and Security Workgroup to frame privacy and security policy issues and identify viable options or processes to address these issues. The Office of the National Coordinator for Health IT intends to use the results of these activities to identify technology and policy solutions for protecting personal health information as part of its continuing efforts to complete a national strategy to guide the nationwide implementation of health IT. However, HHS is in the early stages of identifying solutions for protecting personal health information and has not yet defined an overall approach for integrating its various privacy-related initiatives and for addressing key privacy principles. HHS awarded four major health IT contracts in 2005 intended to advance the nationwide exchange of health information—Privacy and Security Solutions for Interoperable Health Information Exchange, Standards Harmonization Process for Health IT, Nationwide Health Information Network Prototypes, and Compliance Certification Process for Health IT. These contracts include requirements for developing solutions that comply with federal privacy requirements and identify techniques and standards for securing health information. 
HHS’s contract for privacy and security solutions is intended to provide a nationwide synthesis of information to inform privacy and security policymaking at federal, state, and local levels. In summer 2006, the privacy and security solutions contractor selected 33 states and Puerto Rico as locations in which to perform assessments of organization-level privacy- and security-related policies and practices that affect interoperable electronic health information exchange and their bases, including laws and regulations. The contractor is supporting states and territories as they (1) assess variations in organization-level business policies and state laws that affect health information exchange, (2) identify and propose solutions while preserving the privacy and security requirements of applicable federal and state laws, and (3) develop detailed plans to implement solutions. The contractor is to develop a nationwide report that synthesizes and summarizes the variations identified, the proposed solutions, and the steps that states and territories are taking to implement their solutions. 
It is also to deliver an interim report to address policies and practices followed in nine domains of interest: (1) user and entity authentication, (2) authorization and access controls, (3) patient and provider identification to match identities, (4) information transmission security or exchange protocols (encryption, etc.), (5) information protections to prevent improper modification of records, (6) information audits that record and monitor the activity of health information systems, (7) administrative or physical security safeguards required to implement a comprehensive security platform for health IT, (8) state law restrictions about information types and classes and the solutions by which electronic personal health information can be viewed and exchanged, and (9) information use and disclosure policies that arise as health care entities share clinical health information electronically. These domains of interest address privacy principles for use and disclosure and security. The standards harmonization contract is intended to identify, among other things, security mechanisms that affect consumers’ ability to establish and manage permissions and access rights, along with consent for authorized and secure exchange, viewing, and querying of their medical information between designated caregivers and other health professionals. In May 2006, the contractor for HHS’s standards harmonization contract selected initial standards that are intended to provide security mechanisms. The initial security standards were made available for stakeholder and public comment in August and September, and the contractor’s panel voted on final standards that were presented to AHIC in October 2006. AHIC accepted the panel’s report and forwarded it to the Secretary for approval. 
HHS’s Nationwide Health Information Network contract requires four selected contractors to develop proposals for a nationwide health information architecture and prototypes of a nationwide health information network. The prototypes are to address privacy and security solutions, such as user authentication and access control, for interoperable health information exchange. In June 2006, HHS held its first nationwide health information network forum, at which more than 1,000 functional requirements were proposed, including nearly 180 security requirements for ensuring the privacy and confidentiality of health information exchanged within a nationwide network. The proposed functional requirements were analyzed and refined by NCVHS, and on October 30, 2006, the committee approved a draft of minimum functional requirements for the Nationwide Health Information Network, and sent it to HHS for approval. In January 2007, the four contractors are to deliver and demonstrate functional prototypes that are deployed within and across three or more health care markets and operated with live health care data using the same technology for information exchange in all three markets. HHS’s Compliance Certification Process for Health IT contract is intended to identify certification criteria for electronic health records, including security criteria. In May 2006, the Certification Commission for Health IT, which was awarded the contract, finalized initial certification criteria for ambulatory electronic health records including 32 security criteria that address components of the security principle, such as controls for limiting access to personal health information, methods for authenticating users before granting access to information, and requirements for auditing access to patients’ health records. To date, 35 electronic health records products have been certified based on these criteria. 
The commission is currently defining its next phase of certification criteria for inpatient electronic health records. In June 2006, NCVHS, a key national health information advisory committee, presented to the Secretary of HHS a report recommending actions regarding privacy and confidentiality in the Nationwide Health Information Network. The recommendations cover topics that are, according to the committee, central to challenges for protecting health information privacy in a national health information exchange environment. The recommendations address aspects of key privacy principles, including (1) the role of individuals in making decisions about the use of their personal health information, (2) policies for controlling disclosures across a nationwide health information network, (3) regulatory issues such as jurisdiction and enforcement, (4) use of information by non-health care entities, and (5) establishing and maintaining the public trust that is needed to ensure the success of a nationwide health information network. The recommendations are being evaluated by the AHIC work groups, the Certification Commission for Health IT, Health Information Technology Standards Panel, and other HHS partners. In October 2006, the committee recommended to the Secretary of HHS that HIPAA privacy rules be extended to include other forms of health information not managed by covered entities. It also called on HHS to create policies and procedures to accurately match patients with their health records and to require functionality that allows patient or physician privacy preferences to follow records regardless of location. The committee intends to continue to update and refine its recommendations as the architecture and requirements of the network advance.
AHIC, a committee that provides input and recommendations to HHS on nationwide health IT, formed the Confidentiality, Privacy, and Security Workgroup in July 2006 to frame the privacy and security policy issues relevant to all breakthrough areas and to solicit broad public input to identify viable options or processes to address these issues. The recommendations to be developed by this work group are intended to establish an initial policy framework and address issues including methods of patient identification, methods of authentication, mechanisms to ensure data integrity, methods for controlling access to personal health information, policies for breaches of personal health information confidentiality, guidelines and processes to determine appropriate secondary uses of data, and a scope of work for a long-term independent advisory body on privacy and security policies. The work group has defined two initial work areas—identity proofing and user authentication—as initial steps necessary to protect confidentiality and security. These two work areas address the security privacy principle. According to the cochairs of the work group, the members are developing work plans for completing tasks, including the definition of privacy and security policies for all of AHIC’s breakthrough areas. The work group intends to address other key principles, including, but not limited to, maintaining data integrity and control of access. It plans to address policies for breaches of confidentiality and guidelines and processes for determining appropriate secondary uses of health information, an aspect of the use and disclosure privacy principle. HHS has taken steps intended to address aspects of key privacy principles through its contracts and with advice and recommendations from its two key health IT advisory committees. Table 2 describes HHS’s current privacy-related initiatives and the key HIPAA privacy principles that they are intended to address. 
HHS has taken steps to identify solutions for protecting personal health information through its various privacy-related initiatives. For example, during the past 2 years HHS has defined initial criteria and procedures for certifying electronic health records, resulting in the certification of 35 IT vendor products. However, the other contracts have not yet produced final results. For example, the privacy and security solutions contractor has not yet reported its assessment of state and organizational policy variations. Additionally, HHS has not accepted or agreed to implement the recommendations made in June 2006 by the NCVHS, and the AHIC Confidentiality, Privacy, and Security Workgroup is in very early stages of efforts that are intended to result in privacy policies for nationwide health information exchange. HHS is in the early phases of identifying solutions for safeguarding personal health information exchanged through a nationwide health information network and has therefore not yet defined an approach for integrating its various efforts or for fully addressing key privacy principles. For example, milestones for integrating the results of its various privacy-related initiatives and resolving differences and inconsistencies have not been defined, nor has it been determined which entity participating in HHS's privacy-related activities is responsible for integrating these various initiatives and the extent to which their results will address key privacy principles. Until HHS defines an integration approach and milestones for completing these steps, its overall approach for ensuring the privacy and protection of personal health information exchanged throughout a nationwide network will remain unclear. The increased use of information technology to exchange electronic health information introduces challenges to protecting individuals' personal health information. 
Key challenges are understanding and resolving legal and policy issues, particularly those resulting from varying state laws and policies; ensuring appropriate disclosures of the minimum amount of health information needed; ensuring individuals’ rights to request access to and amendments of health information to ensure it is correct; and implementing adequate security measures for protecting health information. Table 3 summarizes these challenges. Health information exchange organizations bring together multiple and diverse health care providers, including physicians, pharmacies, hospitals, and clinics that may be subject to varying legal and policy requirements for protecting health information. As health information exchange expands across state lines, organizations are challenged with understanding and resolving data-sharing issues introduced by varying state privacy laws. Differing interpretations and applications of the privacy protection requirements of HIPAA and other privacy laws further complicate the ability of health information organizations to exchange data and to determine liability and enforce sanctions in cases of breach of confidentiality. Differing legal requirements for protecting health information introduce challenges to the ability to share health information among multiple stakeholders that may not be covered to the same extent by HIPAA’s privacy and security rules. Providers that are members of health information organizations are typically covered by the privacy and security requirements of HIPAA, but the information exchange organizations that provide the technology and infrastructure to conduct information exchange generally are not covered entities. Rather, they are usually thought of as business associates that are contractually bound through agreements with covered entities to provide protections to the health information that they manage but are not directly covered by the HIPAA privacy and security rules. 
An official with one health information exchange organization stated that he found it hard to determine if his organization was a covered entity or a business associate. In some cases, according to an official with a health information privacy professional association, health information exchange organizations may not even be business associates as defined by HIPAA. Differences in, or uncertainty about, the extent of federal privacy protection required of various organizations may affect providers' willingness to exchange patients' health information if they do not believe it will be protected to the same extent that they protect it themselves. In June 2006, NCVHS recommended that, if necessary, HHS amend the HIPAA Privacy Rule to increase the responsibility of covered entities to control the practices of business associates. The need to reconcile differences in varying state laws' privacy protection requirements introduces another widely acknowledged challenge to ensuring the privacy protection of health information exchanged on a nationwide basis. As health information exchange officials in states with strong privacy protections consider exchanging health information with organizations in other states, they will need to determine the extent to which they could share health information with organizations in states that have less stringent or no state-level laws and policies. For example, an official with one health information exchange organization described its state's privacy laws as being much more stringent than federal requirements, while a health information exchange official in another state told us that HIPAA's privacy requirements are the only laws that apply to the information exchanged by its organization. 
In this case, according to the official with the first organization, it would share more health information with providers in its own state than it would with providers in the other state because the other state’s less stringent privacy protection laws would not provide a sufficient level of protection. HHS recognized that sharing health information among entities in states with varying laws introduces challenges and intends to identify variations in state laws that affect privacy and security practices through the privacy and security solutions contract that it awarded in 2005. Organizations also described another challenge associated with understanding and resolving legal and policy requirements for protecting electronic health information exchanged among multiple and diverse organizations. Differing interpretations and applications of the HIPAA privacy and security rules by providers and health information exchange organizations can result in disagreement about the data that can be exchanged and with whom the data can be shared. An official with one health information exchange described differing applications of HIPAA’s security requirements that affect the way systems are administered and hinder the exchange of health information. For example, to protect individuals’ information from inappropriate disclosure, the organization requires that the systems’ list of users be forwarded to managers so that they can review roles and access rights at least annually. HIPAA’s requirements do not specify protections at this level of granularity, so other organizations may not require this level of activity. This can create disagreements between organizations about the data that can be exchanged and with whom data can be shared if one organization does not administer access rights as strictly as another. Health information exchange organizations described difficulties with determining liability and enforcing sanctions in cases of confidentiality breaches. 
As the number of health information exchange organizations increases and information is shared on a widespread basis, determination of liability for improper disclosure of information will become more important but also more difficult. For example, the Markle Foundation described problems with tracing the source of a privacy violation and determining the responsible entity. Without such information, it becomes very difficult, if not impossible, to enforce sanctions for violations and breaches of confidentiality. Several organizations described issues associated with ensuring appropriate disclosure, such as determining the minimum data necessary that can be disclosed in order for requesters to accomplish the intended purposes for the use of the health information. For example, dieticians and health claims processors do not need access to complete health records, whereas treating physicians generally do. According to VA officials, the agency’s ability to ensure appropriate disclosure is further complicated by the fact that the Veterans’ Benefits Act prevents disclosure of certain information, such as information related to HIV infection, sickle cell anemia, and substance abuse, which must be removed from individuals’ health records before the requested information is disclosed. Additionally, VA’s current manual process for determining the legal authority for disclosures and the minimum amount of information authorized to be disclosed is difficult to automate because of the complexity of various privacy laws and regulations. Organizations also described issues with obtaining individuals’ authorization and consent for uses and disclosures of personal health information. For example, health information exchange organizations may provide individuals with the ability to either opt in or opt out of electronic health information exchange. 
The opt-in approach requires that health care providers obtain the explicit permission of individuals before allowing their information to be shared with other providers. Without this permission, an individual's personal health information would not be accessible. The opt-out approach presumes that an individual's personal health information is available to authorized persons, but any individual may elect to not participate. Another approach taken by health information organizations simply notifies individuals that their information will be exchanged with providers throughout the organization's network. Several organizations described difficulties with determining the best way to allow individuals to participate in and consent to electronic health information exchange. While the opt-in approach increases individual autonomy, it is more administratively burdensome than the opt-out approach and may result in fewer individuals participating in health information exchange. The opt-out approach is easier, less costly, and may result in greater participation in health information exchange, but does not provide the autonomy that the opt-in approach does. The notification approach is the simplest to administer but provides individuals no choice regarding participation in the organization's data exchange. In June 2006, NCVHS recommended to the Secretary of HHS that the department monitor the development of opt-in and opt-out approaches; consider local, regional, and provider variations of consent options; collect evidence on the health, economic, social, and other implications of opt-in and opt-out approaches; and continue an open, transparent, and public process to evaluate whether a national policy on opting in or opting out is appropriate. Organizations also described the need to effectively educate consumers so that they understand the extent to which their consent or authorization to use and disclose health information applies. 
For example, one organization stated that a request made to limit use and disclosure at one facility in a network may not apply to other facilities within the same network, but consumers may assume the limitations do apply to all facilities and not take steps to limit disclosure in those other facilities. As the exchange of personal health information expands to include multiple providers and as individuals' health records include increasing amounts of information from many sources, keeping track of the origin of specific data and ensuring that incorrect information is corrected and removed from future health information exchange could become increasingly difficult. Several organizations described challenges with ensuring that individuals have access to and the ability to amend their own health information and with ensuring that amendments are made and tracked throughout their information exchange organizations. Officials with HHS's Indian Health Service described a challenge with ensuring that individuals' amendments to their own health information are properly made and tracked. Additionally, as individuals amend their health information, HIPAA requires that covered entities make reasonable efforts to notify or alert and send the corrected information to certain providers and other persons that previously received the individuals' information. Meeting this requirement was described as a challenge by officials with VA, and it is expected to become more prevalent as the number of organizations exchanging health information increases. Officials with DOD described difficulties with ensuring that individuals' amendments to health information are distributed across multiple facilities within its network of medical facilities. The department is addressing this problem through the implementation of electronic health records and information management tools that track requests for amendments and their status. 
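The bookkeeping involved in tracking amendment requests, recording which facilities previously received a record, and flagging which ones still need the corrected version can be sketched as a simple data structure. The class names, status values, and workflow below are illustrative assumptions for this sketch only, not a description of DOD's or VA's actual tooling.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    REQUESTED = "requested"   # patient has asked for a correction
    APPLIED = "applied"       # record amended; prior recipients not yet re-sent it
    NOTIFIED = "notified"     # corrected record sent to all prior recipients


@dataclass
class AmendmentRequest:
    record_id: str
    requested_by: str
    correction: str
    status: Status = Status.REQUESTED


class AmendmentTracker:
    """Tracks amendment requests and the facilities that must be re-sent
    a corrected record (a HIPAA-style re-notification list)."""

    def __init__(self) -> None:
        self._requests: list[AmendmentRequest] = []
        # record_id -> facilities that previously received the record
        self._recipients: dict[str, set[str]] = {}

    def record_disclosure(self, record_id: str, facility: str) -> None:
        self._recipients.setdefault(record_id, set()).add(facility)

    def request_amendment(self, record_id: str, requested_by: str,
                          correction: str) -> AmendmentRequest:
        req = AmendmentRequest(record_id, requested_by, correction)
        self._requests.append(req)
        return req

    def apply(self, req: AmendmentRequest) -> set[str]:
        """Mark the amendment applied and return the facilities that
        still need the corrected record."""
        req.status = Status.APPLIED
        return self._recipients.get(req.record_id, set())

    def mark_notified(self, req: AmendmentRequest) -> None:
        req.status = Status.NOTIFIED

    def pending(self) -> list[AmendmentRequest]:
        """Requests whose re-notification work is not yet finished."""
        return [r for r in self._requests if r.status is not Status.NOTIFIED]
```

In practice the hard part is not this bookkeeping but the reach of the recipient list: as the number of exchange partners grows, so does the set of facilities that must be traced and re-sent each correction.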
Additionally, an official with a professional association described the need to educate consumers to ensure that they understand their rights to request access to and amendments of their own health information to ensure that it is correct. Organizations described the adequate implementation of security measures as another challenge that must be overcome to protect health information. For example, health information exchange organizations described difficulties with determining and implementing adequate techniques for authenticating requesters of health information, such as the use of passwords and security tokens. User authentication will become more difficult as health information exchange expands across multiple organizations that employ different techniques. The AHIC Confidentiality, Privacy, and Security Workgroup recognized this difficulty and identified user authentication as one of its initial work areas for protecting confidentiality and security. Implementing proper access controls, particularly role-based access controls, was also cited as a challenge to determining the information to which requesters may have access. Several organizations stated that maintaining adequate audit trails for monitoring access to health information is difficult but is necessary to ensure that information is adequately protected. Organizations described problems introduced by the need to protect health information stored on portable devices and data transmitted between business partners. The use of laptops and other portable media by health information exchange employees presents a challenge to organizations since the data stored on these media should be encrypted to be secure. The VA is also faced with limitations related to the need to encrypt electronic health information shared with its business partners. 
According to VA officials, the agency and its business partners' solutions must be compatible in order to share the encrypted data, and VA's deployment of encryption solutions is limited. Encryption of data can be challenging, as organizations often must implement hardware and complex software technology to achieve adequate protection. As the use of health IT and the exchange of electronic health information increase, concerns about the protection of personal health information exchanged electronically within a nationwide health information network have also increased. HHS and its Office of the National Coordinator for Health IT have initiated activities that, collectively, are intended to address aspects of key privacy principles. While progress has been made through the various initiatives, HHS has not yet defined an approach and milestones for integrating its efforts, resolving differences and inconsistencies between them, and fully addressing key privacy principles. As the use of health IT and electronic information exchange networks expands, health information exchange organizations are faced with challenges to ensuring the protection of health information, including understanding and resolving legal and policy issues, ensuring that the minimum information necessary is disclosed only to those entities authorized to request the information, ensuring individuals' rights to request access and amendments to health information, and implementing adequate security measures. These challenges are expected to become more prevalent as more information is exchanged and as electronic health information exchange expands to a nationwide basis. HHS's current initiatives are intended to address many of these challenges. 
However, without a clearly defined approach that establishes milestones for integrating its efforts and fully addresses key privacy principles and these challenges, it is likely that HHS’s goal to safeguard personal health information as part of its national strategy for health IT will not be met. We recommend that the Secretary of Health and Human Services define and implement an overall approach for protecting health information as part of the strategic plan called for by the President. This approach should (1) identify milestones and the entity responsible for integrating the outcomes of its privacy-related initiatives, including the results of its four health IT contracts and recommendations from the NCVHS and AHIC advisory committees; (2) ensure that key privacy principles in HIPAA are fully addressed; and (3) address key challenges associated with legal and policy issues, disclosure of personal health information, individuals’ rights to request access and amendments to health information, and security measures for protecting health information within a nationwide exchange of health information. We received written comments on a draft of this report from HHS’s Assistant Secretary for Legislation. The Assistant Secretary disagreed with our recommendation. Throughout the comments, the Assistant Secretary referred to the department’s comprehensive and integrated approach for ensuring the privacy and security of health information within nationwide health information exchange. However, an overall approach for integrating the department’s various privacy-related initiatives has not been fully defined and implemented. 
We acknowledge in our report that HHS has established a strategic objective to protect consumer privacy along with two specific strategies for meeting this objective: (1) support the development and implementation of appropriate privacy and security policies, practices, and standards for electronic health information exchange, and (2) develop and support policies to protect against discrimination from health information. Our report also acknowledges the key efforts that HHS has initiated to address this objective, and HHS’s comments describe these and additional state and federal efforts. HHS stated that the department has made significant progress in integrating these efforts. While progress has been made initiating these efforts, much work remains before they are completed and the outcomes of the various efforts are integrated. Thus, we recommended that HHS define and implement a comprehensive privacy approach that includes milestones for integration, identifies the entity responsible for integrating the outcomes of its privacy-related initiatives, addresses key privacy principles, and ensures that challenges are addressed in order to meet the department’s objective to protect the privacy of health information exchanged within a nationwide health information network. HHS specifically disagreed with the need to identify milestones and stated that tightly scripted milestones would impede HHS’s processes and preclude stakeholder dialogue on the direction of important policy matters. We disagree and believe that milestones are important for setting targets for implementation and informing stakeholders of HHS’s plans and goals for protecting personal health information as part of its efforts to achieve nationwide implementation of health IT. Milestones are especially important considering the need for HHS to integrate and coordinate the many deliverables of its numerous ongoing and remaining activities. 
We agree that it is important for HHS to continue to actively involve both public and private sector health care stakeholders in its processes. HHS did not comment on the need to identify an entity responsible for the integration of the department’s privacy-related initiatives, nor did it provide information regarding any effort to assign responsibility for this important activity. HHS neither agreed nor disagreed that its approach should address privacy principles and challenges, but stated that the department plans to continue to work toward addressing privacy principles in HIPAA and that our report appropriately highlights efforts to address challenges encountered during electronic health information exchange. HHS stated that the department is committed to ensuring that health information is protected as part of its efforts to achieve nationwide health information exchange. HHS also disagreed with our conclusion that without a clearly defined privacy approach, it is likely that HHS’s objective to protect personal health information will not be met. We believe that an overall approach is needed to integrate the various efforts, provide assurance that HHS’s initiatives continue to address key privacy principles (as we illustrate in table 2 of the report), and to ensure that key challenges faced by health information exchange stakeholders are effectively addressed. HHS also provided technical comments that we have incorporated into the report as appropriate. HHS’s written comments are reproduced in appendix VI. In written comments, the Secretary of VA concurred with our findings, conclusions, and recommendation to the Secretary of HHS and commended our efforts to highlight methods for ensuring the privacy of electronic health information. VA also provided technical comments that we have incorporated into the report as appropriate. VA’s written comments are reproduced in appendix VII. DOD chose not to comment on a draft of this report. 
As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date on the report. At that time, we will send copies of the report to the Chairmen and Ranking Minority Members of other Senate and House committees and subcommittees that have authorization and oversight responsibilities for health information technology. We will also send copies of the report to the Secretaries of Defense, Health and Human Services, and Veterans Affairs. Copies of this report will also be made available at no charge on our Web site at www.gao.gov. If you have any questions on matters discussed in this report, please contact me at (202) 512-6240 or David Powner at (202) 512-9286, or by e-mail at koontzl@gao.gov or pownerd@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and key contributors to this report are listed in appendix VIII. The objectives of our review were to describe the steps the Department of Health and Human Services (HHS) is taking to ensure privacy protection as part of the national health information technology (IT) strategy and identify challenges associated with meeting requirements for protecting personal health information within a nationwide health information network. To address our first objective, we analyzed information that we collected from agency documentation and through discussions with officials with HHS components and advisory committees that play major roles in supporting HHS's efforts to develop and implement a national strategy for health IT, including activities intended to ensure the protection of electronic health information exchanged within a nationwide health information network. 
Specifically, we reviewed and assessed privacy-related plans and documentation describing HHS's efforts to ensure privacy protection from HHS's Office of the National Coordinator for Health IT, Office for Civil Rights, Centers for Medicare and Medicaid Services and its Office of E-Health Standards and Services, and the Office of the Assistant Secretary for Planning and Evaluation. We also held discussions with and collected information from the American Health Information Community and the National Committee on Vital and Health Statistics, the Secretary's two primary advisory committees for health IT. We reviewed information from the Office of the National Coordinator for Health IT on the description and status of its plans to address health information privacy as part of its national health IT strategy. We identified recommendations that the American Health Information Community and the National Committee on Vital and Health Statistics made to the Secretary of Health and Human Services regarding protecting the privacy of electronic health information. We also reviewed documentation about the scope and status of privacy-related work currently planned or being conducted under several of the Office of the National Coordinator's health IT contracts that support its efforts to develop and implement a national health IT strategy. We reviewed procedures for enforcing privacy and security laws related to the protection of health information (i.e., the Health Insurance Portability and Accountability Act privacy and security rules) from the Office for Civil Rights and the Office of E-Health Standards and Services. 
We also reviewed involvement by HHS's Agency for Healthcare Research and Quality, the National Institutes of Health, the Health Resources and Services Administration, the Substance Abuse and Mental Health Services Administration, and the Centers for Disease Control and Prevention in initiatives to ensure privacy protection related to the electronic exchange of health information within a nationwide health information network. We mapped the HHS privacy-related activities we identified to key privacy principles in the HIPAA Privacy Rule. We identified HHS activities that addressed specific aspects of these principles to describe the extent to which HHS's privacy-related initiatives are intended to address key privacy principles. To address the second objective, we analyzed documentation from and held discussions with officials from the federal agencies that provide health care services—the Departments of Defense and Veterans Affairs and the Indian Health Service—and representatives from selected state-level health information exchange organizations. We selected these organizations by conducting literature research and consulting with HHS and recognized health IT professional associations to identify existing health information exchange organizations. We initially identified more than 40 organizations and then conducted screening interviews to narrow the universe to 7 state-level health information exchange organizations that were actively exchanging health information electronically. To ensure that we identified challenges introduced by both federal privacy protection requirements and requirements that are more stringent than existing federal protections, we included states that do not have state laws that supersede federal requirements and states with privacy laws that are more stringent than federal laws. We selected state-level health information organizations from California, Florida, Indiana, Louisiana, Massachusetts, North Carolina, and Utah. 
We also included a telehealth network from Nebraska and a community health center network from Florida to ensure that we identified any privacy-related challenges unique to their health care IT environments. During interviews, we asked the health information exchange organizations to provide examples of challenges associated with protecting the privacy of health information that they encountered with the implementation of electronic health information exchange networks, along with challenges that they anticipated would be introduced by the nationwide health information exchange being proposed by HHS. We also held discussions with HHS officials with the Agency for Healthcare Research and Quality, the National Institutes of Health, the Health Resources and Services Administration, the Substance Abuse and Mental Health Services Administration, and the Centers for Disease Control and Prevention to collect examples of challenges those organizations and their stakeholders face in attempting to address federal privacy and security requirements. To gain further insight into the challenges organizations face in protecting privacy while exchanging electronic health information, we contacted representatives from nationally recognized health IT professional organizations. We held discussions with officials from the American Health Information Management Association, the American Medical Informatics Association, the eHealth Initiative, the Healthcare Information and Management Systems Society, the Markle Foundation, and the Public Health Informatics Institute to discuss challenges these organizations faced that are associated with protecting electronic health information. We also gathered relevant information about the challenges in protecting privacy within health information exchange from officials with the Health Privacy Project, the Vanderbilt Center for Better Health, Kaiser Permanente, and NHII Advisors, a health information consulting firm. 
We reviewed and analyzed the information provided by the health information exchange organizations, federal health care providers, and professional associations to identify key challenges associated with the electronic exchange of personal health information throughout the health care industry. To characterize the challenges that we identified, we analyzed the specific examples of challenges and categorized them into four broad areas of challenges—understanding and resolving legal and policy issues, ensuring appropriate disclosures of health information, ensuring individuals' rights to access and amend health information, and implementing adequate security measures for protecting health information. We conducted our work from December 2005 through November 2006 in the Washington, D.C., area in accordance with generally accepted government auditing standards. The following table includes the major federal programs that provide health care services for U.S. citizens, the number of beneficiaries for each program, and the cost of each program for 2004. The following table describes key health IT contracts awarded by the HHS Office of the National Coordinator for Health IT. The following table describes the Office of the National Coordinator's current goals, objectives, and strategies and indicates which strategies are initiated, which are under active discussion, and which require future consideration. There are several federal statutes that protect personal health information. HIPAA provides the most extensive and specific protection. However, other federal statutes, although not always focused specifically on health information, nonetheless have the effect of protecting personal health information in specific situations. This table presents an outline of selected federal laws that protect personal health information. In addition to those named above, Mirko J. Dolak, Amanda C. Gill, Nancy E. Glover, M. Saad Khan, Charles F. Roney, Sylvia L. Shanks, Sushmita L. 
Srikanth, Teresa F. Tucker, and Morgan F. Walts made key contributions to this report.
The expanding implementation of health information technology (IT) and electronic health information exchange networks raises concerns regarding the extent to which the privacy of individuals' electronic health information is protected. In April 2004, President Bush called for the Department of Health and Human Services (HHS) to develop and implement a strategic plan to guide the nationwide implementation of health IT. The plan is to recommend methods to ensure the privacy of electronic health information. GAO was asked to describe HHS's efforts to ensure privacy as part of its national strategy and to identify challenges associated with protecting electronic personal health information. To do this, GAO assessed relevant HHS privacy-related initiatives and analyzed information from health information organizations. HHS and its Office of the National Coordinator for Health IT have initiated actions to identify solutions for protecting personal health information through several contracts and with two health information advisory committees. For example, in late 2005, HHS awarded several health IT contracts that include requirements for addressing the privacy of personal health information exchanged within a nationwide health information exchange network. Its privacy and security solutions contractor is to assess the organization-level privacy- and security-related policies, practices, laws, and regulations that affect interoperable health information exchange. Additionally, in June 2006, the National Committee on Vital and Health Statistics made recommendations to the Secretary of HHS on protecting the privacy of personal health information within a nationwide health information network, and in August 2006, the American Health Information Community convened a work group to address privacy and security policy issues for nationwide health information exchange. 
While these activities are intended to address aspects of key principles for protecting the privacy of health information, HHS is in the early stages of its efforts and has therefore not yet defined an overall approach for integrating its various privacy-related initiatives and addressing key privacy principles, nor has it defined milestones for integrating the results of these activities. GAO identified key challenges associated with protecting electronic personal health information in four areas.
Based on state responses to our survey, we estimated that nearly 617,000, or about 89 percent of the approximately 693,000 regulated tanks, had been upgraded with the federally required equipment by the end of fiscal year 2000. EPA data showed that about 70 percent of the total number of tanks that its regions regulate on tribal lands had also been upgraded. With regard to the approximately 76,000 tanks that we estimated have not been upgraded, closed, or removed as required, 17 states and the 3 EPA regions we visited reported that they believed that most of these tanks were either empty or inactive. However, another five states reported that at least half of their non-upgraded tanks were still in use. Because EPA and the states assume that the tanks are empty or inactive and therefore pose less risk, the agencies may give these tanks a lower priority for resources. However, states also reported that they generally did not discover tank leaks or contamination around tanks until the empty or inactive tanks were removed from the ground during replacement or closure. Consequently, unless EPA and the states address these non-compliant tanks in a more timely manner, they may be overlooking a potential source of soil and groundwater contamination. Even though most tanks have been upgraded, we estimated from our survey data that more than 200,000 of them, or about 29 percent, were not being properly operated and maintained, increasing the risk of leaks. The extent of operations and maintenance problems varied across the states, as figure 1 illustrates. The states reported a variety of operational and maintenance problems, such as operators turning off leak detection equipment. The states also reported that the majority of problems occurred at tanks owned by small, independent businesses; non-retail and commercial companies, such as cab companies; and local governments.
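The compliance shares reported above are straightforward ratios of the survey counts. A minimal sketch, using the rounded figures quoted in the text:

```python
# Rounded tank counts from the survey (approximate figures quoted in the text).
total_regulated = 693_000      # regulated underground storage tanks
upgraded = 617_000             # tanks with the federally required equipment
poorly_maintained = 200_000    # tanks not properly operated and maintained

print(f"upgraded: {upgraded / total_regulated:.0%}")                    # ~89%
print(f"not upgraded: {total_regulated - upgraded:,}")                  # ~76,000
print(f"poorly maintained: {poorly_maintained / total_regulated:.0%}")  # ~29%
```

Note that the roughly 29 percent figure is a share of all regulated tanks, not of the upgraded subset.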
The states attributed these problems to a lack of training for tank owners, installers, operators, removers, and inspectors. These smaller businesses and local government operations may find it more difficult to afford adequate training, especially given the high turnover rates among tank staff, or may give training a lower priority. Almost all of the states reported a need for additional resources to keep their own inspectors and program staff trained, and 41 states requested additional technical assistance from the federal government to provide such training. To date, EPA has provided states with a number of training sessions and helpful tools, such as operation and maintenance checklists and guidelines. One of EPA’s tank program initiatives is also intended to improve training and tank compliance with federal requirements—for example, by setting annual compliance targets with the states. At the time of our review, the Agency was just beginning to work out the details of how it will implement this initiative and had set up a working group of state and EPA representatives to begin work on compliance targets. According to EPA’s program managers, only physical inspections can confirm whether tanks have been upgraded and are being properly operated and maintained. However, only 19 states physically inspect all of their tanks at least once every 3 years—the minimum that EPA considers necessary for effective tank monitoring. Another 10 states inspect all tanks, but less frequently. The remaining 22 states do not inspect all tanks, but instead generally target inspections to potentially problematic tanks, such as those close to drinking water sources. In addition, not all of EPA’s own regions comply with the recommended rate. Only two of the three regions that we visited inspected tanks located on tribal land every 3 years. Figure 2 illustrates the states’ reported inspection practices.
According to our survey results, some states and EPA regions would need additional staff to conduct more frequent inspections. For example, under staffing levels at the time of our review, the inspectors in 11 states would each have to visit more than 300 facilities a year to cover all tanks at least once every 3 years, but EPA estimates that a qualified inspector can visit at most 200 facilities a year. Moreover, because most states use their own employees to conduct inspections, state legislatures would need to provide them additional hiring authority and funding to acquire more inspectors. Officials in 40 states said that they would support a federal mandate requiring states to periodically inspect all tanks, in part because they expect that such a mandate would provide them needed leverage to obtain the requisite inspection staff and funding from their state legislatures. In addition to more frequent inspections, a number of states said that they need additional enforcement tools to correct problem tanks. EPA’s program managers stated that good enforcement requires a variety of tools, including the ability to issue citations or fines. One of the most effective tools is the ability to prohibit suppliers from delivering fuel to stations with problem tanks. However, as figure 3 illustrates, 27 states reported that they did not have the authority to stop deliveries. In addition, EPA believes, and we agree, that the law governing the tank program does not give the Agency clear authority to regulate fuel suppliers and therefore prohibit their deliveries. Almost all of the states said they need additional enforcement resources, and 27 said they need additional authority. Members of both an expert panel and an industry group, which EPA convened to help it assess the tank program, likewise saw the need for states to have more resources and more uniform and consistent enforcement across states, including the authority to prohibit fuel deliveries.
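The staffing arithmetic behind these figures can be sketched as follows. The facility and inspector counts below are hypothetical, chosen only to reproduce a 300-visit workload; the 3-year cycle and the 200-facility annual capacity are the figures from the text.

```python
import math

# Hypothetical state figures for illustration; only the 3-year inspection
# cycle and EPA's 200-facility annual capacity come from the testimony.
facilities = 9_000    # regulated tank facilities in a state (assumed)
inspectors = 10       # inspectors on staff (assumed)
cycle_years = 3       # EPA's minimum inspection frequency
capacity = 200        # EPA estimate: max facility visits per inspector per year

# Visits each inspector must make annually to cover every facility each cycle.
annual_workload = facilities / (inspectors * cycle_years)
print(annual_workload)  # 300.0, well above the 200-visit capacity

# Staff needed to stay within capacity on a 3-year cycle.
inspectors_needed = math.ceil(facilities / (capacity * cycle_years))
print(inspectors_needed)  # 15
```

Under these assumed numbers, a state would need half again as many inspectors to meet EPA’s recommended rate, which is the kind of shortfall the survey responses describe.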
They further noted that the fear of being shut down would provide owners and operators a greater incentive to comply with federal requirements. Under its tank initiatives, EPA has said that it will attempt to obtain state commitments to increase their inspection and enforcement activities, or it may supplement state activities in some cases. EPA’s regions have the opportunity, to some extent, to use the grants that they provide to the states for their tank programs as a means to encourage more inspections and better enforcement. However, the Agency does not want to limit state funding to the point that doing so further jeopardizes program implementation. The Congress may also wish to consider making more funds available to states to improve tank inspections and enforcement. For example, the Congress could increase the amount of funds it provides from the Leaking Underground Storage Tank trust fund, which the Congress established specifically to provide funds for cleaning up contamination from tanks. The Congress could then allow states to spend a portion of these funds on inspections and enforcement. It has considered taking this action in the past, and 40 states said that they would welcome such funding flexibility. In fiscal year 2000, EPA and the states confirmed a total of more than 14,500 leaks or releases from regulated tanks, although the Agency and many of the states could not verify whether the releases had occurred before or after the tanks had been upgraded. According to our survey, 14 states said that they had traced newly discovered leaks or releases that year to upgraded tanks, while another 17 states said they seldom or never detected such leaks. The remaining 20 states could not confirm whether their upgraded tanks leaked. EPA recognizes the need to collect better data to determine the extent and cause of leaks from upgraded tanks, the effectiveness of the current equipment, and whether existing equipment standards need to be strengthened.
The Agency has launched studies in several of its regions to obtain such data, but it may have trouble concluding whether leaks occurred after the upgrades. In a study of local tanks, researchers in Santa Clara County, California, concluded that upgraded tanks do not provide complete protection against leaks, and even properly operated and maintained tank monitoring systems cannot guarantee that leaks are detected. EPA, as one of its program initiatives, plans to undertake a nationwide effort to assess the adequacy of existing equipment requirements to prevent leaks and releases and whether these requirements need to be strengthened, such as by requiring double-walled tanks. The states and the industry and expert groups support EPA’s actions. In closing, the states and EPA cannot ensure that all regulated tanks have the required equipment to prevent health risks from fuel leaks, spills, and overfills or that tanks are safely operated and maintained. Many states are not inspecting all of their tanks to make sure that they do not leak, nor can they prohibit fuel from being delivered to problem tanks. EPA has the opportunity to help its regions and states correct these limitations through its tank initiatives, but it is difficult to determine whether the Agency’s proposed actions will be sufficient because it is just defining its implementation plans. The Congress also has the opportunity to help provide EPA and the states the additional inspection and enforcement authority and resources they need to improve tank compliance and safety. Therefore, to better ensure that underground storage tanks meet federal requirements to prevent contamination that poses health risks, we have recommended to the Administrator, EPA, that the Agency (1) work with the states to address the remaining non-upgraded tanks, such as reviewing available information to determine those that pose the greatest risks and setting up timetables to remove or close these tanks; (2) supplement the training support it has provided to date by having each region work with each of the states in its jurisdiction to determine specific training needs and tailored ways to meet them; (3) negotiate with each state to reach a minimum frequency for physical inspections of all its tanks; and (4) present to the Congress an estimate of the total additional resources the Agency and states need to conduct the training, inspection, and enforcement actions necessary to ensure tank compliance with federal requirements. In addition, the Congress may want to consider EPA’s estimate of resource needs and determine whether to increase the resources it provides for the program. One way would be to increase the amount of funds it appropriates from the trust fund and allow states to spend a limited portion on training, inspection, and enforcement activities, as long as cleanups are not delayed. The Congress may also want to (1) authorize EPA to require physical inspections of all tanks on a periodic basis, (2) authorize EPA to prohibit fuel deliveries to tanks that do not comply with federal requirements, and (3) require that states have similar authority to prohibit fuel deliveries. For further information, please contact John Stephenson at (202) 512-3841. Individuals making key contributions to this testimony were Fran Featherston, Rich Johnson, Eileen Larence, Gerald Laudermilk, and Jonathan McMurray.
Contaminated soil or water resulting from leaks at underground storage tanks can pose serious health risks. In 1984, Congress created the Underground Storage Tank (UST) program to protect the public from potential leaks. Under the program, the Environmental Protection Agency required tank owners to install new leak detection equipment and new spill-, overfill-, and corrosion-prevention equipment. GAO found that about 1.5 million tanks have been permanently closed since the program was created, but more than half of the states do not inspect all of their tanks often enough to meet the minimum rate recommended by EPA--at least once every three years. States reported that even tanks with the required leak prevention and detection equipment continue to leak, although the full extent of the problem is unknown.
U.S. citizens residing abroad are generally subject to the same filing requirements as citizens residing in the United States. In particular, section 6012 of the Internal Revenue Code (IRC) requires individuals to file tax returns if they meet certain gross income thresholds, regardless of whether they owe taxes. Individuals residing abroad must file tax returns even if they think their income is exempt from tax under the foreign earned income and housing expense exclusions. Without a return, IRS cannot verify a taxpayer’s interpretation of the rules limiting eligibility for the exclusions. Under IRC section 911, U.S. citizens or resident aliens may qualify to exclude up to $70,000 per year of their foreign earned income through 1997, and an additional amount based on their housing expenses if they meet certain foreign residency requirements. In some circumstances, nonfilers whom IRS detects before they voluntarily file lose their eligibility for the exclusions. (See app. I for additional information on the exclusions and related rules affecting U.S. citizens residing abroad.) IRS’ Office of the Assistant Commissioner (International)—AC (International)—is responsible for all international tax matters. To support its mission, AC (International) maintains about 13 full-time personnel at 9 foreign posts of duty. Additionally, some staff who are normally based in the United States are available for temporary tours of duty in foreign countries. We have responded to two earlier congressional inquiries into nonfiling by U.S. citizens residing abroad. In a 1985 testimony, we noted that our analysis of filing among a limited sample of U.S. citizens in selected countries indicated a potential nonfiling problem. As a result, Congress enacted IRC section 6039E: Information Concerning Resident Status in the Tax Reform Act of 1986. This section includes provisions requiring U.S.
citizens applying for passports to provide their Social Security number (SSN), any foreign country of residence, and other information that might be prescribed by the Treasury Department. The intent of section 6039E was that IRS would use this information to identify nonfilers residing abroad. In May 1993, we reported on IRS’ relevant compliance initiatives, the lack of reliable data on U.S. citizens abroad, and IRS’ limited use of passport application data as a compliance tool. To explore the possibility of estimating the prevalence of nonfiling abroad, we obtained State Department and foreign government estimates of U.S. citizens abroad and IRS data on returns filed from abroad. We also obtained Social Security Administration data on the number of Social Security beneficiaries and Office of Personnel Management (OPM) data on the number of federal and military retirees residing abroad. We looked at data on the number of nonfilers abroad identified through IRS’ information matching program. Finally, we attempted to use data IRS received from the State Department to assess the extent of filing among recent passport applicants who cited foreign addresses. The details of our scope and methodology for this objective are discussed in appendix II. Estimating the revenue impact of nonfiling requires information on the average tax liability of nonfilers in addition to an estimate of prevalence. We identified little data bearing on the tax that nonfilers abroad would owe if they were to file. We did obtain the average tax owed by those who file from abroad and the taxes assessed in audits of nonfilers detected by IRS; however, neither can be reliably projected to nonfilers abroad in general.
To identify the factors that may limit IRS’ enforcement of the filing requirement or otherwise contribute to nonfiling abroad, we talked with responsible officials in AC (International) and the nonfiler program under AC (Collection) regarding relevant compliance information and programs and their limitations. We obtained IRS data summarizing the results of its information matching and audit programs for individual taxpayers abroad. We also reviewed relevant sections of the tax code and IRS regulations and obtained general information on the enforcement tools available to IRS through U.S. tax treaties or administrative agreements with other nations. To describe IRS’ recent initiatives to address nonfiling abroad, we talked to responsible officials in AC (International) and obtained documentation describing the initiatives they cited. We also talked to them about the status of initiatives under way when we issued our 1993 report. To contrast the Treasury study of noncompliance abroad with our study, we reviewed its report in light of the information we gathered in this review. We also contacted Treasury Department and IRS officials to clarify our understanding of the report. We conducted our review from October 1997 through April 1998 in accordance with generally accepted government auditing standards. We requested comments from IRS, the Treasury Department, and the State Department and their oral comments are discussed at the end of this report. U.S. citizens, regardless of where they reside, are generally required to file income tax returns. Thus, U.S. citizens abroad who exceed certain annual income thresholds are generally required to file tax returns. Estimates of the numbers of citizens who are required to file and those who did not could possibly be made if there were reliable data on the total U.S. population residing abroad, related demographic characteristics, and the number of returns they filed. However, the data we obtained on the U.S. 
population residing abroad—from State Department and foreign government estimates—and the number of returns they filed are too uncertain to support such estimates. We did obtain some information concerning nonfiling abroad from a recent IRS compliance project and IRS’ information matching results. The information is not definitive, but it does indicate that there was a serious nonfiling problem in one region of the world (the Middle East) in the early 1990s and that nonfiling could be relatively prevalent abroad, compared with the general U.S. population, among higher income taxpayers who are covered by information reporting. We also attempted to determine if the prevalence of nonfiling abroad could be estimated by using passport application data IRS receives from the State Department. These data were not useful, however, because many applicants did not provide an SSN on their passport applications, as required by IRC section 6039E. Generally, it is difficult for IRS to match taxpayer information against its database of filed tax returns without a valid SSN or other identification number. Given the limitations of available data, the total revenue impact of nonfiling abroad cannot be reliably estimated. Estimating revenue impact would require reliable information concerning the number of U.S. citizens residing abroad, the number who would be required to file tax returns, the extent of nonfiling, and the amount of tax nonfilers would owe if they were to file. IRS’ most recent estimate of the revenue lost to individual nonfilers residing in the United States—$13.8 billion in 1992—illustrates the difficulty in deriving reliable estimates of the revenue losses attributable to nonfiling. 
According to an official in IRS’ research division, (1) the estimate is limited to nonfilers residing in the United States and incorporates assumptions, necessitated by data limitations, about taxes owed by nonfilers who could not be identified or located; and (2) the statistical reliability of the estimate has not been quantified. The State Department estimated the total population of U.S. citizens residing abroad at about 3.1 million in 1995, excluding active military and current government personnel. This number was based on estimates derived by 221 U.S. embassies and consulates, does not include demographic breakdowns, and is not meant to be statistically reliable. The posts’ estimates are intended only as rough population indicators to be used in evacuation planning. Officials at the 18 U.S. embassies and consulates contacted during our review reported that they used various sources of information in deriving their estimates, such as data on the number of U.S. citizens renewing passports or voluntarily registering at the post or data obtained from the host country. Data limitations required the posts to use subjective judgment in deriving the estimates. For example, posts attempted to adjust their estimates to account for certain limitations in the registration data, e.g., eight posts estimated that the majority of U.S. citizens residing in their jurisdictions were not registered. Also, some who do register may remain on file even after they leave the country. Many foreign governments collect data on the nationality of their residents, sometimes by age group, including the number who are U.S. citizens. The foreign data are not comparable with the State Department data because of differences in how U.S. citizens are defined. For example, the estimates from many of the U.S. embassies and consulates we contacted included U.S. citizens who are dual nationals, particularly individuals who were born abroad but acquired U.S. 
citizenship by virtue of a parent’s citizenship; while some of the foreign estimates we obtained did not count such individuals as U.S. citizens. The foreign estimates we obtained are also not comparable across countries; for example, some countries count their resident aliens based on country of birth and others based on citizenship. The latter approach would include some naturalized citizens not born in the United States. Different countries obtain their estimates in different ways. For example, some countries rely on census counts of individuals intending to reside in the country for a certain time, while others use data on immigrants granted permanent residence status, and some countries exclude U.S. citizens in certain age categories. Also, different estimates for the same country can vary widely, and it is not always clear who is being counted. For example, a 1991 Italian Census report noted 15,031 U.S. citizens residing in Italy while Eurostat counted 62,066 in 1993. Given the limited methodological descriptions in the reports we obtained and the translation difficulty, we could not determine exactly how the U.S. population was defined in these cases. Further, data from the Social Security Administration indicated that about 14,000 U.S. citizens resided in Italy and received U.S. Social Security benefits in 1996. We did not contact foreign government officials about the reliability of their data on U.S. citizen populations because of resource constraints and because limitations in IRS’ data on returns filed from abroad, discussed below, could limit the usefulness of country-specific data. Analysts in the U.S. Census Bureau’s International Program Center told us that data from foreign censuses in developed countries are generally reliable. However, the Census officials were not specifically knowledgeable about foreign estimates of U.S. citizens residing abroad. 
IRS classifies individual tax returns as being “international” if the return cites a foreign mailing address or includes a Form 2555 claiming the foreign earned income or housing expense exclusions. Returns reporting amounts in foreign denominations or attaching foreign earnings reports are also classified as international returns. However, these data are of uncertain reliability as an indicator of total returns filed by U.S. citizens residing abroad. IRS’ classification generally has not captured returns filed by individuals who lived abroad during the tax year but cited a domestic address on their return and did not claim the exclusions. IRS has also found that its computer system continued to classify some individuals as international filers even for tax years after they returned to the United States. The reliability of IRS’ data on returns filed from a particular country is further limited because IRS’ data do not track the filer’s country of residence in some cases. In addition, IRS’ data on returns filed include returns from permanent resident aliens of the United States who are living abroad but do not distinguish them from other returns. These individuals are not U.S. citizens and therefore would not be included in the State Department or foreign government estimates of the U.S. population abroad. Table 1 summarizes data available on U.S. citizens abroad and returns filed from abroad in tax year 1995 in total and for the seven countries in which State Department estimates indicated more than 100,000 U.S. citizens reside. The table illustrates the variations in available estimates of the U.S. population abroad and the lack of comparable data across countries. We note the number of tax returns filed from a particular country as “unknown” because a large percentage of the returns received from abroad are not differentiated by country in IRS’ database.
The above data, even if reliable, would not provide the number or proportion of actual nonfilers abroad because the number of individuals required to file is unknown. We explored whether the number of nonfilers abroad—those who are required to file but do not—might be roughly estimated by using the ratio of total individual returns filed to total U.S. population, about 0.45 in recent years, as a benchmark. In particular, a ratio of returns filed from abroad to U.S. population abroad that is much smaller than 0.45 might indicate proportionately more nonfilers in the population abroad than in the general U.S. population. However, available data on the U.S. population abroad and the number of returns they file are too uncertain to allow a reliable comparison with the general population. Such an analysis would also require data on how characteristics related to the filing requirement compare in the two populations, particularly the age and income distributions. We identified two other sources of information that, while not definitive or indicative of the overall extent of the problem, imply that nonfiling may be a problem in certain segments of the U.S. population abroad. IRS estimates that its Mideast compliance project, described in more detail later in this report, was largely responsible for a 51-percent increase in returns filed by U.S. citizens residing in the region. IRS does not know whether those results reflect that nonfiling was more or less prevalent among U.S. citizens residing in Mideast countries compared with other areas of the world. The region’s representativeness depends in part on how it compares with other parts of the world in terms of the number of U.S. citizens employed there by foreign corporations. Most of the nonfilers IRS identified in the Middle East worked for foreign companies, which do not participate in U.S. information reporting or tax withholding.
In general, IRS has found much higher rates of noncompliance among individuals not covered by these systems. IRS data on nonfilers identified through its information matching program, which we did not verify, indicate that nonfiling among those who have relatively high incomes and are covered by information reporting may be more common among U.S. citizens abroad than in the U.S. population generally. IRS relies on an automated system to select the potential nonfiler cases identified in its information matching program that may warrant subsequent enforcement action. IRS’ system identified 21,852 individuals classified as residing abroad who were potential nonfilers for tax year 1995 and had sufficient income reported on information returns or met other criteria that cause IRS to issue a delinquency notice. Using the same criteria, the system selected about 1.9 million individuals from the total U.S. population for the same year. Compared with the number of returns that were filed—about 935,000 returns classified as filed from abroad in 1995 versus 118 million filed from the general population—the number of potential nonfilers abroad who were selected to receive notices was about 40-percent larger, proportionately, than the number identified in the general U.S. population. We obtained passport application data to determine if they could be matched against IRS’ database of SSNs from filed tax returns to help estimate the number of U.S. citizens residing abroad who did not file tax returns. The data include an applicant’s date of birth, which might be useful in identifying adults who are more likely than children to meet the filing requirement. However, many of the recent passport records IRS received from the State Department did not include SSNs and so could not be readily matched against IRS’ database. As a result, we could not reliably estimate the number or proportion of passport applicants who did not file tax returns. 
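The proportional comparison in the matching-program figures above can be checked with the rounded numbers quoted in the text; the report’s roughly 40 percent figure presumably rests on unrounded data, so these rounded inputs give a somewhat higher result.

```python
# Tax year 1995 figures as quoted in the text (rounded).
notices_abroad = 21_852        # potential nonfilers abroad selected for notices
returns_abroad = 935_000       # returns classified as filed from abroad
notices_total = 1_900_000      # potential nonfilers selected, total U.S. population
returns_total = 118_000_000    # individual returns filed, total U.S. population

rate_abroad = notices_abroad / returns_abroad  # ~2.3 notices per 100 returns
rate_total = notices_total / returns_total     # ~1.6 notices per 100 returns
excess = rate_abroad / rate_total - 1
print(f"{excess:.0%}")  # roughly 40 to 50 percent with these rounded inputs
```

Either way, the per-return rate of delinquency notices abroad exceeds the rate for the general population by a wide margin, which is the point the comparison is making.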
We analyzed 303,000 passport records that listed foreign mailing addresses and were processed by the State Department in the last half of 1995 and throughout 1996. About 133,000, or 44 percent of these records, did not contain SSNs and could not be readily matched. For the about 170,000 records that did contain SSNs, the proportion of individuals not filing returns, as either primary filers or secondary filers on a joint return, did not differ dramatically from the comparable proportion for the general U.S. population. In particular, for tax year 1994, 41 percent of the applicants did not file compared with 37 percent not filing from the general population. For tax year 1995, 35 percent of the applicants did not file compared with 36 percent in the general population. However, the large number of applications without SSNs precludes reliable estimation of the percentage of the total population of passport applicants residing abroad who did not file tax returns. (Detailed results related to the passport data analysis are provided in app. III.) The revenue impact of nonfiling abroad cannot be estimated, primarily because the prevalence of nonfiling and the income levels of the nonfilers are unknown. The impact could be relatively small or substantial, depending on the assumptions used in the analysis. If it were assumed that the U.S. population abroad contains more children and low-income individuals than the general U.S. population, the potential number of nonfilers abroad and the resulting revenue impact may be small. Assuming that the foreign earned income and housing expense exclusions and foreign tax credit would generally eliminate much of a nonfiler’s tax liability would also tend to minimize the revenue impact. By contrast, assuming that the State Department’s estimate of the U.S.
population abroad is generally accurate and the population does not contain proportionately more children and low-income individuals could imply a potentially large number of nonfilers abroad. There could be a substantial revenue impact if these nonfilers have income characteristics similar to those who do file from abroad. In 1995, individuals filing from abroad, excluding military personnel and nonresident aliens, had an average income tax liability of about $6,700 despite available exclusions and credits. Assuming that IRS’ tax assessments against identified nonfilers represent the amounts owed by those not identified would also suggest a relatively large potential revenue impact. IRS assessed an average tax of $22,057 on 1,237 nonfilers residing abroad who were audited in fiscal years 1995 and 1996. It should be noted, however, that IRS generally focuses its enforcement efforts on nonfilers thought to have the highest incomes and largest unpaid tax liabilities. Further, IRS generally does not consider the effect of the foreign income exclusions or foreign tax credits in making the assessments. At the same time, the foreign earned income and housing expense exclusions, which could effectively lower overall tax liability, are not necessarily extended to certain nonfilers. IRS’ enforcement of the filing requirement abroad is impeded by the limited reach of U.S. law in foreign countries. In particular, IRS has no authority to require tax withholding or information reporting from foreign employers and little ability to enforce collection if a taxpayer’s assets have been transferred abroad. IRS’ enforcement abroad may be further hampered by its limited use of the information that is available, particularly the passport application records it receives from the State Department. Also, IRS’ filing instructions for individuals may lead some U.S. citizens residing abroad to erroneously conclude that they do not need to file tax returns.
Information reporting and tax withholding from employers and other income providers are the key tools available to IRS for identifying nonfilers and reducing the resulting lost revenue, but they have limited applicability to U.S. citizens residing abroad who are employed by foreign companies or derive investment income from foreign sources. IRS’ tax-gap estimates indicate that those covered by information reporting and tax withholding pay a far greater share of their true tax liabilities than those who are not subject to them. U.S. citizens residing abroad have generally not been subject to tax withholding on income earned from foreign employers or foreign investments, and IRS receives little third-party information on such income. U.S. citizens working abroad for U.S. employers are covered by withholding and information reporting, and IRS uses this information in its matching program to identify some nonfilers abroad. In recent years, IRS has routinely received information on the foreign source income of U.S. citizens only from 19 of the countries with which the United States has information exchange agreements or tax treaties. Even in those countries, the information is limited to whatever is collected under a foreign country’s own tax system. Most information received from foreign countries pertains to the investment income of individuals residing in the United States, while only 731 of about 302,000 foreign information documents processed for tax year 1993 pertained to the earned income of U.S. citizens employed abroad by foreign companies. IRS officials believe that foreign employers and financial institutions generally have not identified U.S. citizens who reside abroad or noted their citizenship on information returns. 
Additionally, IRS has had difficulty processing and matching foreign information returns due to computer system limitations and because most foreign returns do not include the taxpayer’s SSN or are received too late to be processed as part of IRS’ information matching program. IRS noted that it may receive some additional information on U.S. citizens abroad through Qualified Intermediary Agreements with foreign financial institutions beginning in tax year 2000. Qualified Intermediary Agreements, introduced by IRS regulations under IRC section 1441, generally relate to U.S. withholding by foreign financial institutions on U.S. source income paid to foreign persons, but IRS expects the agreements will also require the foreign institutions to report certain information on U.S. citizens. The mechanisms provided to IRS under U.S. law for collecting unpaid taxes, including liens, levies, and seizures, generally cannot be applied against assets that have been transferred to a foreign country. As a result, IRS generally cannot collect unpaid taxes from such assets, except in the five countries that have entered into mutual collection assistance agreements as part of tax treaties with the United States—Canada, France, Denmark, Sweden, and the Netherlands. Mutual collection assistance agreements generally provide for each country to use measures available within its own legal system to collect taxes owed to its partner in the agreement. The agreement with Canada was ratified in 1995, and the others were ratified between 1939 and 1948. According to IRS documentation on the program’s evolution, the 47-year hiatus between the last two agreements occurred because the Senate indicated in 1948 that it did not favor additional agreements of this type. 
IRC section 6039E was enacted in 1986 to provide IRS with data from passport applications processed by the State Department for use in identifying individuals residing abroad who do not file tax returns. The law required passport applicants to provide their SSNs, foreign country of residence, and other information to be prescribed by Treasury, and established a penalty of $500 for each failure to provide the required information. However, IRS has made little use of passport application data in identifying potential nonfilers abroad, and some application records are difficult to use because they lack SSNs, as noted previously. Also, the State Department does not capture the country of residence of some passport applicants who reside abroad, and IRS has not prescribed occupation data among the items it requires from passport applicants. Passport applications contain no income information for directly identifying nonfilers, but they do contain age and occupation data, which could help IRS identify individuals who are likely to have gross incomes above the filing thresholds. Passport data are included in IRS’ matching program, but have rarely been used to identify potential nonfilers abroad. The criteria IRS used in recent years to select potential nonfilers to be contacted emphasized the total amount of income reported on information returns. One low-priority criterion applied to mismatches where IRS received passport or green-card records, but no corresponding tax return. However, only 21 of 21,852 potential nonfilers abroad selected to receive delinquency notices in 1995 were selected based on that criterion. And most of the passport records IRS received from the State Department cited U.S. rather than foreign mailing addresses. Applications that cite foreign mailing addresses are not flagged or analyzed separately in IRS’ returns matching program. 
IRS officials said that in the future they plan to obtain passport data routinely only for those applicants who cite foreign mailing addresses. IRS expects that this will reduce the cost of obtaining the data and make it easier to use in identifying nonfilers abroad. IRS has not attempted to penalize passport applicants in recent years for failure to provide their SSNs. As previously noted, IRS has difficulty matching records that do not contain SSNs. IRS officials believe the penalty program was dropped in 1993 because IRS had difficulty determining the SSNs of applicants who did not furnish one on the application. At that time, IRS generally did not send inquiries or penalty notices for missing SSNs unless the individual’s SSN could be determined from another source. IRS officials said that it is administratively difficult to track penalty cases without taxpayers’ SSNs, but there is currently no rule that requires them to obtain the applicant’s SSN before inquiring about missing information. The officials said they are exploring ways of reinstating the penalty program, possibly by sending correspondence to the mailing address cited on the application without attempting to determine the applicant’s SSN from another source. Passport application forms include a statement noting that an SSN must be provided if the applicant has received one, subject to a $500 penalty. However, the State Department does not deny passports to applicants who do not provide an SSN, as it relies on other proofs of an applicant’s citizenship. Whether it could do so is unclear. Denying a passport to a U.S. citizen for failure to provide an SSN could raise a constitutional issue, based on our review of relevant court cases. In particular, the Supreme Court held that the right to travel is a fundamental liberty and government restrictions on it must conform to the due process provisions of the 5th amendment. 
IRS has not collected complete information on the country of residence and has not obtained occupation data on passport applicants residing abroad. The data IRS has received have been limited to the applicant’s name, mailing address, date of birth, and SSN if the applicant provided one. The applicant’s country of residence is currently not required on passport applications. According to State Department and IRS officials, country of residence can be obtained in some cases from mailing addresses on passport applications, primarily when a U.S. citizen residing abroad applies for a passport renewal, or when U.S. citizens born abroad apply for passports, although applicants are not required to cite a foreign address even in these cases. Passport application forms do not contain a field for capturing the country of residence of those applying for a passport in this country and intending to live or work abroad. Passport applications do contain a field for the applicant’s occupation, but IRS has not obtained this information routinely or prescribed that applicants provide it. According to State Department officials, the cost of capturing occupation data would include data transcription costs of about 6 cents per record and other costs to revise the computer programs used to store and retrieve the data. State Department officials also believe that the passport application form would need to be revised to capture the country of residence and to provide additional instructions to the applicant. The officials said that they have not estimated the cost of modifying the relevant computer programs or revising the application form. IRS officials noted that certain IRS computer programs would also need to be modified to process the additional data, and, based on a preliminary estimate, this could require the equivalent of about 2 staff years at the GS-12/13 level and $10,000 for related equipment and software upgrades. 
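Using the State Department’s cited rate of about 6 cents per record, the recurring transcription cost of one additional field can be sketched roughly. The annual record volume below is an assumption extrapolated from the roughly 303,000 foreign-addressed applications processed over the 18 months we analyzed; the sketch covers transcription only and excludes the unestimated programming and form-revision costs.

```python
# Rough annual transcription cost for capturing one additional field
# (e.g., occupation) on foreign-addressed passport applications.
# COST_PER_RECORD is the State Department's cited rate; the annual
# volume is an assumption derived from ~303,000 foreign-addressed
# records over 18 months. Programming and form-revision costs,
# which the agencies have not estimated, are excluded.
COST_PER_RECORD = 0.06
annual_foreign_records = 303_000 * 12 // 18  # ~202,000 records per year

annual_transcription_cost = annual_foreign_records * COST_PER_RECORD
print(f"~${annual_transcription_cost:,.0f} per year")  # ~$12,120 per year
```

Even under these assumptions, the recurring transcription cost is modest relative to the potential assessments discussed earlier; the one-time systems costs are the larger unknown.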
IRS proposed regulations on section 6039E in 1993 that would have required applicants to provide their country of residence, address within the country of residence, occupation, and other information. The Office of Chief Counsel is working to finalize the regulations in 1998. An official in IRS’ Office of Chief Counsel said that one reason the proposed regulations were not finalized earlier is that section 6039E already provides IRS with the authority to prescribe the information required from passport applicants without specifying the requirements in regulations. In the 1960s and 1970s, U.S. citizens residing abroad and applying for passports or registering at U.S. consular posts abroad were asked to complete an IRS Form 3966: Identification of U.S. Citizen Residing Abroad. U.S. citizens were asked to voluntarily provide their foreign mailing address, occupation, date of last filed tax return, and other identifying information. When they learned that completing the form was voluntary, many citizens declined to do so. For this reason, and because some complained that the form constituted an invasion of their privacy, IRS discontinued the form in 1979. IRS’ instructions for Form 1040 and related guidance may contribute to misinterpretation of the filing requirement among individuals who think they qualify for the foreign earned income or housing expense exclusions. The instructions state that only gross income that “is not exempt from tax” should be considered in determining whether the filing threshold is met. However, income qualifying for the foreign earned income or housing exclusions must be included in applying the threshold, as is clarified in Publication 54: Tax Guide for U.S. Citizens and Resident Aliens Abroad, even though the income is “exempt from tax” under section 911. IRS generally revises its instructions and publications annually to reflect statutory changes and to clarify potentially confusing language. 
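The distinction the instructions blur can be made concrete: under section 911, excludable foreign earned income reduces the tax ultimately owed but still counts as gross income for the must-file test. In the minimal sketch below, the threshold amount is an illustrative placeholder, not an actual IRS figure for any filing status or year.

```python
# Sketch of the filing-threshold test described above. Income qualifying
# for the section 911 foreign earned income exclusion reduces taxable
# income but still counts toward gross income when applying the filing
# threshold. FILING_THRESHOLD is an illustrative placeholder only.
FILING_THRESHOLD = 6_800

def must_file(foreign_earned_income, other_gross_income):
    # The exclusion is deliberately NOT subtracted here: for the filing
    # test, all gross income counts, including excludable amounts.
    gross_income = foreign_earned_income + other_gross_income
    return gross_income >= FILING_THRESHOLD

# A worker abroad earning $50,000, all potentially excludable under
# section 911, must still file a return even though the exclusion may
# leave no tax due.
print(must_file(foreign_earned_income=50_000, other_gross_income=0))  # True
```

A reader who followed the Form 1040 instructions literally might subtract the excludable income first and wrongly conclude no return is required; Publication 54 clarifies the correct test.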
IRS has initiated some actions in recent years to improve filing compliance abroad, but has not yet developed global information on the prevalence or impact of the problem or the countries where the problem may be particularly severe. In particular, IRS initiated a multiyear compliance project in 1991 aimed at U.S. citizens working in the Middle East. IRS believes that the project resulted in the recovery of a substantial amount of tax revenue, and is now attempting to gather foreign census and other demographic data that might reveal other concentrations of nonfilers abroad with tax liabilities. IRS officials cited several other recent or ongoing projects focused on compliance problems other than nonfiling among certain categories of U.S. citizens residing abroad, such as one on nonreporting of scholarship and grant income among those studying or teaching abroad and another on highly paid executives claiming tax deferrals on nonqualified foreign pension plans. IRS estimates that the Mideast project was largely responsible for a 51-percent increase in the number of returns filed from the region—from 13,686 in 1991 to 20,647 in 1995. IRS also estimated that the increased returns filed from Saudi Arabia from 1992 through 1995 resulted in a total revenue increase of about $76 million. The project was initiated late in 1991 after IRS noticed that many civilians who returned to the United States during Operation Desert Storm filed tax returns for the first time in years. Also, IRS believed that the potential increase in tax revenue would justify the compliance resources expended because these countries had no income tax. U.S. taxpayers in these countries therefore could not reduce their tax liabilities by claiming foreign tax credits. Revenue agents and other personnel from AC (International) traveled to the region to conduct informational seminars for U.S. 
individuals concerning their tax filing obligations and possible adverse consequences from not filing, such as losing eligibility for the foreign earned income and housing expense exclusions under Treasury Regulations section 1.911-7. The seminars were focused on companies employing a large number of U.S. citizens, which IRS identified through the financial news media and information obtained from the Department of State, the Department of Labor, and other sources. One foreign employer of about 5,000 U.S. citizens agreed to provide IRS with information on its U.S. employees’ income as requested on a case-by-case basis and also issued a letter to its U.S. employees outlining their need to file and pay U.S. taxes. Also as part of this project, IRS mailed delinquency letters to all potential nonfilers in selected locations, including a warning that they could lose their right to claim the foreign earned income and housing exclusions if they did not file voluntarily. IRS generally sends such delinquency notices only to potential nonfilers meeting certain selection criteria based on the amount of income reported on information returns and other factors. IRS did not know, at the time of our review, whether other geographical areas could offer compliance improvement opportunities, particularly for increased filing of required tax returns, similar to or greater than those discovered in its Mideast effort. Early in fiscal year 1997, IRS began a project to identify countries or regions where additional compliance projects similar to the Mideast project might be warranted. The project is attempting to obtain demographic data on the number, location, age stratification, and likely income levels of U.S. citizens residing abroad. 
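The 51-percent increase cited above for the Mideast project follows directly from the reported return counts:

```python
# Check of the filing-increase figure for the Mideast project:
# returns from the region rose from 13,686 (1991) to 20,647 (1995).
returns_1991 = 13_686
returns_1995 = 20_647

pct_increase = (returns_1995 - returns_1991) / returns_1991 * 100
print(f"{pct_increase:.0f}% increase")  # 51% increase
```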
IRS’ sources of information for the project include its own data on returns filed, population estimates from foreign governments, and data from the Social Security Administration and OPM on the number of Social Security beneficiaries and federal retirees residing abroad. IRS had obtained at least some foreign data from 10 countries as of December 1997, including some relatively detailed demographic information obtained directly from foreign governments. However, IRS had not obtained data from Canada, Mexico, the United Kingdom, Israel, Germany, Italy, or the Philippines—the countries where, in each case, more than 100,000 U.S. citizens resided in 1995, according to State Department estimates. IRS expects to obtain and analyze data for the countries accounting for about 80 percent of U.S. citizens abroad and to release a draft report on the results in the summer of 1998. IRS officials believe that the information will be complete and reliable enough to identify any countries where additional compliance efforts appear to be warranted. In the Health Insurance Portability and Accountability Act of 1996, Congress required Treasury to study and report on issues related to the income tax compliance of U.S. citizens and resident aliens residing abroad. In its report, Treasury discussed the current law regarding the taxation of U.S. citizens and permanent residents residing abroad and the difficulty of administering tax code provisions affecting expatriates—those who have relinquished their U.S. citizenship. The report included information on IRS’ initiatives to improve compliance among U.S. taxpayers abroad and some factors currently limiting these efforts. It also discussed the extent to which the Department of State and the Immigration and Naturalization Service collect information that could help IRS determine and improve compliance. 
Treasury suggested that the revenue impact of nonfiling abroad may be limited by the foreign earned income and housing expense exclusions and foreign tax credits. While available exclusions and credits would tend to reduce the revenue impact of nonfiling abroad, we note that the impact would not necessarily be rendered insignificant. Some nonfilers lose eligibility for the exclusions, and the average tax liability of those who did file from abroad was about $6,700 in 1995, despite available exclusions and credits. Also, the IRS studies that Treasury cited as evidence of limited impact involved a small number of taxpayers and cannot be used to estimate the impact of nonfiling abroad because of serious data limitations, as noted in our 1993 report. IRS’ ongoing demographic study is highlighted as an initiative that will allow IRS to identify the countries where certain compliance improvement strategies may be warranted. We could not assess the effectiveness of this initiative because it was not complete at the time we performed our work. The Treasury study cited several factors beyond IRS’ control as inhibiting its efforts to improve compliance levels in the U.S. population abroad. These included limitations on information reported from foreign sources and IRS’ authority to enforce collection in foreign countries, factors which are also noted in our report. Our report also cites IRS’ limited use of passport data and potentially unclear filing instructions as factors related to nonfiling abroad that are within IRS’ control. The Treasury report discussed the factors that it believes limit the usefulness of passport data, including limitations in the mailing address as a means of identifying and locating applicants residing abroad, and the large number of records received without SSNs. 
The report also suggests that attempting to penalize applicants who do not provide SSNs could entail more administrative cost than is warranted and notes that most applicants who do not provide SSNs appear to be under 20 years old. By contrast, we have recommended that IRS explore certain ways of obtaining better information from passport applicants and attempt to enforce the information requirements of section 6039E. We note that it is not necessary for IRS to obtain an applicant’s SSN from another source—a high cost factor, according to IRS—because inquiries can be sent to the mailing address cited on the passport application. And the applicant’s date of birth, included in the data IRS receives, might allow IRS to focus its efforts on adult applicants. Finally, while most of the applicants we analyzed who did not provide SSNs were under age 20, a significant percentage were adults. In particular, 24 percent were at least 30 years old. And, the age distribution of the applicants we analyzed is not a reliable indicator of the age distribution among all applicants residing abroad because IRS’ information on applicants who reside abroad is incomplete, as noted above. Due to this limitation, our analysis excluded U.S. citizens who applied for their passports in the United States before moving abroad, but included passports issued to children who were born abroad to U.S. citizens. The Treasury report did not recommend any additional IRS actions to improve tax compliance abroad, beyond IRS’ ongoing demographic project and planned follow-up. Treasury noted that State Department data on U.S. citizens registered at U.S. consular posts may be of some usefulness to IRS, although the Privacy Act could restrict IRS from obtaining them. We have not recommended that IRS obtain registration data because State Department officials believe that many U.S. citizens residing abroad do not register, and those who do register may remain on file even after they have left the country. 
The report also noted that modifying U.S. laws that define when U.S. citizenship is lost for tax purposes—so that the loss does not occur until the individual notifies the State Department—could close an existing loophole. The loophole might allow some individuals to avoid U.S. taxes by claiming a retroactive loss of U.S. citizenship. The extent and impact of nonfiling abroad remain largely unknown, due to uncertainties in the data we identified on the U.S. population abroad and returns filed from abroad. However, some evidence suggests that nonfiling may be relatively prevalent in some segments of the U.S. population abroad. And the revenue impact, while unknown, could be significant even though it would be reduced by available exclusions and credits. IRS’ ability to identify and collect taxes from nonfilers residing abroad is restricted by the limited reach of U.S. law in foreign countries, particularly U.S. laws on tax withholding, information reporting, and IRS’ authority to collect taxes through liens, levies, and seizures. However, IRS has not fully explored the usefulness of passport application data as a means of identifying potential nonfilers abroad and gauging the extent of the problem. Also, some of IRS’ filing instructions may confuse some taxpayers and cause them to erroneously believe they are not required to file. The usefulness of passport data in identifying nonfilers abroad has been limited because IRS has not (1) enforced the requirement for applicants to provide their SSNs and other information and (2) obtained data on the applicant’s occupation or, in some cases, country of residence. While passport applications contain no income information, the occupation and age data could help identify individuals residing abroad who are more likely to have income above the filing thresholds, provided IRS could reliably distinguish applicants residing in foreign countries from those who are merely tourists. 
The cost of obtaining additional data elements on occupation and country of residence would be offset to some degree by savings from the reduced volume of data processed if IRS carries out its plan to restrict the data to applicants residing abroad and exclude tourists who now account for the bulk of the data IRS receives. IRS had difficulty enforcing the requirement for applicants to provide SSNs and could find it difficult to enforce requirements for additional information on the applicant’s occupation and country of residence. However, IRS said some of the difficulty in enforcing the SSN requirement, before abandoning such efforts, stemmed from its self-imposed constraint of not sending inquiries to applicants unless their SSN could be determined from other sources. Another factor that could contribute to nonfiling abroad is the ambiguity in IRS’ filing instructions for Forms 1040 and related guidance, such as Publication 17. The current language could be misinterpreted to mean that income qualifying for the foreign earned income or housing expense exclusions does not need to be considered in determining the filing requirement. IRS has undertaken an initiative—the Mideast Project—to improve filing compliance among U.S. citizens residing in one region abroad and is now attempting to identify other geographical areas where such efforts may be beneficial. As of December 1997, IRS had obtained foreign data from 10 countries, but these did not include the 7 countries where the State Department estimated that the largest U.S. populations reside. IRS officials expect to obtain data on about 80 percent of the U.S. population abroad and release a draft report on their results in the summer of 1998. IRS has not analyzed passport application data to help identify countries where nonfiling among U.S. citizens may be particularly severe, and missing SSNs currently limit the usefulness of the data for this purpose. 
While our review was under way, IRS began efforts to make greater use of passport data from individuals residing abroad and is exploring ways of reinstating a program to penalize applicants who do not provide their SSNs. In its May 4, 1998, report, Treasury suggested that the revenue impact of nonfiling abroad may be limited by the foreign earned income and housing expense exclusions and foreign tax credits. We note that, while the revenue impact is unknown, it is not necessarily rendered insignificant by available exclusions and credits. The report did not recommend any IRS actions for improving tax compliance abroad, but it noted that IRS’ ongoing demographic project may identify countries where additional compliance efforts are warranted. The report also discussed several factors limiting the usefulness of passport application data. To obtain better data on the filing compliance of the U.S. population residing abroad and to promote their understanding of their filing requirements, the Commissioner of Internal Revenue should ensure that assesses the usefulness of country of residence and occupation data, in addition to data IRS currently receives from passport applicants, as a means of identifying potential nonfilers abroad and supplementing IRS’ other sources of demographic data on U.S. citizens abroad. The assessment might include reviewing a limited random sample of currently available information. estimates the cost of obtaining the additional data routinely for passport applicants residing abroad, including those who apply in the United States. If the estimated costs appear to be justified, IRS should (1) prescribe that passport applicants provide the additional items and (2) routinely obtain and analyze the additional data elements. undertakes additional efforts to enforce the information requirements of IRC section 6039E, including the requirement for applicants to provide their SSNs. 
One potential effort would be to contact a random sample of adult applicants who did not provide an SSN, using the mailing address provided on their passport application. revises the instructions for Form 1040 and related guidance, such as Publication 17, to clarify that income that qualifies for foreign earned income exclusions must be considered in determining whether one’s gross income exceeds the filing threshold. We requested comments on a draft of this report from the Commissioner of Internal Revenue, the Secretary of the Treasury, and the Secretary of State, or their designated representatives. In an April 1, 1998, meeting, responsible Treasury and IRS officials, including IRS’ Deputy Assistant Commissioner (International), provided oral comments and suggested clarifications, which we have incorporated where appropriate. IRS indicated that it generally agreed with the draft report and two of its four recommendations—on estimating the cost of obtaining additional types of passport data and revising relevant filing instructions—but questioned the cost efficiency of implementing two of the recommendations. IRS interpreted our recommendation on assessing the usefulness of certain additional passport application data as implying that it pay for and routinely obtain the additional data before knowing if the associated costs are justified. We revised the recommendation to reflect that the assessment could be based on a sample of data currently available to IRS. IRS also interpreted our recommendation on attempting to enforce the information requirements of IRC section 6039E as implying that it launch a full-scale enforcement program without first testing the program’s cost and feasibility. We revised the recommendation to specify that the effort could be limited to a random sample of applicants who did not provide SSNs. 
We believe that such a test would constitute additional effort to enforce the requirements as suggested in our recommendation, provided that IRS evaluates the test and continues or modifies the approach as the results warrant. The State Department provided written comments dated April 6, 1998, that suggested clarifications and additional information, which we have incorporated in this report where appropriate. In particular, the State Department noted that providing the additional passport information suggested in our report would not prove burdensome, but the Department would be concerned if IRS sought to require passport applicants to answer extensive questions on their income and its sources. The State Department also commented that the draft seemed to imply that a statutory provision denying a passport to an applicant who failed to provide an SSN would be successfully challenged on constitutional grounds. Our intent was only to note that such a policy would raise a significant constitutional issue, and we modified the wording in this report to avoid any unintended implication as to how a legal challenge would be decided. As agreed with your staff, unless you announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Ranking Minority Member of the House Ways and Means Committee; the Chairman and Ranking Minority Member of the Subcommittee on Oversight, Committee on Ways and Means; various other congressional committees; the Secretary of the Treasury; the Commissioner of Internal Revenue; and other interested parties. We also will make copies available to others upon request. Please contact me at (202) 512-9110 if you or your staff have any questions. The major contributors to this report are listed in appendix IV. 
In general, the foreign earned income exclusion allows taxpayers meeting specific foreign residency requirements to exclude up to $70,000 of their earned income, as of tax year 1997. The excludable amount is to be increased incrementally to $80,000 by 2002 per modifications to Internal Revenue Code (IRC) section 911 enacted in 1997. Excludable income is generally limited to amounts earned for services performed abroad, including salaries and wages (except wages from the U.S. government), and does not include income derived from capital, such as interest, dividends, capital gains, or pension and IRA distributions. The foreign housing exclusion generally allows taxpayers meeting the residency requirements to exclude a portion of their housing expenses if they are employed abroad. Income qualifying for the foreign earned income exclusion is reduced by the amount of the housing exclusion. The foreign income tax credit is available to taxpayers who owe taxes to foreign governments on their foreign source income. To claim the credit, taxpayers must file a Form 1116, which provides for separate calculation of the credit amount for each of eight different income categories. Also, P.L. 104-191, enacted in August 1996, included modifications to the tax treatment of expatriates and a requirement for the Treasury Department to report within 90 days on the income tax compliance of U.S. taxpayers residing abroad. The legislative history indicates that the Treasury report was mandated because of past difficulties in determining when a U.S. citizen had committed an expatriating act with a tax avoidance purpose and thus must continue to pay U.S. taxes on their worldwide income. We obtained data on U.S. taxpayers residing abroad from the State Department and from foreign census or immigration reports collected by the U.N. Demographic Statistics Section; the International Programs Center Library of the U.S. 
Census Bureau; and Eurostat, a statistical organization of the European Union. We contacted officials at 21 U.S. consulates and embassies—those reporting more than 40,000 U.S. citizens in their jurisdictions—regarding the information used in developing the State Department’s estimates, and received written responses from 18 of the 21. We discussed the reliability of foreign government data with IRS and U.S. Census Bureau officials and cross-checked some of the data against estimates collected by Eurostat and against U.S. data on the number of Social Security beneficiaries and federal retirees residing in a given foreign country. We found that the reliability of both the State Department and foreign government estimates is uncertain, as discussed in our findings. We obtained IRS data on returns filed from abroad for tax year 1995. IRS classifies returns as international if filers cite a foreign mailing address, attach a Form 2555 claiming the foreign earned income or housing exclusions, or provide other indications of a foreign residence, such as by reporting their income in foreign currencies. We discussed the data’s reliability with IRS officials and found that its reliability is uncertain, for the reasons noted in our letter. We also analyzed data on the number of potential nonfilers identified abroad through IRS’ Information Matching Program in 1995 relative to the number of returns that IRS classified as being filed from abroad in 1995. We compared that proportion with the same proportion calculated for the general U.S. population in 1995. This approach was limited by the uncertainty of IRS’ data on returns filed from abroad and the lack of quantified IRS data on the number of potential nonfilers who were nonresident aliens. 
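The exclusion rules summarized earlier (a capped foreign earned income exclusion, reduced by the amount of the housing exclusion) lend themselves to a short worked sketch. This is only an illustration: the function name and dollar figures are hypothetical, and the actual Form 2555 computation involves additional rules not shown here.

```python
def total_exclusion(foreign_earned_income, housing_exclusion, cap=70_000):
    """Illustrative sketch, not the actual Form 2555 computation.

    Per the rules described in this report for tax year 1997: income
    qualifying for the foreign earned income exclusion is first reduced
    by the housing exclusion, and the earned income exclusion is then
    capped at $70,000.
    """
    qualifying = max(foreign_earned_income - housing_exclusion, 0)
    earned_income_exclusion = min(qualifying, cap)
    # Total amount excluded is the capped earned income exclusion
    # plus the housing exclusion itself.
    return earned_income_exclusion + housing_exclusion

# Hypothetical example: $95,000 earned abroad with a $12,000 housing
# exclusion. The earned income exclusion is min(95,000 - 12,000, 70,000)
# = $70,000, so $82,000 is excluded in total.
```

For later tax years, the cap argument would rise incrementally toward $80,000, per the 1997 IRC section 911 modifications noted above.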
We included returns from nonresident aliens in the number of returns filed from abroad for 1995, even though IRS officials believe that nonresident aliens account for relatively few potential nonfiler cases identified through information matching. Excluding nonresident aliens in the returns filed data would have made the proportion for nonfilers abroad appear even larger relative to the proportion of nonfilers in the general U.S. population.

We also attempted to assess the prevalence of nonfiling abroad by matching selected passport application records against IRS' database of SSNs from filed tax returns. In particular, we asked IRS to extract foreign-addressed passport records from all passport data it had retained on magnetic media—which included applications processed by the State Department and forwarded to IRS in the last half of 1995 and in 1996. We asked IRS to match the SSNs in these passport records against its database of SSNs from returns filed in tax years 1994 to 1996 to determine the proportion of applicants not filing tax returns each year, by age category. However, 44 percent of the application records did not include SSNs, and so they could not be readily matched. This rendered the results inconclusive, as noted in our findings, because the nonfiling rate found in the cases with SSNs cannot be projected to the missing SSN cases. Also, the match against tax year 1996 returns did not provide useful data because it did not include some unknown number of returns filed late under a 4-month filing extension available to U.S. individuals residing abroad.

This appendix presents the detailed data related to our analysis of the passport application data IRS receives from the State Department. Table III.1 shows the percent of individuals not filing income tax returns among the passport applicants we analyzed who provided SSNs, compared with the percent not filing from the general U.S. population.
Those not filing are not necessarily required to file—that is, those with gross income below the filing thresholds and, in some circumstances, children whose income exceeds the thresholds but is reported on their parents' returns are not required to file. Table III.2 shows the age stratification of the general U.S. population compared with passport applicants with and without SSNs, as of 1995.

Table III.2: Age Distribution of Passport Applicants Compared With General U.S. Population (ages of passport applicants are as of the end of 1996)

Major contributors to this report:
Joseph Jozefczyk, Assistant Director, Tax Policy and Administration Issues
Robert Floren, Evaluator-in-Charge
Pamela Pavord, Evaluator
Elizabeth W. Scullin, Communications Analyst
Don Phillips, Computer Specialist
Shirley Jones, Senior Attorney

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on the tax compliance of U.S. citizens residing in foreign countries, focusing on: (1) whether it is possible, given available data, to estimate the prevalence and revenue impact of nonfiling among U.S. citizens residing abroad; (2) factors that may limit the Internal Revenue Service's (IRS) enforcement of the filing requirement or otherwise contribute to nonfiling abroad; (3) IRS' recent initiatives to improve filing compliance in this population; and (4) the Department of the Treasury's study on the income tax compliance of U.S. taxpayers residing abroad. GAO noted that: (1) IRS has not estimated the overall prevalence of nonfiling abroad or the resulting loss of tax revenue, and the data GAO identified in its review were inadequate to support reliable quantified estimates; (2) data on the number of U.S. taxpayers residing abroad and the number of returns they file are of uncertain reliability, and the amount of taxes that nonfilers would owe if they were to file is unknown; (3) one recent IRS initiative, however, focused on certain Mideast countries and identified enough nonfilers and additional tax revenue that IRS believes there may be benefits to looking for concentrations of nonfilers in other foreign countries; (4) GAO was able to identify several factors that may limit IRS' enforcement of the filing requirement or otherwise contribute to nonfiling abroad; (5) some of these factors are beyond IRS' control; (6) the income of U.S. citizens residing abroad is generally not subject to U.S.
tax withholding or information reporting if it is derived from foreign employers or foreign financial investments; (7) IRS data show that tax withholding and information reporting by employers or other income providers resulted in much higher rates of tax compliance than when neither system is in place; (8) IRS generally cannot collect unpaid taxes from assets that have been transferred to a foreign country; (9) the enforcement actions that IRS uses in the United States have no legal standing in most foreign countries; (10) although IRS obtains passport data from the Department of State, it has made little use of these data, and in recent years IRS has not attempted to penalize the large number of applicants who fail to furnish a social security number (SSN), as the law provides; (11) IRS has no systematic way of capturing a passport applicant's country of residence and occupation, which could provide demographic data on foreign concentrations of U.S. citizens and help IRS distinguish them from tourists; (12) the instructions for filing form 1040 are potentially misleading and may cause some taxpayers residing abroad to erroneously conclude that they have no obligation to file; (13) IRS' recent initiatives concerning nonfiling abroad include a special project in the Middle East that was initiated as a result of events related to Operation Desert Storm and a data-gathering effort to identify other potential concentrations of nonfilers residing abroad; and (14) in fiscal year 1997, IRS began to gather foreign census and other demographic information on U.S. citizens residing abroad to identify other countries where similar compliance efforts may be beneficial.
Between January 1999 and May 2000, the U.N. Security Council adopted 10 resolutions authorizing new peacekeeping operations or significantly expanding existing ones, including 8 resolutions for operations in East Timor, Sierra Leone, and the Democratic Republic of the Congo. Table 1 lists the eight decisions and, for each, the dates of the executive branch decision to support the operation, the letter informing the Congress of that decision, and the U.N. Security Council vote. The U.N. and multilateral operations in these three locations were undertaken to help resolve long-standing internal conflicts. The estimated cost of the ongoing U.N. operations in these locations represented over half of the $2.7 billion estimated cost of U.N. peacekeeping operations in 2001. Although peace agreements or cease-fires had been reached or were imminent in these three locations, violence continued and the political accords appeared tenuous. The following paragraphs briefly describe the situations in these three locations to provide some context for the eight executive branch decisions. Appendix III provides additional information about key events related to these decisions.

Since 1975, when Indonesia forcibly incorporated East Timor, the United States had supported some form of self-determination for the former Portuguese colony with a population of about 800,000 people. In 1983, Portugal and Indonesia began regular talks aimed at resolving East Timor’s status; in June 1998, Indonesia agreed to enter U.N.-mediated talks about autonomy for East Timor. In January 1999, Indonesia’s President announced his support for offering the people of East Timor a choice between autonomy within Indonesia or independence. On May 5, 1999, Indonesia and Portugal concluded a general agreement that, among other things, called for the establishment of a U.N.
operation to conduct a free and fair vote for the people of East Timor to choose the territory’s future status—either autonomy within Indonesia or independence. Despite this agreement, pro-autonomy factions, supported by local militia and the Indonesian military, attempted to use violence to intimidate pro-independence factions and influence the outcome of the vote. There also was uncertainty about the Indonesian security forces’ willingness to allow a free and fair vote.

The conflict in Sierra Leone began in 1991, when rebel forces (the Revolutionary United Front) began attacking government forces near the Liberian border. Sierra Leone’s army at first tried to defend the government with the support of military forces provided by the Economic Community of West African States, but the army itself overthrew the government in 1992. Despite the change of power, the rebel forces continued their attacks. The army relinquished power in 1996 after parliamentary and presidential elections. Rebel forces, however, did not participate in the elections and did not recognize the results. A November 1996 peace agreement between the government and the rebels (the Abidjan Accord) was derailed by another military coup d’état in May 1997. This time the army joined forces with the rebels to form a ruling junta and the elected government was forced into exile in Guinea. In February 1998, the West African military forces launched an attack that led to the collapse of the junta and the restoration of the elected government. In July 1998, the U.N. Security Council established the U.N. Observer Mission in Sierra Leone to monitor the situation and help the combatants reach an overall peace agreement. In July 1999, the combatants signed the Lomé Peace Agreement, under which U.N. and West African peacekeeping forces would share in helping to provide security and disarm, demobilize, and reintegrate the combatants.
During the 8 years of fighting, an estimated 500,000 Sierra Leone citizens were forced to flee to neighboring Guinea, Liberia, Gambia, and other locations. Of the estimated 6 million people remaining in Sierra Leone, 2.6 million could not be reached by humanitarian agencies and 370,000 were internally displaced. These populations suffered severe human rights abuses, including mutilations, amputations, summary executions, torture, and sexual abuse.

The Congo conflict grew out of the instability that followed the Rwandan crisis of 1994 and eventually involved the armed forces of the Democratic Republic of the Congo and five regional states, several Congolese rebel groups, and groups responsible for the Rwandan genocide. According to a U.N. report, this conflict was “characterized by appalling, widespread and systematic human rights violations, including mass killings, ethnic cleansing, rape and destruction of property” and its effects had “spread beyond the subregion to afflict the continent of Africa as a whole.” In August 1998, the Southern Africa Development Community and the Organization for African Unity announced the start of a regional initiative to negotiate an end to the Congo conflict. On July 10, 1999, six states signed the Lusaka Cease-fire Agreement and 5 days later, on July 15, the U.N. Secretary General proposed establishing a U.N. operation to help monitor implementation of the cease-fire agreement.

Directive 25 stated that U.S. and U.N. involvement in peacekeeping must be both selective and effective. This principle was underscored by the 1996 U.S.
National Security Strategy Report, which stated that “the United States must make highly disciplined choices about when and under what circumstances to support” peacekeeping operations and directed officials to “undertake a rigorous assessment of requirements before voting to support operations.” To this end, Directive 25 required executive branch decision-makers to consider specific factors in deciding whether to support a proposed operation. These factors included questions about a proposed operation’s (1) political context, such as whether it advanced U.S. interests and the consequences of inaction were judged unacceptable, and (2) feasibility, such as whether it had appropriate forces, financing, and mandate to accomplish its mission and its anticipated duration was tied to clear objectives and realistic exit criteria. Directive 25 established these factors to help executive branch officials identify proposed operations’ basic political, military, and resource shortfalls but did not require that all or any particular factors be present in a proposed operation before it was approved. The directive stated that decisions would be based on the cumulative weight of the factors, with no single factor being an absolute determinant. However, the directive also stated that the United States generally would support only well-defined peace operations linked to concrete political solutions. Executive branch officials extensively considered all Directive 25 factors before deciding to support the authorization or expansion of the U.N. operations. Executive branch assessments of proposed operations identified concerns about some directive factors and shortfalls in others. Executive branch officials decided to support the operations because most factors were present and, in their judgment, U.S. interests were advanced by supporting regional allies, creating or maintaining regional stability, or addressing humanitarian disasters. 
Following interagency deliberations, senior executive branch officials directed State and Defense officials to strengthen the proposed operations before the U.N. Security Council voted or to develop plans to address the risks that the shortfalls posed. For the eight decisions, we found that the executive branch used a systematic process that resulted in a full consideration of all Directive 25 factors. The process for making these decisions involved the consideration of Directive 25 at the following three levels:

Individual agencies. The State and Defense Departments and the National Security Council were the primary agencies that assessed the proposed operations. Individual agency deliberations included relevant regional, functional, legal, and legislative affairs experts.

Peacekeeping Core Group. This interagency working group, chaired by the National Security Council’s Senior Director for Multilateral and Humanitarian Affairs, was composed of assistant and deputy assistant secretaries of State, Defense, and other U.S. departments and agencies. The core group brought together the individual agency assessments and developed consensus recommendations for senior decision-makers for each of the eight decisions.

Deputies Committee. This interagency decision-making group, chaired by the Deputy Adviser to the President for National Security Affairs or his designee, was typically composed of the undersecretaries of State and Defense and similar officials from other agencies. For these eight decisions, the Deputies Committee made the final decision to vote for the proposed operation.

Interactions between these three levels were iterative and supported by extensive intelligence reporting. Figure 1 illustrates the process used to make these eight decisions. For the eight decisions we reviewed, we found that executive officials prepared and reviewed hundreds of records considering all applicable Directive 25 factors before deciding to support the proposed operations.
These records included decision memorandums, situation assessments, concept papers, and summaries of interagency discussions. For five of the eight decisions, the State Department prepared comprehensive Directive 25 analyses that candidly assessed the proposed operations, including identifying basic political, military, and resource shortfalls. Analysis of these records showed that executive branch officials considered all applicable Directive 25 factors before making their decisions. Before the late May 1999 decision to support the U.N. Mission in East Timor, for example, executive branch officials prepared 19 assessments of the proposed operation, including a comprehensive Directive 25 analysis. These assessments considered all applicable Directive 25 factors, for example, whether (1) there was support among U.N. member states for U.N. action in Indonesia and (2) the parties consented to the deployment of a U.N. force. Before the August 1999 decision to support the expansion of the U.N. Observer Mission in Sierra Leone, executive branch officials prepared 16 assessments of the proposed operation, including a comprehensive Directive 25 analysis. These assessments considered all applicable Directive 25 factors, for example, whether the expanded operation had adequate financing and forces to carry out its mission. In making all eight decisions, executive branch officials also considered assessments provided by other governments, the U.N. Secretariat, diplomatic envoys and negotiators, regional organizations, and others operating in the areas of concern. For the eight decisions, the Peacekeeping Core Group met several times specifically to consider applicable Directive 25 factors for the proposed operations and develop options and recommendations for senior decision-makers. The Deputies Committee met less frequently to consider and act on the options and recommendations developed by the core group.
For example, our analysis of executive branch records showed that the core group met nine times between March and late May 1999 specifically to discuss the proposed U.N. Mission in East Timor. During this same period, the Deputies Committee met three times to consider and act on the core group’s recommendations. Similarly, between February and early August 1999, our analysis showed that the core group met eight times to discuss the proposed expansion of the U.N. Observer Mission in Sierra Leone. During this same period, the Deputies Committee met twice to consider and act on the core group’s recommendations. According to executive branch officials, these meetings were supplemented by frequent informal contacts between members of the core group and Deputies Committee. For example, core group members participated in weekly conference calls. At the time the eight decisions were made, executive branch assessments indicated that the proposed operations advanced U.S. interests. In defining U.S. interests, executive branch officials used the definitions in the annual U.S. national security strategy reports. These reports defined U.S. interests as (1) vital—those interests that affect the safety and survival of the United States; (2) important—those interests that affect U.S. national well-being, including commitments to allies; and (3) humanitarian and other—those interests related to U.S. values. Executive branch officials judged that the proposed operations in East Timor, Sierra Leone, and the Democratic Republic of the Congo advanced important and humanitarian and other U.S. interests. For all operations, the consequences of inaction also were judged unacceptable. Other than the definitions in the annual national security strategy reports, we could find no criteria to guide executive branch officials in making judgments about these two Directive 25 factors. 
At the time the eight decisions were made, executive branch assessments identified at least one Directive 25 shortfall in all of the proposed operations and several shortfalls in six of them. Most of these shortfalls were related to the proposed operations’ operational feasibility, such as whether they had adequate means for carrying out their missions and their duration was tied to clear objectives and realistic exit criteria. Executive branch assessments also identified concerns about some factors. On the basis of our analysis of executive branch records, figure 2 summarizes executive branch assessments of the Directive 25 factors for the proposed operations at the time of the eight Deputies Committee decisions. The following sections briefly describe the Directive 25 shortfalls identified in executive branch assessments of the proposed operations. As shown in figure 2, executive branch assessments of the proposed U.N. Mission in East Timor identified four Directive 25 shortfalls. First, assessments questioned whether the preconditions for a peacekeeping operation (a cease-fire in place and the parties consent to the deployment of a U.N. force) existed in East Timor. Violence against pro-independence factions continued and, despite the Indonesian government’s announced consent to the operation, Indonesian security forces appeared to be supporting this violence. Second, assessments questioned whether, in the face of this continuing violence, there was a clear understanding of where the proposed operation would fit between peacekeeping and peace enforcement. Third, assessments questioned whether the proposed operation’s mandate was appropriate. Despite concern about violence, the proposed operation did not include peacekeeping troops primarily because Indonesia objected to the deployment of such forces. Additionally, the role and objectives of the civilian police component were unclear given the scope of the violence. 
Fourth, assessments questioned whether the proposed operation’s exit criteria were realistic because there was a gap of several months between the end of the operation and a proposed follow-on U.N. operation. On May 27, 1999, the Deputies Committee decided the United States would vote in the U.N. Security Council to authorize the proposed peacekeeping operation. Factors considered in this decision included U.S. interests in aiding Australia and ending the violence in East Timor, regional support for U.N. action, and the judgment that U.N. action was East Timor’s best opportunity for democratic development. Executive branch assessments of the proposed International Force in East Timor identified one Directive 25 shortfall. As shown in figure 2, assessments questioned whether the operation’s duration was linked to realistic criteria for ending the operation. The operation’s general exit strategy was to restore peace and security to East Timor and then transfer responsibility for maintaining peace and security to the proposed U.N. Transitional Administration in East Timor. However, at the time executive branch officials made their decision, the specific timing and criteria for this transfer were uncertain. The Deputies Committee decided that the United States would vote in the U.N. Security Council to authorize the proposed multilateral peace enforcement operation. As before, factors considered in this decision included U.S. interests in aiding Australia and ending the violence in East Timor. Led by Australia, the multinational force began deploying in East Timor on September 20, 1999. Executive branch assessments of the proposed U.N. Transitional Administration in East Timor identified one Directive 25 shortfall. As shown in figure 2, assessments questioned whether the proposed operation had adequate means—specifically, forces and financing—to carry out its extensive nation-building tasks. 
In particular, assessments questioned whether the United Nations could recruit sufficient troops and international civilians to staff the operation. Although not identifying a clear shortfall in international support for U.N. action in East Timor, several assessments noted some member states’ concerns about whether the proposed operation would violate Indonesia’s sovereignty. On October 8, 1999, the Deputies Committee decided that the United States would vote for the proposed peace enforcement and nation-building operation. Factors considered in this decision included U.S. interests in aiding important regional allies and the judgment that a U.N. operation was the best choice for administering East Timor during its transition to independence. As shown in figure 2, executive branch assessments of the proposed expansion of the U.N. Observer Mission in Sierra Leone identified three shortfalls. First, assessments questioned whether the preconditions for a peacekeeping operation existed in Sierra Leone. Fighting continued in some areas of the country, and there was concern about whether the rebels and Liberia truly consented to the deployment of an expanded U.N. force. Second, assessments questioned whether the proposed operation had adequate means to carry out its mission in the face of potential rebel resistance. Third, assessments questioned whether the proposed operation’s duration was linked to realistic criteria for ending it. Concerns included whether the proposed milestones for completing some tasks were realistic and whether rebel forces would disarm and relinquish control of diamond-producing areas, as called for in the Lomé Peace Agreement. On August 5, 1999, the Deputies Committee decided to support the proposed expansion of the peacekeeping operation. Factors considered in this decision included U.S. interests in resolving the conflict in Sierra Leone, maintaining regional stability, and ending the violence against innocent civilians.
Executive branch assessments of the proposed U.N. Mission in Sierra Leone identified four shortfalls, as shown in figure 2. First, assessments again questioned whether the preconditions for a peacekeeping operation existed in Sierra Leone. Fighting continued in some areas of the country, and there was continuing concern about whether the rebels and Liberia truly consented to the deployment of an expanded U.N. force. Second, assessments questioned whether, in the face of continuing violence, there was a clear understanding of where the proposed operation would fit between peacekeeping and peace enforcement. Third, assessments questioned whether the proposed operation had adequate means to carry out its mission—identifying shortfalls in its forces, financing, and mandate. One concern was whether some proposed troop contingents had adequate training and equipment to deal effectively with rebel resistance. Fourth, assessments questioned whether the proposed operation’s duration was linked to clear objectives and realistic criteria for ending it. One concern was whether rebel forces would disarm and relinquish control of diamond-producing areas. On October 8, 1999, the Deputies Committee decided that the United States would vote to authorize this new peacekeeping operation. Factors considered in this decision included the unacceptable humanitarian consequences of inaction, particularly continued human rights abuses by rebel forces, and support for U.N. action by U.N. Security Council members and important regional states, including Nigeria, Guinea, and Ghana. As shown in figure 2, executive branch assessments of the proposed expansion of the U.N. Mission in Sierra Leone identified three shortfalls. First, assessments questioned whether there was a clear understanding of where the proposed operation would fit between peacekeeping and peace enforcement.
One concern was whether a peace enforcement operation could maintain the neutrality and consent needed to carry out some peacekeeping tasks. Second, assessments questioned whether the proposed operation had adequate means to carry out its mission, expressing concern about whether its forces, financing, and mandate were appropriate. One concern was whether some proposed troop contingents—which were poorly trained and equipped—could effectively carry out peace enforcement tasks. Third, assessments questioned whether the proposed operation’s duration was linked to clear objectives and realistic criteria for ending it. A continuing concern was whether the rebels would disarm and relinquish control of diamond-producing areas. On January 24, 2000, the Deputies Committee decided that the United States would vote to expand the U.N. Mission in Sierra Leone and authorize it to use force to accomplish some tasks. Factors considered in this decision included U.S. interests in preventing this conflict from spreading to neighboring states, the unacceptable humanitarian consequences of inaction, and international support for U.N. action. Executive branch assessments of the proposed U.N. Organization Mission in the Democratic Republic of the Congo identified four shortfalls, as shown in figure 2. First, assessments questioned whether the preconditions for a peacekeeping operation existed in the Democratic Republic of the Congo. Fighting continued in some areas of the country, and it was uncertain whether the warring parties consented to the deployment of a U.N. force. Second, assessments questioned whether, in the face of continuing violence, there was a clear understanding of where the proposed operation fit between peacekeeping and peace enforcement. Third, assessments questioned whether the proposed operation had adequate means—appropriate forces, financing, and mandate—to carry out its mission. Concerns included whether U.N. 
forces would have adequate protection and could move about the vast country effectively. Fourth, assessments questioned whether the proposed operation’s duration was linked to clear objectives and realistic criteria for ending it. One concern was the potential for the United Nations to become more deeply involved in the conflict. In recognition of such shortfalls, the United States rejected proposals to deploy a large (up to 30,000 troops) U.N. peacekeeping force in the Democratic Republic of the Congo. Instead, the Deputies Committee decided on July 23, 1999, that the United States would vote to support a small monitoring operation. Factors considered in this decision included U.S. interests in resolving the conflict in the Democratic Republic of the Congo, which involved several regional states; maintaining regional stability; and preventing the resurgence of genocide and mass killings in Central Africa. As shown in figure 2, executive branch assessments of the proposed expansion (phase II) of the U.N. Organization Mission in the Democratic Republic of the Congo identified three Directive 25 shortfalls. These assessments reflected the same basic concerns identified in executive branch assessments of the initial operation (previously described). Again, in recognition of such shortfalls, the United States rejected proposals to deploy a large U.N. peacekeeping force. Instead, the Deputies Committee decided on January 24, 2000, that the United States would vote to support a proposed peacekeeping operation that would deploy up to 5,537 troops (including up to 500 observers) in phases. Under the proposal, these phased deployments were tied to the attainment of specific objectives related to the shortfalls, such as the parties establishing a durable cease-fire. As before, factors considered in the decision included U.S. interests in resolving the conflict, restoring regional stability, and humanitarian concerns.
As part of the process of making the eight decisions, executive branch officials attempted to improve the operations’ chances of success by shaping their mandates and forces to eliminate identified shortfalls. For example, concerned that objectives for the U.N. Mission in Sierra Leone were unclear, the Peacekeeping Core Group directed officials at the U.S. Mission to the United Nations to work with other U.N. member states and U.N. officials to link the objectives more directly to helping the government and the rebels implement the Lomé Peace Agreement. This was accomplished before the Deputies Committee decided to support the operation and allowed executive branch officials to change their assessment of this Directive 25 factor to reflect that the operation had clear objectives (see fig. 2). Additionally, concerned that the presence of regional peacekeeping forces was vital to the success of this operation, the Deputies Committee and Peacekeeping Core Group directed State and Defense officials to develop options for providing financial and logistical support to encourage the continued engagement of regional forces. In other cases, for example, the Democratic Republic of the Congo, executive branch officials “helped shape the scope and scale of the U.N. mission…to ensure achievable objectives…and avoid overextending the [United Nations] and sending in peacekeepers before the conflict was ripe for resolution or [while] a political settlement was still in the making.” In all eight decisions where Directive 25 shortfalls could not be addressed adequately before the U.N. Security Council voted, executive branch officials worked to mitigate the risks associated with these weaknesses by reducing the shortfalls’ impact on the operations. For example, concerned about the capability of forces serving in the U.N. Mission in Sierra Leone, the Deputies Committee and the Peacekeeping Core Group directed U.S. officials to (1) contact U.N. 
members and officials to seek more capable forces and (2) develop options for providing logistical support for some troop contingents. Furthermore, concerned about whether the cease-fire would hold in the Democratic Republic of the Congo, the Deputies Committee directed U.S. officials to monitor compliance closely and apply diplomatic pressure to the warring parties to observe the cease-fire agreement. Our analysis of executive branch records identified similar attempts to address other Directive 25 shortfalls for the eight decisions we reviewed. Appendix IV provides information about some of the actions taken by the executive branch to address Directive 25 shortfalls. The executive branch provided a substantial amount of information to the Congress about the proposed operations in consultations before or just after the decisions to support them. This information described how the proposed operations advanced U.S. interests, the conflicts that the proposed operations were intended to address, and other related considerations. Executive branch consultations about the two decisions regarding proposed operations in the Democratic Republic of the Congo also described Directive 25 shortfalls, which helped build support in the Congress for the decisions to vote for deploying these operations. However, for the other six decisions we found little or no evidence that executive branch officials informed the Congress about the proposed operations’ Directive 25 shortfalls either in consultations with the Congress before the executive branch decided on the operations or in the information provided to the Congress in writing just after the decisions were made. Additionally, aside from the shortfall issue, executive branch officials had considerable detailed information about the proposed operations well in advance of the time they provided this information to the Congress. 
Our analysis of executive branch records and transcripts of monthly peacekeeping briefings for the Senate, supplemented by our observation of similar briefings for the House, showed that the executive branch began providing information to the Congress about the proposed operations in East Timor, Sierra Leone, and the Democratic Republic of the Congo as long as 4 to 6 months before the eight decisions. At monthly peacekeeping briefings, executive branch officials provided information about the status of ongoing U.N. operations and proposals for new or expanded operations. At these briefings, executive branch officials provided copies of key U.N. Secretary General reports, the U.N. Security Council’s upcoming calendar and work program, and monthly reports of peacekeeping finances and troop contributions. Additionally, senior executive branch officials briefed Members of Congress and their staffs about the U.N. Secretary General’s proposals for peacekeeping operations and related topics. For example, in February 2000, senior officials provided a special briefing to the Chairman of the Senate Committee on Foreign Relations about the conflict in the Democratic Republic of the Congo. The briefing included detailed information about the factions in the Democratic Republic of the Congo and the role of neighboring states in the conflict, such as Rwanda, Uganda, and Zimbabwe. Administration officials also testified several times before the Congress about the operations and had separate telephone discussions and other meetings as noted in their log of congressional contacts. For seven of the eight decisions we reviewed, the executive branch informed the Congress in writing of its decision to support the proposed operation within a few days of the Deputies Committee’s decision. These letters were dated at least 15 days before the U.N. 
Security Council voted on the matter and were transmitted to the Congress for the purpose of meeting one of the peacekeeping reporting requirements in the U.N. Participation Act. The information required to be provided for each proposed operation includes the “anticipated duration, mandate, and command and control arrangements…the planned exit strategy, and the vital national interests to be served.” These letters provided the Congress with the most comprehensive and detailed information it received about the proposed operations. As discussed in the following section, executive branch consultations—such as briefings and reports—provided the Congress with substantial information about the U.S. interests in the proposed operations and details about their mandate, cost, and exit strategy. However, these consultations provided limited information about Directive 25 shortfalls. Figure 3 shows the typical timing and content of consultations with the Congress about the seven decisions. Although neither Directive 25 nor the U.N. Participation Act required that the executive branch consult with the Congress about operational shortfalls in the proposed operations, executive branch officials recognize that “U.S. policy-makers’ views on the shortfalls, challenges and risks associated with successfully undertaking an operation” should be addressed comprehensively during consultation discussions with the Congress. Our analysis of executive branch records and transcripts of monthly peacekeeping briefings for the Senate, supplemented by our observation of similar briefings for the House, showed that the executive branch provided the Congress with substantial information about the U.S. interests in all of the proposed operations and general information about their mandates, cost, and exit strategies. 
However, we found no evidence that the Congress was informed about most shortfalls identified in executive branch assessments of the proposed operations for East Timor and Sierra Leone. As previously discussed, these shortfalls included judgments that the proposed operations lacked adequate means to carry out their missions or that their duration was not linked to realistic exit criteria. In contrast, our analysis showed that the Congress was informed about most shortfalls identified in executive branch assessments of the proposed operations in the Democratic Republic of the Congo. According to congressional staff, this information provided the Congress with an opportunity to develop a more informed opinion about the proposed operations and better convey to policy-makers its views about them. The following examples illustrate our findings about the content of executive branch consultations. Prior to the May 1999 decision to support the U.N. Mission in East Timor, executive branch assessments identified four Directive 25 shortfalls in the proposed operation. For example, 13 assessments questioned whether the operation’s mandate was appropriate, in part because the role and objectives of the civilian police component were unclear. Similarly, five assessments questioned whether the operation’s duration was tied to realistic exit criteria. In the months before the May decision, executive branch officials briefed the Congress at least 10 times about peacekeeping issues. Our analysis of executive branch and congressional records showed that those briefings provided substantial information about (1) how the proposed operation would advance the United States’ substantial security, political, and commercial interests in Indonesia; (2) the threat to international peace and security posed by the violent attacks on civilians; and (3) the necessity of U.N. action to ensure a free and fair vote. 
Additionally, these briefings provided information about one shortfall—concerns about whether the preconditions for a peacekeeping operation existed in East Timor. However, these briefings did not provide information about the other three shortfalls identified in executive branch assessments. Moreover, our analysis showed that these three shortfalls were not cited in the reports and other written material provided to the Congress. Prior to the August 1999 decision to support the expansion of the U.N. Observer Mission in Sierra Leone, executive branch assessments identified three shortfalls in the proposed operation. For example, six assessments questioned whether the rebels truly consented to the deployment of an expanded U.N. force and whether the proposed operation had adequate means to carry out its mission in the face of potential rebel resistance. In the months before the August decision, executive branch officials briefed the Congress at least 16 times about peacekeeping issues. Our analysis of executive branch and congressional records showed that six of those briefings provided substantial information about the threat to international peace and security posed by the humanitarian crisis and the danger of the conflict spreading to neighboring countries. These briefings also provided information about how the proposed operation would advance U.S. interests in supporting the West African peacekeeping force in providing regional security. Additionally, these briefings provided information about one shortfall—concerns about whether the preconditions for a peacekeeping operation existed in Sierra Leone because of uncertain rebel consent. However, these briefings did not provide information about the other two shortfalls; moreover, these two shortfalls were not cited in the reports and other written material provided to the Congress. Prior to the February 2000 decision to support the expansion of the U.N. 
Organization Mission in the Democratic Republic of the Congo, executive branch assessments identified three shortfalls in the proposed operation. For example, six assessments questioned whether the operation had adequate means—appropriate forces, financing, and mandate—to accomplish its mission and whether its duration was tied to realistic exit criteria. In the months before the February decision, executive branch officials briefed the Congress at least 12 times about peacekeeping issues. Our analysis of executive branch and congressional records showed that, in contrast to the previous two examples, those briefings provided substantial information about all three shortfalls. Moreover, our analysis showed that these three shortfalls were cited in the reports and other written material provided to the Congress. The February 7 letter informing the Congress of the decision to support the proposed operation, for example, clearly cited executive branch concerns that the warring parties were not observing the cease-fire and that the U.N. force would have to provide for its own security and protection in many areas because the parties lacked the capability. According to congressional staff, this information helped the Congress develop an informed opinion about the risks associated with this operation and reflected similar information provided in briefings and other consultations that occurred before the notification. Figure 4 summarizes our analysis of the information the executive branch provided to the Congress about the Directive 25 shortfalls that existed at the time the Deputies Committee decided the United States would vote for the operations. In each case in which the figure identifies a lack of consultation about a shortfall, our analysis of executive branch records showed that assessments consistently had identified a shortfall in this factor before the decision to support the operation in the U.N. Security Council. 
Our analysis also showed that executive branch assessments had identified shortfalls in other factors, but figure 4 does not include these shortfalls because assessments of these factors changed during the decision-making process. Senior executive branch officials told us that they did not consult with the Congress about some Directive 25 shortfalls because (1) the administration had not reached a consensus on whether they were actual shortfalls and (2) it had not decided whether to support the operations. Additionally, executive branch officials stated that congressional committees, members, and staff had ample opportunity to ask questions about the shortfalls but did not pose specific questions to the executive branch about Directive 25 weaknesses. Moreover, according to one executive branch official, the administration provided considerable negative information about the operations, but it was up to the Congress to reach its own conclusion. Finally, executive branch officials said that the executive branch could be more forthcoming in briefing the Congress if the briefings were held in secure settings. According to these officials, the information about shortfalls was sensitive, and many of the briefings were held in relatively open forums. If the information were to become publicly known, it could be used to undermine U.S. strategy and U.N. operations. Despite these issues, executive branch officials said that the concerns expressed by the Congress during the consultations were integrated into the executive branch’s decision-making deliberations. Our review of executive branch records showed that officials did consider anticipated congressional reactions during the decision-making process. For example, the executive branch often internally discussed the reaction of congressional Members and staff to the costs and availability of troops to support the operations, particularly with the proposed expansion of the operations in Sierra Leone and the Democratic Republic of the Congo. 
The information provided to the Congress in writing by the executive branch for the purpose of meeting the consultation requirements established by the U.N. Participation Act provided the Congress with the most comprehensive and detailed information it received about the proposed peacekeeping operations. The executive branch provided this information at about the same time that the U.N. Secretary General first made recommendations to the U.N. Security Council about the composition and mandate of the proposed operations. However, U.S. officials knew many details about the likely shape of the operations well before this time, because they had been working with other U.N. members and U.N. officials to develop and refine them. Although neither Directive 25 nor the U.N. Participation Act required the executive branch to provide such information sooner than it did, earlier disclosure of this information would have provided the Congress with more time to assess and develop an informed opinion about the proposed operations. The following two examples involving East Timor illustrate this issue. The executive branch informed the Congress of its intent to vote for the U.N. Mission in East Timor in a letter dated May 27, 1999—just over 2 weeks before the U.N. Security Council authorized the operation. This letter provided the Congress with the most complete information it had received to date about the proposed operation’s purpose, composition, mandate, financing, exit strategy, and relationship to U.S. national interests. The letter also informed the Congress for the first time that the executive branch anticipated a U.N. operation to administer the transition to independence if the people of East Timor rejected autonomy and Indonesia ended the territory’s annexation. However, executive branch officials had been working since early April 1999 to develop a conceptual framework for a series of operations in East Timor. 
On April 8, for example, executive branch officials had completed a paper outlining a conceptual framework for three potential operations in East Timor. This paper proposed three sequential operations—one to organize and conduct a free and fair vote to determine East Timor’s future status, one to stabilize East Timor following the vote, and one to organize and direct its transition to autonomy or independence. This paper noted that the stabilization mission might require a multinational force and that the transition mission would involve development assistance and the creation of governmental and economic institutions. Additionally, on May 7, executive branch officials completed a detailed Directive 25 analysis of the proposed U.N. Mission in East Timor. During April and May 1999, executive branch officials briefed congressional staff four times about U.N. peacekeeping issues, but our analysis showed that they did not provide details about the proposed East Timor operations at these briefings. The executive branch informed the Congress of its intent to vote for the U.N. Transitional Administration in East Timor in a letter dated October 8, 1999—about 2½ weeks before the U.N. Security Council authorized the operation. As before, this letter provided the Congress with the most complete information it had received to date about the proposed operation’s purpose, composition, mandate, financing, exit strategy, and relationship to U.S. national interests. However, in August 1999, the executive branch had completed a paper that (1) described in detail many components of the proposed operation (as one of several possible contingencies) and (2) directed U.S. officials to work with U.N. and other officials in developing more detailed plans for these components. By early September 1999, the executive branch had completed a full concept of operations for this operation. 
During August and September, executive branch officials briefed congressional staff several times about U.N. peacekeeping issues, but our analysis showed that they did not provide details about the proposed East Timor operation at these briefings. Executive branch officials told us that, although they provided considerable information to the Congress about potential or proposed peacekeeping operations in East Timor and other locations, they did not provide some detailed information sooner because it was related to (1) their routine, ongoing work with U.N. and other officials and did not represent a unified executive branch position and (2) the internal deliberative process of the executive branch. For the cases we examined, the driving factors in the decisions to support operations in East Timor, Sierra Leone, and the Democratic Republic of the Congo were the executive branch judgments that the operations advanced U.S. interests and that the consequences of inaction were unacceptable. Directive 25 served as a framework for identifying shortfalls and tasks to be undertaken to strengthen the proposed operations. Consequently, the decisions we examined clearly demonstrated a trade-off—proceed with operations judged to advance U.S. interests but accept the risk of failure inherent in operations having Directive 25 shortfalls. Consultation with the Congress did occur, but the Congress was not provided with information about the full range of executive branch officials’ views on the benefits, challenges, and risks associated with supporting the operations in East Timor and Sierra Leone, limiting its ability to develop a fully informed opinion and make decisions about appropriating funds for the operations. In contrast, more complete information about the benefits, risks, and challenges associated with supporting the operations in the Democratic Republic of the Congo was provided to the Congress. 
This positive model of consultation helped in developing congressional support for the executive branch’s decisions on these operations and was consistent with the expectations of Directive 25 and the spirit of the U.N. Participation Act. To improve executive branch consultations with the Congress, we recommend that the Secretary of State and other appropriate officials provide the Congress with timely, detailed, and complete information about Directive 25 shortfalls for all proposed new or substantially revised peacekeeping operations and the plans to mitigate the shortfalls. The timing of providing such information to the Congress is a matter of judgment; however, at a minimum, this information should be provided no later than at the time the Congress is informed in writing about the decisions to support such operations. Although Presidential Decision Directive 25 was issued by the Clinton administration, the Bush administration continues to use this guidance and is required by law to consult with the Congress about peacekeeping decisions. Accordingly, we obtained comments from the current administration (the National Security Council and the Departments of State and Defense) regarding its evaluation of this report and our recommendation on consultation. The National Security Council and the State Department provided written comments on this report. Their comments are reprinted in appendixes V and VI. The Defense Department elected not to provide written comments, but a Defense official told us that the Department concurred with State’s written comments. The Departments of State and Defense also provided technical comments, which we incorporated into this report as appropriate. The State Department did not characterize its views on this report. However, in reference to our recommendation, State said that it intended to continue to provide the Congress with timely, detailed and complete information about all new or substantially revised U.N. 
peacekeeping operations, including known potential and actual problem areas. Noting that the timing of the provision of this information is a matter of judgment, State said that it planned to continue to provide this information in a timely way, no later than the time that the Congress is informed in writing about decisions to support such operations. The National Security Council said that it appreciated the opportunity to review our report, had taken note of its findings, but did not have any comments on the report. The Acting Senior Director for Democracy, Human Rights, and International Operations wrote that the Council understood the importance of consulting with the Congress on peacekeeping missions and looked forward to working closely with the Congress on these and other important national security issues. As arranged with your office, we plan no further distribution of this report until 30 days from the date of the report unless you publicly announce its contents earlier. At that time, we will send copies to interested congressional committees and to the Assistant to the President for National Security Affairs; the Secretary of State; and the Secretary of Defense. Copies will also be made available to other interested parties upon request. If you have any questions about this report, please contact me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix VII. Since the end of the Cold War, U.N. and other multilateral peacekeeping operations have been an important component of U.S. foreign policy. For the eight decisions we reviewed, annual U.S. national security strategy reports and several Presidential Decision Directives provided guidance to executive branch officials making decisions about U.S. support for these operations, managing these operations once authorized, and consulting with the Congress about these matters. 
Additionally, the Congress in recent years has enacted peacekeeping notification and reporting requirements to enhance its ability to play a more effective role on these matters. Several U.S. policies established the basic framework for executive branch decision-making about U.S. support for U.N. or other multilateral peacekeeping operations. Annual U.S. national security strategy reports defined U.S. national interests. Several Presidential Decision Directives established the basic framework for U.S. national security decision-making and provided specific guidance to executive branch officials for making decisions about U.S. support for peacekeeping operations and managing these operations once authorized. Annual U.S. national security strategy reports recognized that, since there are always many demands for U.S. action, U.S. national interests must be clear. Toward this end, these reports established a basic three-level hierarchy of U.S. interests to guide executive branch decisions about national security matters, including peacekeeping. Table 2 describes these interests. In addition to defining U.S. national interests, the 1996 U.S. National Security Strategy Report recognized that, to maximize the benefits (to U.S. interests) of U.N. peace operations, the United States must make highly disciplined choices about when and under what circumstances to support or participate in these operations. Presidential Decision Directive 2 (Organization of the National Security Council), issued in March 1993, established the basic framework for executive branch decision-making on national security issues, consistent with the National Security Act of 1947, as amended. This directive established two senior-level interagency committees, known as the Principals and the Deputies Committees. The Principals Committee was the senior interagency forum for the consideration of policy issues affecting U.S. national security. 
The committee’s function was to review, coordinate, and monitor the development and implementation of national security policy. It was intended to be a flexible forum for Cabinet-level officials to meet to discuss and resolve issues not requiring the President’s participation. Members of the committee were as follows in 1999-2000:

Assistant to the President for National Security Affairs (chair)
Secretary of State (or Deputy Secretary)
Secretary of Defense (or Deputy Secretary)
U.S. Representative to the United Nations
Director of Central Intelligence
Chairman of the Joint Chiefs of Staff
Assistant to the President for Economic Policy
Assistant to the Vice President for National Security Affairs

The Secretary of Treasury, the Attorney General, and other heads of departments and agencies were invited as needed. The Deputies Committee was the senior sub-Cabinet interagency forum for consideration of policy issues affecting U.S. national security. The committee’s function was to review and monitor the work of the interagency process and to focus attention on policy implementation. It assisted the Principals Committee by addressing policy decisions below the Principals’ level and was the main forum for making decisions on U.S. support for U.N. peacekeeping. Members of the committee were as follows in 1999-2000:

Deputy Assistant to the President for National Security Affairs (chair)
Under Secretary of State for Political Affairs
Under Secretary of Defense for Policy
Deputy Director of Central Intelligence
Vice Chairman of the Joint Chiefs of Staff
Deputy Assistant to the President for Economic Policy
Assistant to the Vice President for National Security Affairs

Other senior department and agency officials were invited as needed. 
Presidential Decision Directive 25 (Clinton Administration Policy on Reforming Multilateral Peace Operations), issued in May 1994, charged executive branch officials with making “disciplined and coherent choices” about when and under what circumstances to support or participate in these operations. It directed executive branch officials to consider a range of factors to determine operations’ political and practical feasibility when deciding whether to vote in the U.N. Security Council for proposed U.N. or U.N.-authorized peacekeeping operations. Directive 25 stated that (1) these factors were an aid in executive branch decision-making and did not constitute a prescriptive device and (2) decisions would be made on the cumulative weight of the factors, with no single factor necessarily being an absolute determinant. Table 3 lists the Directive 25 factors. Directive 25 instructed U.S. officials to apply additional factors when deciding whether to recommend to the President that U.S. personnel participate in proposed multilateral operations. For operations that were likely to involve combat, it directed U.S. officials to apply even more rigorous factors in their decision-making. Directive 25 assigned the State Department primary responsibility for managing and funding peacekeeping operations in which U.S. combat troops did not participate. It assigned the Defense Department primary responsibility for managing and funding those peacekeeping operations in which U.S. combat troops participated and for all peace enforcement operations. However, the Defense Department never actually received this responsibility. An interagency working group—known as the Peacekeeping Core Group—managed day-to-day Directive 25 decision-making and implementation for U.N. peacekeeping operations. This group was chaired by the National Security Council’s Senior Director for Multilateral and Humanitarian Affairs and consisted of assistant and deputy assistant secretaries of U.S. 
government Departments and agencies. Directive 56 (Managing Complex Contingency Operations), issued in 1997, guided executive branch officials in managing implementation of ongoing, smaller-scale contingency operations, including some multilateral peacekeeping operations. Directive 68 (International Public Information), issued in 1999, guided executive branch officials in coordinating public information activities in support of complex contingency operations, including multilateral peacekeeping operations. Directive 71 (Strengthening Criminal Justice Systems in support of Peace Operations and Other Complex Contingencies), issued in 2000, guided executive branch officials in improving U.S. response to the criminal justice aspects of peacekeeping operations to aid in the successful transition to durable peace and a timely exit of peacekeepers. Presidential Decision Directive 25 recognized that sustaining U.S. support for U.N. and multilateral operations requires that the Congress and the American people understand and accept the value of such operations as tools for advancing U.S. interests. Toward this end, Directive 25 stated that the “Congress must…be actively involved in the continuing implementation of U.S. policy on peacekeeping” and that the “Congress and the American people must…be genuine participants in the processes that support U.S. decision-making on new and on-going peace operations.” Directive 25 recognized that the executive branch traditionally “has not solicited the involvement of Congress or the American people on matters related to U.N. peacekeeping.” It concluded that this “lack of communication is not desirable in an era when peace operations have become numerous, complex, and expensive.” Directive 25 instructed executive branch officials to undertake six specific initiatives “to improve and regularize communication and consultation” with the Congress about U.N. 
peacekeeping to ensure that sufficient public and congressional support existed for proposed operations. Additionally, the Congress has enacted peacekeeping consultation and reporting requirements to enhance its ability to play a more effective role on these matters. The U.N. Participation Act of 1945, as amended, for example, requires the President to (1) consult with and provide information to the Congress in writing at least 15 days before the U.N. Security Council votes to authorize or expand U.N. peacekeeping operations and (2) consult monthly with the Congress on the status of U.N. peacekeeping operations, including anticipated operations. Table 4 summarizes the consultation, notification, and reporting requirements for U.N. peacekeeping operations. Our study is based on a review of eight executive branch decisions made between May 1999 and February 2000 to vote in the U.N. Security Council to authorize or expand the operations in East Timor, Sierra Leone, and the Democratic Republic of the Congo (see table 1). The Chairman of the House Committee on International Relations and the Chairman of the Subcommittee on the Middle East and South Asia, House Committee on International Relations, asked us to assess how executive branch officials used Presidential Decision Directive 25 in deciding to support the authorization or expansion of peacekeeping operations in these locations and how the officials consulted with the Congress about the decisions. Specifically, we assessed (1) whether executive branch officials considered all applicable Directive 25 factors before making their decisions and identified shortfalls in any of these factors at the time the decisions were made and (2) how executive branch officials consulted with the Congress during the decision-making process, including the timing and content of the information provided.
To assess whether executive branch officials considered all applicable Directive 25 factors, we collected and analyzed information from more than 200 National Security Council and State and Defense Department records related to these decisions. These records included summaries of conclusions of Deputies Committee and Peacekeeping Core Group meetings, decision memorandums, concept and briefing papers, and Directive 25 analyses (prepared for five of the eight decisions). We used a checklist of Directive 25 factors to collect information from these records about executive branch consideration and assessment of Directive 25 factors. We entered information into a database and analyzed it to determine whether executive branch officials (1) considered all Directive 25 factors before deciding to vote to authorize or expand U.N. operations, (2) identified Directive 25 shortfalls at the time they made their decisions, and (3) took actions to address identified shortfalls. To gain an understanding of the wider context in which these decisions were made, we supplemented this analysis by (1) reviewing several hundred other executive branch records, such as State and Defense Department intelligence analyses, and (2) discussing our analysis of the eight decisions with State and Defense Department and National Security Council officials. As we informed you several times, executive branch officials, citing deliberative process concerns, denied us full and complete access to records related to the eight decisions in our study, particularly records created during the earlier stages of the decision-making process. Although executive branch officials briefed us about some of the information in these records, as discussed in our auditing standards, this lack of full and complete access limited our ability to form independent and objective opinions and conclusions about the process used by U.S. decision-makers to weigh various assessments and arrive at an interagency position.
As a result, we limited the scope of our study primarily to the outcome of the decision-making process—that is, whether executive officials considered Directive 25 factors in making decisions, not how they considered them and arrived at decisions. For example, although our analysis showed that State and Defense officials’ assessments of some Directive 25 factors differed at some points, we were unable to determine how executive branch officials reached consensus on these factors during the interagency process. Consequently, this report does not discuss such issues. Because most of the records we examined were classified, some of the information in this report is necessarily general. To assess executive branch consultations with the Congress about the eight decisions, we collected and analyzed information from both executive branch and congressional records. Executive branch records included State and Defense Department summaries of monthly and special briefings, notification letters and reports required by U.S. law, the State Department’s congressional contact log, and written statements of senior executive branch officials testifying before Senate and House committees. Congressional records included transcripts of monthly executive branch briefings for the Senate Committee on Foreign Relations and written statements by committee and subcommittee chairmen and other Members of Congress. We examined these records to determine whether executive branch officials had complied with the consultation and reporting requirements in Directive 25 and relevant laws. For example, we determined whether executive branch officials had notified the Congress in writing of their decisions before the U.S. Representative to the United Nations voted in the U.N. Security Council. We used a checklist of Directive 25 factors to collect information from executive branch and congressional records about executive branch consultations for the eight decisions. 
We entered this information into a database and analyzed it to determine the timing and content of information provided to the Congress. We conducted our work from March 2000 to July 2001 in accordance with generally accepted government auditing standards. The following tables present timelines of the key international and U.S. events leading up to the approval of the proposed U.N. and multilateral operations in East Timor, Sierra Leone, and the Democratic Republic of the Congo for the eight decisions we reviewed. Table 5 presents a timeline of key events leading up to the approval of the U.N. Mission in East Timor (UNAMET), the International Force in East Timor (INTERFET), and the U.N. Transitional Administration in East Timor (UNTAET). The shaded text highlights summaries of the mandates for these three operations. Table 6 presents a timeline of key events leading up to the approval of the U.N. Observer Mission in Sierra Leone (UNOMSIL), the U.N. Mission in Sierra Leone (UNAMSIL), and the expansion of UNAMSIL. The shaded text highlights summaries of the mandates for these three operations. Table 7 presents a timeline of key events leading up to the approval of the U.N. Organization Mission in the Democratic Republic of the Congo (MONUC) and the expansion of this operation (Phase II). The shaded text highlights summaries of the mandates for these two operations. Our analysis of executive branch records showed that, for the eight decisions we reviewed, executive branch officials worked to reduce risks and maximize the chances of operational success by taking steps to eliminate, or reduce the impact of, Presidential Decision Directive 25 shortfalls on the proposed operations. Before the Deputies Committee or U.N. Security Council approved the operations, executive branch officials worked to shape the proposed operations’ objectives, mandates, and forces to eliminate shortfalls or reduce their impact. 
Where such shortfalls could not be addressed before operations were approved, executive branch officials undertook various activities to reduce their operational impact. Table 8 shows some of the actions taken by executive branch officials to address Directive 25 shortfalls for the eight decisions we reviewed.

In addition to the persons named above, Michael Rohrback, Zina Merritt, Richard Seldin, Rona Mendelsohn, and Lynn Cothern made key contributions to this report.
Presidential Decision Directive 25 states that U.S. involvement in international peacekeeping operations must be selective and effective. Toward this end, the directive established guidance that U.S. officials must consider before deciding whether to support proposed operations, including whether the operations advanced U.S. interests, had realistic criteria for ending the operations, and had appropriate forces and financing to accomplish their missions. The directive established these factors as an aid for executive decision-making and not as criteria for supporting particular operations. Executive branch officials thoroughly considered all Presidential Decision Directive 25 factors before deciding to support the authorization or expansion of peacekeeping operations in East Timor, Sierra Leone, and the Democratic Republic of the Congo. At the time the decisions were made, executive branch assessments identified at least one Directive 25 shortfall in all of the proposed operations and several shortfalls in six of them. Executive branch officials nonetheless decided to support the operations because they believed that these shortfalls were outweighed by the presence of other Directive 25 factors and various other factors, including U.S. interests in the region. Executive branch officials provided Congress with considerable information about the conflicts that the proposed operations were intended to address. However, GAO found no evidence that Congress was informed about most Directive 25 shortfalls identified in executive branch assessments of the proposed operations in East Timor and Sierra Leone or about U.S. plans to address the risks posed by these shortfalls. In contrast, Congress was informed about most shortfalls identified in executive branch assessments of the proposed U.N. operations in the Congo.
About 53,000 children died from a range of causes in the United States in 2007—the latest year for which national data were available—according to the Centers for Disease Control and Prevention (CDC). Major causes of death among children include conditions originating in the perinatal period, accidents (such as motor vehicle traffic accidents and drowning), congenital anomalies, homicide, and cancer. NCANDS estimates that, of all children who died in fiscal year 2009, 1,770 died from various types of maltreatment. (See fig. 1.) Moreover, 81 percent of children who died from maltreatment were 3 years old or younger, and more than half were infants 1 year or younger. According to NCANDS, the estimated number of child maltreatment fatalities has increased nationally over the past 5 years, from 1,450 in fiscal year 2005 to 1,770 in fiscal year 2009. HHS reported that states believe this increase may be due, in part, to new state legislation, new procedures, and improved state reporting practices. Protecting children from maltreatment is primarily the responsibility of child welfare programs administered at the state and local levels. In all states, child protective services (CPS) are part of the child welfare system. CPS generally screens and responds to suspected child maltreatment reported to it by mandatory reporters—including police officers, doctors, teachers, and other professionals—as well as by neighbors and family members. In fiscal year 2009, professionals initiated 58 percent of all reports of suspected maltreatment to CPS. CPS investigators determine whether such reports are considered maltreatment under state laws or policies. CPS also typically determines whether interventions—such as placement with a foster family—are in the best interest of the child. When CPS determines that a child’s death is from maltreatment, it documents the case, and the state’s child welfare department reports it to NCANDS. (See fig. 2.)
At the federal level, most of the $8.4 billion in federal assistance dedicated to child welfare purposes ($7.2 billion) in fiscal year 2010 supports state child welfare programs, including foster care, adoption assistance, and child protection. HHS oversees funding provided to states that support child welfare programs, and provides technical assistance and training to states on a variety of child welfare issues. HHS has a technical assistance contract specific to NCANDS and also provides technical assistance on NCANDS and other data issues through its National Resource Centers (NRC). CAPTA is the key federal legislation focused on preventing and responding to child maltreatment. Reauthorized in 2010, CAPTA provides support for, among other things, data collection activities and technical assistance on child maltreatment. It also authorizes federal funding to states for grants to support prevention, investigation, and treatment of child maltreatment. In fiscal year 2010, funding for CAPTA programs totaled about $97 million, of which $26.5 million was for basic state grants to improve CPS. These grants are distributed to states by formula, and may be used to improve CPS investigations, caseworker training, and prevention programs. All states received CAPTA basic state grants in fiscal year 2010. To receive this grant, states are required to have an approved state plan that outlines the activities that the state intends to implement. The plan must include, for example, provisions or procedures for receiving and responding to allegations of child abuse or neglect and for ensuring children’s safety.
For grant purposes, child abuse and neglect is defined as “at a minimum, any recent act or failure to act on the part of a parent or caretaker, which results in death, serious physical or emotional harm, sexual abuse or exploitation, or an act or failure to act which presents an imminent risk of serious harm.” Each state receiving a basic grant is also required to establish and support citizen review panels to evaluate the effectiveness of CPS policies, procedures, and practices. According to the National Center for Child Death Review, 14 states in 2003 reported that their child death review teams serve a dual function as CAPTA citizen review panels for child fatalities. The citizen review panels must be composed of volunteers who are “broadly representative” of the community, including members with expertise in the prevention and treatment of child abuse and neglect, and may include members of foster care review boards or child death review teams. Child death review teams exist in all but one state to review child abuse and neglect fatalities and suspicious child deaths. Results of these reviews may be used to improve services, advocate for change, and conduct public awareness activities, ultimately for the purpose of preventing future child maltreatment deaths. CAPTA defines the term “near fatality” as “an act that, as certified by a physician, places the child in serious or critical condition.” Although the term is defined, neither CAPTA nor the applicable regulations further discuss data collection on near fatalities. NCANDS does not have a specific data field that identifies a case as a near fatality from maltreatment. NCANDS collects and analyzes data on children involved in situations in which CPS either investigated an allegation of maltreatment or initiated an alternative response. State CPS agencies generally are responsible for submitting NCANDS data to HHS.
Since 1996, states that receive basic state grants under CAPTA have been required to report annually—“to the maximum extent practicable”—at least 12 data items to NCANDS on child maltreatment. Data from NCANDS are an important source of information for several publications, reports, and activities of the federal government, as well as for child welfare officials, researchers, and others. NCANDS data are compiled in the Child Maltreatment report, which, as of December 2010, has been issued annually since 1992. HHS issues the annual Child Welfare Outcomes: Report to Congress partly based on state submissions of NCANDS data. This report presents information to Congress on states’ performance on national child welfare outcomes, including NCANDS data on reducing the recurrence of child maltreatment and reducing child maltreatment in foster care. NCANDS data have also been incorporated into the Child and Family Services Reviews (CFSR). Finally, NCANDS data are used to help assess the performance of several HHS programs in accordance with the Program Assessment Rating Tool. More children have likely died from maltreatment than are reflected in the national estimate of 1,770 child fatalities for fiscal year 2009. According to our survey, child welfare officials in 28 states thought that the official number of child maltreatment fatalities in their state was probably or possibly an undercount. Child welfare experts and HHS officials we spoke with also thought that national estimates did not reflect the full extent of children’s deaths from maltreatment and that undercounting was an issue with child fatalities. Acknowledging the limitations of NCANDS data on child maltreatment fatalities, HHS’s Child Maltreatment 2009 report states that NCANDS fatality data are only a proportion of all child fatalities caused by maltreatment. These data are based on reports provided to NCANDS by CPS agencies within state child welfare departments.
A major reason for the likely undercounting of child maltreatment fatalities is that nearly half of states report to NCANDS only data on children already known to CPS agencies—yet not all children who die from maltreatment were previously brought to the attention of CPS. Some children may not have been previously maltreated, or their earlier maltreatment may not have been noticed or reported to CPS agencies. Child deaths from maltreatment are recorded in many state and local data sources, such as death certificates from state vital statistics offices and medical examiner or coroner’s offices, CPS records, and state and local child death review team records (see fig. 3), and in Federal Bureau of Investigation (FBI) Uniform Crime Reports at the federal level. Because of this, HHS also attempts to capture the fatalities of maltreated children who were not previously known to state CPS agencies. Specifically, HHS instructs states on how to report data from non-CPS agencies and encourages states to obtain information on child maltreatment fatalities from other state agencies. However, in responding to our survey, 24 states reported that their 2009 NCANDS data did not include child fatality information from any non-CPS sources. More specifically, for example, 43 states responded that their NCANDS data did not include child fatality data from the vital statistics department. (See fig. 4.) Since NCANDS is a voluntary data-reporting system, state CPS agencies cannot be required to obtain information from other state agencies, according to HHS officials. Synthesizing information about child fatalities from multiple sources can produce a more comprehensive picture of the extent of child deaths than sole reliance on CPS data. In our review of research assessing whether the number of child fatalities from maltreatment was accurate, we found that key sources of information undercounted child deaths, sometimes by significant amounts.
For example, a peer-reviewed study of fatal child maltreatment in three states found that state child welfare records undercount child fatalities from maltreatment by 55 percent to 76 percent. The data sources analyzed in this study were death certificates, state child welfare agency records, state child death review team data, and law enforcement reports to the FBI Uniform Crime Report system. The study found that each data source reviewed undercounted the total number of child maltreatment fatalities. However, more than 90 percent of the child fatality cases could be identified by linking any two of the data sources, demonstrating the value of using multiple existing data sources to determine the extent of child fatalities from maltreatment. The study also found that the multidisciplinary child death review team process may be the most promising approach to identifying deaths from maltreatment if there is a standardized data collection and reporting system in place. Using a different methodology, HHS’s most recent National Incidence Study of Child Abuse and Neglect (NIS-4)—issued in January 2010—estimated 2,400 child deaths from maltreatment in the study year spanning portions of 2005 and 2006. The NIS is a congressionally mandated, periodic effort of HHS to estimate the incidence of child abuse and neglect in the United States. Unlike NCANDS, which relies primarily on CPS data reported by states, the NIS-4 relies on multiple sources of child death information. The NIS-4 used a nationally representative sample of 122 counties to create national estimates of the incidence, severity, and demographic distribution of child maltreatment, including fatalities from maltreatment. The NIS-4 uses two standardized research definitions of maltreatment in developing its findings.
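The record-linkage logic behind the study's finding—that any single source undercounts fatalities but combining two sources recovers most cases—can be illustrated with a minimal sketch. The case identifiers and source membership below are hypothetical, invented only to show the arithmetic; they are not the study's data:

```python
# Hypothetical case identifiers for one jurisdiction; each set represents the
# maltreatment fatalities that one data source managed to capture.
death_certificates = {"c01", "c02", "c03", "c05"}
child_welfare_records = {"c02", "c03", "c04"}
review_team_data = {"c01", "c04", "c05", "c06"}

# Pooling all sources yields the best available estimate of the true count.
all_sources = death_certificates | child_welfare_records | review_team_data

# Each source alone covers only part of the pooled total.
for name, source in [("death certificates", death_certificates),
                     ("child welfare records", child_welfare_records),
                     ("review team data", review_team_data)]:
    print(f"{name}: {len(source)}/{len(all_sources)} cases captured")

# Linking just two sources already covers most of the pooled total.
linked = death_certificates | child_welfare_records
print(f"two sources linked: {len(linked)}/{len(all_sources)} cases captured")
```

In practice, linking real records is harder than a set union, since the sources lack a shared case identifier and matches must be made on fields such as names and dates of death; the sketch only shows why coverage grows as sources are combined.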
In each county, NIS-4 collected CPS data as well as reports of child maltreatment cases that came to the attention of community professionals in the county sheriff’s office; the county departments of juvenile probation, health, and public housing; municipal police departments; hospitals; public schools; day care centers; shelters; and voluntary social services and mental health agencies. Furthermore, several factors complicate the ability to obtain comprehensive information on child fatalities from maltreatment. As a result, it can be difficult to compare child fatality data across states or over time.

Inconsistent definitions of maltreatment: Although CAPTA legislation establishes a minimum standard for the definition of child abuse and neglect, states generally develop their own variations of these definitions. Consequently, child maltreatment data at the national level can reflect an underlying inconsistency across individual states. For example, some states add medical neglect to the CAPTA definition and define the concept differently. (See table 1.) Some experts we interviewed said that definitions need to be standardized nationally to improve the quality of NCANDS data. When states submit data to NCANDS, HHS requires them to align state definitions of child maltreatment with elements of the NCANDS definitions, using a data-mapping process. HHS officials told us this mapping process helps create more consistent data within NCANDS. However, the mapping process may not fully address underlying state differences in determining whether a child’s death was regarded as a maltreatment death. HHS officials told us they considered definitional variations less important as a factor affecting NCANDS data quality than the difficulty in obtaining agreement among various local and state investigators—such as law enforcement and medical personnel—that maltreatment was the cause of a child’s death.
Differing legal standards for substantiating maltreatment: Because states have different legal standards for substantiating maltreatment, it is difficult to compare data across states. The substantiation process generally requires child welfare caseworkers to decide whether an allegation of maltreatment, or the risk of maltreatment, meets the criteria established by state law or policy. In a Congressional Research Service (CRS) analysis, state standards for substantiating child maltreatment were categorized into three groups, ranging from least to most rigorous. CRS found that states with stricter standards for substantiating maltreatment have the lowest rates of child maltreatment. (See table 2.)

Missing data: Some states do not report any information on child fatalities in certain years (e.g., Alaska, Massachusetts, and North Carolina for fiscal year 2009). Additionally, some states do not report particular data elements. For example, in fiscal year 2009, 13 states did not report information on children who died who, within the past 5 years, had been in foster care and had been reunited with their families; 7 states did not report the relationship of the perpetrator to the child who died; and 6 states did not report the race or ethnicity of the child who died. In responding to our survey, states provided a range of explanations for missing data in their NCANDS submissions. For example, according to state child welfare officials, key reasons for their not reporting some data were that other state entities, not child welfare, collected the information; state data systems did not collect those data; and delays occurred in data collection that affected reporting.

Lack of death date: NCANDS does not ask states to identify the date of a child’s death, and establishing maltreatment as the cause of a child’s death can take many months, particularly when a criminal proceeding is involved.
As a result, child deaths reported to NCANDS may have, in fact, occurred earlier than the year in which they are reported. NCANDS collects more data on the circumstances surrounding child fatalities than are reflected in HHS’s annual Child Maltreatment report—information that could be useful for prevention. NCANDS collects information from state CPS agencies about the demographics of children who died, such as their age and race; the report of maltreatment and the CPS agencies’ response and investigation; the perpetrator; services provided to the family; and risk factors associated with the child and with the caretaker. It also collects information on broad categories of maltreatment—such as neglect, physical abuse, sexual abuse, psychological maltreatment, and medical neglect—although it does not collect more detailed information on how a child dies, such as from a bathtub incident or swimming pool drowning resulting from a parent’s neglect. However, HHS does not report some information it collects on the circumstances surrounding child fatalities. For example, when we analyzed unpublished fiscal year 2009 state data reported to NCANDS on children’s deaths from maltreatment, we found the following:

Types of abuse: Rates of physical abuse were slightly higher among older children who died from maltreatment (ages 8 to 18), while neglect rates were slightly higher among younger children who died from maltreatment (ages 7 and younger).

Child welfare history: At least 14 percent of children who died from maltreatment had a previous substantiated or indicated incident of child maltreatment.

Sixteen percent of perpetrators of fatal child maltreatment were previously involved in an incident of child maltreatment that was either substantiated or indicated by CPS.

Among parents who were perpetrators, about 60 percent were female. Of unmarried partners who were perpetrators, 90 percent were male.
Child’s risk factors: Two percent of maltreated children who died had a disability such as a developmental disability, an intellectual disability, or a visual or hearing impairment.

According to experts, detailed information on the circumstances surrounding child fatalities can provide a more comprehensive understanding of the issue of fatal child maltreatment, such as revealing patterns that could aid prevention efforts. In addition to what is known nationally through NCANDS data, extensive information on the circumstances surrounding children’s deaths from maltreatment is collected by the Child Death Review Case Reporting System (CDR Reporting System), operated by the nongovernmental National Center for Child Death Review (NCCDR). NCCDR serves as a resource center for state and local multidisciplinary teams that review cases of child deaths for the purpose of improving case identification, investigations, services, follow-up, and prevention. Nearly all states have child death review teams comprising CPS workers, prosecutors, law enforcement, coroners or medical examiners, public health care providers, and others. While data received from NCCDR are more detailed in each case, the data are less comprehensive than those reported to NCANDS, according to HHS. Local review teams do not review all cases of possible death due to maltreatment but rather vary in their roles and scope from locality to locality. NCCDR is funded largely by the Maternal and Child Health Bureau of the Health Resources and Services Administration (HRSA). Begun in 2005, NCCDR’s Web-based CDR Reporting System is potentially a rich source of multistate data on child fatalities from all causes, including child maltreatment. As of June 1, 2011, 39 states had data use agreements with NCCDR, according to NCCDR officials. NCCDR’s goal is to eventually have all state child death review teams provide information on child fatalities to the data system, according to these officials.
NCCDR takes a public health approach to child death review, with a focus on improving investigations and identifying modifiable risk factors and strategies for preventing similar future deaths. According to NCCDR, most states using the system analyze their data and publish annual reports. Although NCCDR conducts in-house analyses (for federal partner organizations such as the National Highway Traffic Safety Administration and, according to NCCDR officials, of sudden cardiac deaths for a hospital), CDR data on child maltreatment deaths have not yet been synthesized or published, according to the NCCDR director. (The sidebar describes the CDR data-reporting form.) Challenges faced by local investigators, such as law enforcement officials, medical examiners, and CPS staff, in determining whether a child’s death was caused by maltreatment make it difficult for states to collect complete data on child maltreatment fatalities. These investigative challenges include lack of definitive medical evidence, limited resources for testing, differing expertise and training, and inconsistent interpretations and application of maltreatment definitions.

Lack of definitive medical evidence: Without definitive medical evidence, it can be difficult to determine that a child’s death was caused by abuse or neglect. According to our survey, 43 states indicated that medical issues were a challenge in determining child maltreatment. (See fig. 5.) For example, investigators we spoke with in California said that determining the cause of death in cases such as sudden unexplained infant death is challenging because the child may have been intentionally suffocated but external injuries are not readily visible. Similarly, a medical examiner we interviewed in Michigan said that it is a challenge to appropriately determine the cause of death for babies who may have been shaken to death or suffocated.
According to experts we spoke with, a lack of evidence also makes it difficult to determine whether a death was caused by neglect. Medical neglect is a type of maltreatment caused by failure of the caregiver to provide for the appropriate health care of the child despite having the resources—financial or otherwise—to do so. Medical neglect often results from inattentiveness to a chronic illness or missing follow-up medical appointments, according to a physician from the American Academy of Pediatrics (AAP) Committee on Child Abuse and Neglect. For example, one expert told us that a medically fragile premature infant who is discharged from the hospital but not brought back in for a follow-up examination and later dies could be considered to have died from medical neglect. Experts from the American Bar Association’s (ABA) Center on Children and the Law said neglect deaths are often categorized incorrectly, which may contribute to the problem of undercounting deaths from neglect. County officials we spoke with in Michigan added that it is very difficult to determine medical neglect as the cause of death because the death can appear to have been from “natural” causes.

Limited resources for testing: Another challenge in determining whether maltreatment was the cause of death is resource constraints that can limit the ability to conduct autopsies and medical tests. According to experts we spoke with from AAP, an autopsy provides much information on the factors contributing to a child’s death—such as infection, trauma, or congenital heart disease—that cannot be determined based on visual inspection. These experts indicated that financial constraints of local and state governments are the primary reason autopsies are not conducted more regularly.
In Pennsylvania, a county coroner told us that even though autopsies can help clarify the cause and circumstances of a death, coroners have to make difficult choices in deciding when to order autopsies since they are expensive and there is limited funding to cover them. According to a 2009 report by the National Academy of Sciences, insufficient funding for testing influences cause-of-death determinations. A law enforcement official we spoke with in Michigan noted that only 6 of his 20 requests for DNA testing were granted because of recent state cutbacks affecting crime laboratories. In our survey, 36 states identified limited resources as a challenge to identifying and investigating maltreatment deaths. (See fig. 5.)

Differing expertise and training: Differing levels of investigator expertise—particularly among those charged with determining the cause and manner of death—also present challenges to states in collecting child maltreatment fatality data. The National Academy of Sciences notes that the skill and training of coroners and medical examiners vary greatly. For example, in some counties, medical examiners—who are physicians and typically receive death investigation training—are charged with determining the cause and manner of death, including identifying maltreatment, while other counties rely on a coroner—who may or may not be a physician or have had any medical training—to make these determinations. A medical examiner and a coroner we spoke with in California noted that because of differing expertise and training, forensic pathologists and medical examiners might categorize sudden infant deaths differently. In 1996, CDC developed a protocol for sudden infant deaths in an effort to standardize the reporting of these deaths (see sidebar). While training can enhance skills for conducting maltreatment investigations, 35 states identified limited investigator training as a challenge in our survey. (See fig. 5.)
County officials in the three states we visited also told us that a lack of funding contributes to limited training opportunities. However, training opportunities were available in the states we visited. For example, state officials in California told us that all CPS staff are trained to recognize and report child abuse. County officials also said coroners in the state receive annual training that includes case presentations by investigators and forensic pathologists, which often include child deaths.

Inconsistent interpretations and application of maltreatment definitions: Differing interpretations and application of maltreatment definitions by investigators can lead to inconsistent determinations of cause of death. Law enforcement officials we spoke with in California noted that law enforcement officials and coroners sometimes disagree on the manner or cause of death, for example, when the death is suspected to be from natural causes but there is some indication of abuse or neglect. In our survey, 29 states indicated that the level of agreement among responsible entities—such as law enforcement officials, medical examiners or coroners, and CPS—about how to interpret and apply state definitions of child abuse or neglect was a challenge for collecting information on child maltreatment fatalities. (See fig. 5.) These entities may use their own definitions and have different goals. For example, county officials in Michigan told us that law enforcement investigates for the purpose of determining probable cause for prosecution, while CPS investigates to determine if there is a preponderance of evidence for maltreatment. AAP experts stated that certain injuries—such as abusive head trauma—are often incorrectly categorized on child death certificates as natural or accidental when the real cause of death is abuse-related.
It is also difficult to distinguish at autopsy between sudden infant death syndrome (SIDS) and accidental or deliberate suffocation with a soft object, according to the AAP. In our survey, 33 states indicated that variations across counties and other jurisdictions in identifying cause of death pose a challenge for collecting fatality information. For example, child death review team officials in Pennsylvania noted significant variability across counties in identifying child maltreatment deaths from head trauma. Similarly, state officials in California noted that some counties interpret co-sleeping deaths as maltreatment, while other counties do not, which creates inconsistencies in the numbers of child maltreatment deaths at the state level. Officials we interviewed in Michigan told us that when an external agency cross-checked its 2005 CPS data with medical records for 186 cases, the analysis indicated that 37 child deaths labeled as natural, accidental, or undetermined should have been documented as maltreatment. This variability across counties can result in greater data inconsistencies in states where the child welfare agency is county-administered with state supervision, as opposed to a state-administered system, according to national child welfare advocates. While 11 states indicated in our survey that their child welfare program was county- or locally administered, some of these states have large child populations, including California, New York, Ohio, and Pennsylvania.

State child welfare officials indicated experiencing challenges coordinating among geographic jurisdictions within the state and across state lines. In our survey, 37 states indicated that the level of coordination among different jurisdictions poses a challenge for obtaining information on child maltreatment fatalities. (See fig. 6.) For example, a local CPS official in Pennsylvania told us that it can be difficult for CPS to track children when families cross county lines.
State officials we interviewed in Michigan also indicated that counties face challenges obtaining medical records and death certificates from jurisdictions in another state when children are taken across state borders to the nearest trauma center in the interest of providing immediate care. States also indicated that limited coordination with other state agencies—particularly obtaining records from the health department—can challenge their ability to report information on child maltreatment fatalities to NCANDS. According to our survey, 32 states faced challenges coordinating among state agencies. Twenty-four states indicated that agencies involved in collecting information on child maltreatment fatalities do not generally or easily share information, and 23 states cited confidentiality or privacy issues related to child maltreatment as a challenge. (See fig. 6.) For example, child welfare officials in California told us their department had restricted data sharing with the department of public health after a security breach, and had only recently renewed its data-sharing agreement. Michigan officials specifically identified confidentiality and privacy restrictions as a challenge to obtaining child maltreatment fatality data because stakeholder agencies, such as the health department, are sometimes unsure what, if any, information they can share with child welfare. Furthermore, state officials in Pennsylvania told us that state and county child welfare officials are concerned about their limited access to records from drug and alcohol programs—which can include cases involving parents of a child who died—held by another state agency. California has coordinated across multiple agencies in an effort to produce a more accurate estimate of child maltreatment fatalities (see sidebar).
States indicated that several issues related to their data systems—especially those affecting electronic capabilities—have affected the completeness of child maltreatment fatality data they report to NCANDS. For example, although Pennsylvania collects certain CAPTA data elements, the state is unable to aggregate and report to NCANDS some of the information received from counties because this information is not recorded electronically, according to state officials. The inability to link different agencies' data systems with each other was also cited as a reporting challenge by 28 states. (See fig. 7.) States also experienced challenges reporting to NCANDS when they were either converting from one data system to another or updating their current system. According to our survey, 9 states were challenged by piloting or implementing a new child welfare information system, and the Child Maltreatment 2009 report shows that multiple states had incomplete or incomparable data because of system conversions. For example, Michigan was unable to submit data on child fatalities to NCANDS for fiscal year 2008, according to a state official, because of data errors associated with conversion to a new data system. In addition, 27 states responding to our survey reported that data entry errors posed a challenge for reporting child maltreatment fatality data to NCANDS. (See fig. 7.) To help mitigate these and other challenges, states are implementing quality controls on the child maltreatment fatality data they submit to NCANDS. According to our survey, 34 of the 50 states responding to this question indicated that their child welfare department had a quality control process—aside from HHS's Enhanced Validation and Analysis Application (EVAA), which assesses the quality of state data—to improve the accuracy of child maltreatment fatality data.

HHS provides assistance to states in several ways to help them report information on child maltreatment to NCANDS.
NCANDS is supported by a technical team, composed of Children’s Bureau and contractor staff, that provides technical assistance and tools to states for reporting child maltreatment fatality data. There is also an NCANDS State Advisory Group that worked closely with the technical team to design and implement NCANDS and now continues to meet annually to review and update NCANDS collection and reporting processes. According to HHS, this 20-member group helps ensure that enhancements to NCANDS accurately reflect states’ experiences collecting data. The NCANDS technical team also hosts the NCANDS Annual State Technical Assistance Meeting, a key means of assistance to states in which HHS officials provide NCANDS training and updates and states share questions and information. In 2010, child welfare representatives from 38 states participated in this 3-day meeting, which included workshops on data validation, error reporting, and methods for improving the quality of data provided to NCANDS. In our survey, 36 state officials reported that these annual NCANDS meetings were moderately helpful to very helpful. The NCANDS technical team has also developed Web-based resources with information and guidance to states on NCANDS data reporting, available through the NCANDS Web portal. The NCANDS portal is the key interface between states and the NCANDS technical team, and includes guidelines about reconciling and submitting data. The portal also contains an NCANDS Listserv where state officials can share information and obtain peer-to-peer assistance, according to HHS officials. States can also obtain individualized NCANDS technical assistance upon request. Each state has an assigned NCANDS technical team liaison who can provide targeted information and support to help states report data to NCANDS. During the 2010 data-reporting process, all states were in communication with their NCANDS technical team liaisons, according to an NCANDS report. 
In our survey, state officials reported high levels of satisfaction with the technical team's assistance, with 29 of the 50 states responding to this question identifying the help they received as moderately helpful to very helpful. State officials can also request on-site technical assistance regarding data collection and reporting from the National Resource Center for Child Welfare Data and Technology. HHS also provides assistance to states' child death review teams through NCCDR. NCCDR serves as a resource for state or local child death review teams. NCCDR helps states share information by publishing their child death review teams' contact information, data, and annual reports on its Web site. In addition, NCCDR has developed a Web site designed to help child death review teams expand their prevention efforts. It offers best practices for preventing the leading causes of injury and death among children, including child abuse. The site contains links to resources, partners, and a number of injury prevention strategies including public education; legislation and policy changes; and modifications to products, physical environments, and social environments that have been rated according to their evidence-based effectiveness.

Although NCCDR regularly collaborates with federal organizations to analyze child fatality data and develop strategies to prevent child deaths, there has been little routine information sharing between NCCDR and NCANDS on child maltreatment fatalities. Federal organizations such as CDC, the Department of Defense, and the National Highway Traffic Safety Administration have collaborated with NCCDR to analyze child death review information and develop prevention strategies, according to NCCDR officials. For example, in 2003, CDC developed an initiative to improve data collected on sudden unexplained infant deaths (SUID) and develop prevention strategies by monitoring trends and identifying risk factors.
CDC partnered with NCCDR to develop the SUID Case Registry Pilot Study, which utilized an updated version of NCCDR's Web-based data collection system. Officials from NCCDR and the Children's Bureau, under HHS's Administration for Children and Families (ACF), meet periodically in workgroups, and officials from the Children's Bureau told us that they refer states with questions about child death reviews to NCCDR for assistance. In 2010, officials from NCCDR and the ACF Commissioner met to explore ways to enhance federal responses to child abuse deaths, and the ACF Commissioner told us that they are moving forward to fund a child fatality review conference and begin an initiative to examine evidence-based practices for preventing child abuse deaths. However, NCCDR and NCANDS officials acknowledged that, to date, they have not routinely coordinated on child maltreatment fatality data or prevention strategies.

Although HHS provides a variety of assistance to states on how to report data to NCANDS, state officials indicated a need for additional assistance collecting child fatality as well as near-fatality data to use for prevention efforts. In our survey, almost half of states (23) reported needing additional assistance in collecting information and reporting data on child maltreatment fatalities or near fatalities. For example, several states mentioned that assistance with multidisciplinary coordination could help them overcome difficulties such as obtaining death certificates from medical examiners' or coroners' offices. HHS recognizes that collecting maltreatment fatality data from multiple sources results in more complete data, so the agency encourages states to coordinate with other organizations, such as medical examiners and departments of health. HHS officials stated that this is often a topic of discussion at the NCANDS annual meeting.
However, HHS officials also noted that the agency cannot require states to use additional data sources, and states are not required to disclose whether they consulted with additional sources to collect data. Although the federal government does not currently collect data on children who nearly die from maltreatment, states reported wanting assistance to collect and use this information. CAPTA defines a near fatality as “an act that, as certified by a physician, places the child in serious or critical condition.” HHS officials believe that such cases are most likely reported generally under maltreatment, but are not specifically identified as near fatalities because NCANDS does not have a data field identifying the case as a near fatality. HHS officials said it would be difficult to operationalize a national definition. To add a near-fatality data element to NCANDS, HHS would need to coordinate with the State Advisory Group and obtain approval from the Office of Management and Budget (OMB). However, the entire NCANDS data form will need to be reapproved in 2012, and HHS officials stated that at that time all NCANDS data elements will be reexamined. In commenting on a draft of this report, HHS stated that it had initiated consultations with the states on how to best address data collection on near fatalities of children and that HHS is considering adding a field to identify these specific cases. States are increasingly interested in collecting and using information on near fatalities, according to HHS officials, and some states have already begun this effort. Collecting data on maltreatment near fatalities was a topic of discussion at the 2010 NCANDS Annual State Technical Assistance Meeting. Additionally, the NCANDS Listserv was recently used by two state officials to survey other states about how they review and define near-fatality cases of maltreatment. Currently, states’ definition of a near fatality varies (see fig. 
8), and to establish a near-fatality data element in NCANDS, states may need to reexamine their existing definitions. According to our survey results, 32 states have a state law, statute, or policy that defines a near fatality, and 19 states already collect data on the number of child near fatalities from maltreatment. In addition, some states obtain information on the circumstances of child maltreatment near fatalities, such as the child’s age and ethnicity, the child’s relationship to the perpetrator, and whether the child was receiving foster care or family preservation services. States predominantly use child maltreatment fatality and near-fatality data to develop strategies for preventing these occurrences, and state officials told us they would like more assistance to use this information for prevention. States reported in our survey that child maltreatment fatality data are often used to inform prevention strategies, make state-level child welfare policy changes, and allocate funding or other resources for prevention activities. In addition, states reported using the information they collect on child maltreatment near fatalities to inform or implement strategies for preventing maltreatment fatalities and to allocate funding or other resources for prevention activities. For example, as a result of trends associated with fatal maltreatment and crying infants, many states have developed public awareness campaigns, resources for parents, and other interventions to prevent shaken baby syndrome (see sidebar). HHS officials confirmed that states were increasingly interested in receiving technical assistance on how to use child fatality data to meaningfully inform prevention efforts. State officials also reported wanting more information from other states on best practices in general and on using data for prevention efforts in particular. 
In conclusion, children’s deaths from maltreatment are especially distressing because they involve a failure on the part of adults responsible for protecting them. Child welfare policymakers and practitioners rely on child maltreatment fatality data—voluntarily reported by states—to understand the extent and circumstances of these tragic deaths and to develop strategies to prevent them. At the state level, obtaining comprehensive data on child maltreatment fatalities is very challenging and requires information sharing among state and local agencies—each with its own policies, types and levels of expertise, and concerns. Yet such cooperative efforts are a work in progress, and assistance from HHS to help states collect and report more comprehensive child fatality data is important. At the federal level, to the extent that HHS collects but does not publish information on child maltreatment fatalities, or does not routinely share information on child fatality data analyses, opportunities may be lost to identify effective means of preventing child maltreatment deaths in the future. Finally, without national data on children’s near fatalities from maltreatment, we are unable to have a clear picture of the extent of near fatalities and the risk factors associated with such maltreatment, making it difficult to develop prevention strategies. As a society, we should be doing everything in our collective power to end child deaths and near deaths from maltreatment, and the collection and reporting of comprehensive data on these tragic situations is an important step toward that goal. To improve the comprehensiveness, quality, and use of national data on child fatalities from maltreatment, the Secretary of HHS should take the following four actions 1. Identify ways to help states strengthen the completeness and reliability of data they report to NCANDS. 
These efforts could include identifying and sharing states' best practices, particularly those that foster cross-agency coordination and help address differences in state definitions and interpretation of maltreatment and/or privacy and confidentiality concerns.

2. Expand, as appropriate, the type and amount of information HHS makes public on the circumstances surrounding child fatalities from maltreatment.

3. Use stronger mechanisms to routinely share analyses and expertise with its partners on the circumstances of child maltreatment deaths, including insights that could be used for developing prevention strategies.

4. Estimate the costs and benefits of collecting national data on near fatalities and take appropriate follow-up actions.

We provided a draft of this report to HHS for review and comment, and HHS's comments are reproduced in appendix IV. We also provided a draft of this report to the Department of Justice (DOJ) and pertinent excerpts to NCCDR. DOJ and NCCDR provided technical comments, which we incorporated as appropriate. In its comments, HHS agreed with our recommendations to improve the comprehensiveness, quality, and use of national data on child fatalities from maltreatment. HHS also provided technical comments and additional information about activities under way or planned, which we incorporated as appropriate. For example, HHS stated that it has initiated conversations with the states to improve the identification of cases that involve near fatalities and that it plans to include two additional analyses on child fatalities in the Child Maltreatment report in 2013. While we recognize that HHS has some activities under way pertinent to issues raised in our report, more can be done to address these issues, such as by using stronger mechanisms to routinely share information and expertise on child fatalities from maltreatment.
For example, although HHS cites the Federal Inter-agency Work Group on Child Abuse and Neglect as a mechanism already in place for sharing information, HHS officials previously told us that this workgroup has not often discussed child fatalities from maltreatment. Since having mechanisms is a starting point for information sharing, we clarified our recommendation to emphasize the importance of putting such means to routine use. HHS also noted that NCANDS data collection has always been voluntary, as our report acknowledges. In its comments, HHS also raised concerns about the nationwide Web-based survey of child welfare administrators—one of several methodologies used for this report—noting that it had several limitations. According to HHS, survey completion was typically delegated to subordinates, which can create inconsistencies in the types of respondents and data collected; the staff person responding may not have considered information from other divisions; and finally, states provided self-reported information and thus GAO cannot validate it. For the most part, these observations would apply to any survey in which the respondent is answering the survey questions as a representative of an organization rather than as an individual. We took several precautions to minimize these limitations. For example, before activating the survey, we confirmed that the state officials listed were correct for completing the survey; obtained comments on the survey draft from three experts, in addition to conducting pretests with state officials; and provided respondents ample time for consultation with other state officials as needed. We received responses from all states. While survey data are not typically verified independently, in our judgment the precautions taken to address survey limitations are sufficient for our purposes. (App. I provides information on our survey methodology.)
As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to relevant congressional committees, the Secretary of Health and Human Services, the Attorney General of the United States, and other interested parties. The report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

To obtain state perspectives on our objectives, we conducted a Web-based survey of child welfare administrators in the 50 states, the District of Columbia, and Puerto Rico. The survey was conducted using a self-administered electronic questionnaire posted on the Web. HHS provided us with names and contact information for state child welfare administrators. We contacted child welfare administrators via e-mail announcing the survey and sent follow-up e-mails to encourage responses. The survey data were collected between October and December 2010, with child welfare officials from every state, the District of Columbia, and Puerto Rico responding. The survey included questions about state laws related to child maltreatment, child welfare department coordination with other agencies or entities, state challenges related to identifying and collecting information on child maltreatment fatalities and reporting these data to NCANDS, child death review teams, state challenges related to collecting information on child maltreatment near fatalities, and federal assistance from HHS to states on data collection and reporting. We worked with agency officials and experts to develop the survey.
Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted or in the sources of information that are available to respondents can introduce unwanted variability into the survey results. We took steps in the development of the survey, data collection, and data analysis to minimize these nonsampling errors. For example, prior to administering the survey, we pretested the content and format of the survey with four states (Arizona, Kansas, New York, and Wisconsin) to determine whether (1) the survey questions were clear, (2) the terms used were precise and accurate, (3) respondents were able to provide the information we were seeking, and (4) the questions were unbiased. We chose these pretest states based on a number of factors, including recommendations from HHS officials or experts, whether the state collected information on near fatalities from maltreatment, whether the state had a state-level child death review team, and overall child population, among others. We made changes to the content and format of the final survey based on pretest results. Because this was a Web-based survey in which respondents entered their responses directly into our database, there was a reduced possibility of data entry error. We also performed computer analyses to identify inconsistencies in responses and other indications of error. In addition, an independent analyst verified that the computer programs used to analyze these data were written correctly. To identify research that estimated the number of child deaths from maltreatment in the United States and the extent to which these deaths are accurately captured, or undercounted, we searched ProQuest, Dialog Social Science Databases, NTIS, SocAbs, Nexis Statistical Master File, and MEDLINE. 
We also asked researchers and subject matter experts to identify studies. We selected 19 studies that had been published after 2000; had a focus on the child fatality data collection process in the United States; had a state or national, rather than county-level, focus; and focused on child maltreatment fatalities, not abuse and neglect. For each selected study, we determined whether the study's findings were generally reliable. Two GAO social science analysts assessed each study's research design, sampling frame, selection of measures, data quality, limitations, and analytic techniques to evaluate its methodological soundness and the validity of the results and conclusions drawn. To identify the extent to which HHS collects and provides comprehensive information on child fatalities from maltreatment, we obtained and analyzed NCANDS data from the National Data Archive on Child Abuse and Neglect (NDACAN) at Cornell University. NDACAN prepares data and documentation for secondary analysis, and disseminates the datasets to researchers. We obtained the NCANDS datasets for federal fiscal year 2009 from NDACAN for our analysis. The NCANDS datasets consist of files in three formats: the child file, the agency file, and the summary data component (SDC). The child file dataset is the case-level component of NCANDS that contains child-specific data of all state CPS investigations or assessments of alleged child maltreatment that received a disposition during fiscal year 2009. Fifty states, including the District of Columbia and Puerto Rico, submitted the child file in fiscal year 2009. The agency file is the NCANDS state-level component, which is submitted by states that submit the child file. The agency file contains aggregated state-level data requested under CAPTA that cannot be collected at the case level.
This includes data on preventative services, CPS workload, and child fatalities not reported at the case level in the child file. For fiscal year 2009, 50 states submitted the agency file. States that are unable to submit case-level data submit the SDC file. The SDC consists of aggregated state-level statistics of key items in the child file and agency file. (Two states submitted the SDC for fiscal year 2009.) Both states and NDACAN take steps to protect confidentiality. States encrypt all identification variables submitted to NCANDS to prevent tracing a child file record back to the record in the state’s child welfare information system. For records involving a fatality, NDACAN recodes certain variables to mask information, including the state, county of report, information about the child, and perpetrator identification. We analyzed a subset of fiscal year 2009 NCANDS child file cases in which a child maltreatment fatality had occurred (i.e., those in which the maltreatment death data element was equal to 1 or “yes”). Data elements that were analyzed included age, sex, maltreatment type, and perpetrator characteristics. In addition to the analysis of fiscal year 2009 child file cases in which a maltreatment death had occurred, we analyzed four variables each from the fiscal year 2009 agency file and SDC. These four variables were the number of child maltreatment fatalities, foster care deaths, children whose families had received family preservation services in the 5 years prior to fiscal year 2009, and children who had been in foster care and were reunited with their families in the 5 years prior to fiscal year 2009. These agency file and SDC variables were summed with the equivalent child file variables to yield complete totals. We assessed the reliability of the NCANDS data provided by NDACAN by conducting electronic testing; reviewing documentation on the NCANDS data; and interviewing officials from NDACAN, the NCANDS contractor (Walter R. 
McDonald & Associates), and the Children’s Bureau of HHS to clarify data elements and procedures for data collection and reporting. To verify the number of unduplicated fatalities due to child maltreatment, we compared our assessment with the analysis done by NDACAN researchers. The NCANDS data were found to be sufficiently reliable for the purposes of this engagement. To examine the extent to which HHS collects and provides comprehensive information on child fatalities from maltreatment, we requested and obtained state child death review team data from NCCDR’s Child Death Review (CDR) Case Reporting System. The CDR Case Reporting System is a Web-based application that allows local and state users to enter case data and access and download their data via the Internet on a continual and voluntary basis. In 2009, state and local child death review teams in 26 states submitted data to the CDR Case Reporting System. These data contain detailed information on the child welfare history of victims, including the number of CPS referrals and substantiations per child, whether there was an open CPS case at the time of death, and whether any siblings were ever put in foster care. The database contains extensive information on the incident that led to the death, including the place of the incident, such as the child’s home, and the type of injury that caused the death, such as a weapon or drowning. The system also collects information on acts of commission or omission for every death entered into the system, regardless of cause or manner. To confirm the reliability of these data, social science methodologists at GAO reviewed documentation about the collection and reporting of NCCDR data. We also interviewed several NCCDR officials who were responsible for these data and HHS officials responsible for the cooperative agreement with NCCDR. In addition, we compared NCCDR data on child fatalities with NCANDS data on child fatalities in the NCCDR states. 
Although these data were not sufficiently reliable to support a finding, they were reliable for providing background context and examples of the possible data elements not available from NCANDS. To gather additional information about challenges states face in collecting and reporting information on child maltreatment fatalities to NCANDS, including challenges at the local level, and federal assistance to states, we conducted site visits to California, Michigan, and Pennsylvania and met with state officials and officials from selected localities within those states between July and December 2010. Specifically, we met with local officials from Calaveras, Los Angeles, and Sacramento counties in California; Bay, Genesee, Ingham, Lincoln, Oakland, and Wayne counties in Michigan; and Berks, Lehigh, and Philadelphia counties, among others, in Pennsylvania. We selected these states based on recommendations from HHS officials and experts, child population, collection of information on child maltreatment near fatalities, type of child welfare program administration (state-administered and county-administered with state supervision), and geographic diversity. We worked with state officials to select counties that were located in both urban and rural areas to ensure that we captured any related differences in data collection and reporting processes and federal assistance. During these visits, we interviewed state child welfare officials and officials from the department of health or other body coordinating the child death review process, and collected relevant state laws, policies, procedures, and reports. At the local level, we interviewed CPS officials, law enforcement personnel, and medical examiners or coroners in charge of investigating child deaths in each state. Through these interviews, we collected information on state and local processes for collecting and reporting data on child maltreatment fatalities and the associated challenges officials face.
We conducted some of these interviews via telephone to limit travel costs. Information we gathered on our site visits represents only the conditions present in the states and local areas at the time of our site visits. We cannot comment on any changes that may have occurred after our fieldwork was completed. Furthermore, our fieldwork focused on in-depth analysis of only a few selected states. On the basis of our site visit information, we cannot generalize our findings beyond the states we visited. For all three objectives, we interviewed HHS officials and other experts on child maltreatment fatalities and near fatalities. We identified child maltreatment researchers through our literature review and through recommendations from stakeholders knowledgeable about child maltreatment fatalities and near fatalities. For this study, we interviewed HHS and other officials knowledgeable about NCANDS, NCCDR, and NIS-4 data. We also interviewed researchers and experts affiliated with the following centers and associations: the American Academy of Pediatrics (AAP), the American Bar Association’s (ABA) Center on Children and the Law, the Child Welfare League of America, the National Coalition to End Child Abuse Deaths, the Interagency Council for Child Abuse and Neglect/National Center on Child Fatality Review, and NCCDR. (The National Coalition to End Child Abuse Deaths includes officials from the Every Child Matters Education Fund, the National Center for Child Death Review, the National District Attorneys Association/National Center for Prosecution of Child Abuse, the National Association of Social Workers, and the National Children’s Alliance.) We conducted this performance audit from April 2010 through July 2011 in accordance with generally accepted government auditing standards. 
These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Selected Information on Child Fatalities from Maltreatment

Selected NCANDS results on child fatalities from maltreatment reported by HHS for fiscal year 2009:
- Forty-six percent of fatalities were children younger than 1 year, and 81 percent were 3 years old or younger.
- Boys had a slightly higher child fatality rate than girls: 2.36 per 100,000 boys in the population, compared with 2.12 per 100,000 girls.
- Of all child fatalities, 39 percent were White children, 29 percent were African American, and 17 percent were Hispanic. Children of American Indian or Alaska Native, Asian, Pacific Islander, or multiple race categories collectively accounted for 3.6 percent, and 11.2 percent were children of unknown race.
- Thirty-seven percent of child fatalities were caused by multiple forms of maltreatment. Neglect accounted for about 36 percent of fatalities and physical abuse for 23 percent.
- Seventy-six percent of child fatalities were caused by one or more parents. Twenty-seven percent of child fatalities were perpetrated by the mother acting alone, and 23 percent were caused by both parents. Foster parents and legal guardians accounted for less than 1 percent of perpetrators (foster parents were reported as the perpetrator in 5 child fatalities from maltreatment).
- Twelve percent of children who died from maltreatment were from families who had received family preservation services in the previous 5 years.
- Two percent of children who died from maltreatment had been in foster care and were reunited with their families in the previous 5 years.
HHS's Child Maltreatment 2009 report did not provide information on these data elements for children who died from maltreatment. Child risk factors include having an intellectual disability, physical disability, learning disability, and visual or hearing impairment. Risk factors associated with the caregiver include alcohol or drug abuse, domestic violence, emotional disturbance, and financial difficulties. Preventive services are provided to parents whose children are at risk of maltreatment and include family support, child day care, education and training, employment, and housing.

Following are selected results from our analysis of child maltreatment data in the CDR Case Reporting System:
- Manner of death: Homicide was the manner of death on the death certificate for 57 percent of child maltreatment fatality victims reported to NCCDR in calendar year 2009.
- Cause of death: Injury was the primary cause of death for 79 percent of children who died from maltreatment, and just over half of those children were killed with a weapon.
- Child welfare history: Of the 417 reported child maltreatment fatality victims:
  - Thirty-one percent had a documented history of maltreatment.
  - Thirteen percent had an open CPS case prior to the incident causing the child's death.
  - Fourteen percent of children who died had at least one CPS referral prior to their deaths.
  - Eight percent were placed in foster care prior to their deaths.

Thirty-two states also collected information on child maltreatment fatalities that were not reported to NCANDS in fiscal year 2009, according to our survey of state child welfare officials. For example, 27 states reported that they collected data on the child's family characteristics that they did not report to NCANDS in fiscal year 2009. (See table 4.)
Data that states collect but do not report to NCANDS could represent additional, more detailed information on children who die from maltreatment (such as information on siblings' prior contact with the child welfare system) or data that states collect but cannot report for technical reasons. For example, in explaining this condition, two states noted that much of the data was captured in narrative or case logs, not in reportable data fields, while another state noted that it collects additional information on child maltreatment fatalities reported by local county child welfare agencies.

Brett Fallavollita, Assistant Director, and Deborah A. Signer, Analyst-in-Charge, managed this assignment and made significant contributions to all aspects of this report. Katherine Berman, Amanda D. Cherrin, Alison Gerry Grantham, and Marcella Wagner, Analysts, also made important contributions to this report. Katherine van Gelder and James E. Bennett provided writing and graphics assistance. Hiwotte Amare, Lorraine R. Ettaro, Stuart M. Kaufman, and Monique B. Williams provided data analysis and methodological assistance; and Julian P. Klazkin provided legal assistance. Almeta J. Spencer provided administrative support.
Children's deaths from maltreatment are especially distressing because they involve a failure on the part of adults who were responsible for protecting them. Questions have been raised as to whether the federal National Child Abuse and Neglect Data System (NCANDS), which is based on voluntary state reports to the Department of Health and Human Services (HHS), fully captures the number or circumstances of child fatalities from maltreatment. GAO was asked to examine (1) the extent to which HHS collects and reports comprehensive information on child fatalities from maltreatment, (2) the challenges states face in collecting and reporting this information to HHS, and (3) the assistance HHS provides to states in collecting and reporting data on child maltreatment fatalities. GAO analyzed 2009 NCANDS data--the latest data available--conducted a nationwide Web-based survey of state child welfare administrators, visited three states, interviewed HHS and other officials, and reviewed research and relevant federal laws and regulations. More children have likely died from maltreatment than are counted in NCANDS, and HHS does not take full advantage of available information on the circumstances surrounding child maltreatment deaths. NCANDS estimated that 1,770 children in the United States died from maltreatment in fiscal year 2009. According to GAO's survey, nearly half of states included data only from child welfare agencies in reporting child maltreatment fatalities to NCANDS, yet not all children who die from maltreatment have had contact with these agencies, possibly leading to incomplete counts. HHS also collects but does not report some information on the circumstances surrounding child maltreatment fatalities that could be useful for prevention, such as perpetrators' previous maltreatment of children. 
The National Center for Child Death Review (NCCDR), a nongovernmental organization funded by HHS, collects more detailed data on circumstances from 39 states, but these data on child maltreatment deaths have not yet been synthesized or published. States face numerous challenges in collecting child maltreatment fatality data and reporting to NCANDS. At the local level, lack of evidence and inconsistent interpretations of maltreatment challenge investigators--such as law enforcement, medical examiners, and child welfare officials--in determining whether a child's death was caused by maltreatment. Without medical evidence, it can be difficult to determine that a child's death was caused by abuse or neglect, such as in cases of shaken baby syndrome, when external injuries may not be readily visible. At the state level, limited coordination among jurisdictions and state agencies, in part due to confidentiality or privacy constraints, poses challenges for reporting data to NCANDS. HHS provides assistance to help states report child maltreatment fatalities, although states would like additional help. For example, HHS hosts an annual NCANDS technical assistance conference, provides individual state assistance, and, through NCCDR, has developed resources to help states collect information on child deaths. However, there has been limited collaboration between HHS and NCCDR on child maltreatment fatality information or prevention strategies to date. State officials indicated a need for additional information on how to coordinate across state agencies to collect more complete information on child maltreatment fatalities. States are also increasingly interested in collecting and using information on near fatalities from maltreatment. 
GAO recommends that the Secretary of HHS take steps to further strengthen data quality, expand available information on child fatalities, improve information sharing, and estimate the costs and benefits of collecting national data on near fatalities. In its comments, HHS agreed with GAO's findings and recommendations and provided technical comments, which GAO incorporated as appropriate.
SBA coordinates and oversees the efforts of the 11 agencies currently participating in the SBIR program. SBA coordinates the agencies’ schedules for issuing solicitations—announcements of opportunities for small businesses to apply for awards—and provides access to these solicitations through its Web site. As part of its oversight effort, SBA collects SBIR data from the participating agencies, aggregates the data, and uses the data to, among other things, monitor the program and report to Congress. SBA also provides guidance to participating agencies on the general conduct and operation of the program, which it periodically updates, for example, in response to changes in the program’s authorizing legislation. Under the legislation and SBA’s guidance, agencies have considerable flexibility to design their programs. For example, each agency determines, in consultation with SBA, such items as the number of solicitations to be issued during a fiscal year and the dates applications are due. Agencies also have discretion to determine what type of research to include in their solicitations, how to review applications for technical and scientific merit, which applications to fund, and the size of the award, among other things. The Small Business Innovation Development Act of 1982 provided for a competitive three-phased SBIR program. In phase I, participating agencies award up to $150,000 for a period of about 6 to 9 months to small businesses to conduct experimental or theoretical R&D. Small businesses whose phase I projects demonstrate scientific and technical merit, in addition to commercial potential, may compete for phase II awards of up to $1 million to continue the R&D for an additional period, normally not to exceed 2 years. Phase I and II award funds may be used for costs related to conducting the research, such as salaries, fringe benefits, equipment, and consulting services, as well as for profits and fees. 
To be eligible for a phase I or II SBIR award, a business must have 500 or fewer employees, be organized for profit with a place of business in the United States, and operate primarily in the United States or make a significant contribution to the U.S. economy. Generally, a business must also be at least 51 percent owned and controlled by one or more individuals who are U.S. citizens or permanent resident aliens. These eligibility requirements apply at the time that a phase I or II award is made. During phase III, businesses must secure non-SBIR funding to develop the commercial potential of the innovative technologies resulting from their SBIR projects; such funding may come from the private sector, federal agencies, or other sources. As the program has been reauthorized over the years, legislation has established a number of requirements related to the program's purposes. For example, the Small Business Research and Development Enhancement Act of 1992 directed SBA to make more information available about the SBIR program, particularly about participation by small businesses owned by disadvantaged individuals and women, and required that agencies increase their outreach to such businesses. In addition, the Small Business Reauthorization Act of 2000 directed that applicants for phase II SBIR awards be required to submit commercialization plans, and it mandated that SBA develop, maintain, and make available to the public a database that contained SBIR award data. The act also required SBA to develop and maintain, by June 2001, a restricted government-use database that would contain award-related data from the public database, as well as additional confidential data that would be accessible only to government agencies and other authorized users.
The act stated that this database would be used exclusively for program evaluation—which, as we have noted in past work, involves the systematic collection and analysis of accurate, comparable, and complete data on program results. The act required the government-use database to contain, among other things, data that applicants for phase II awards would be required to supply on the commercialization success of any prior phase II awards, such as data on sales of or additional investment in the technologies funded under the awards. The act further specified that the government-use database would contain annual updates to these data, which phase II award recipients would be requested to voluntarily provide for 5 years after the period covered by the award. To accomplish this mandate, SBA envisioned expanding an electronic database, known as Tech-Net, which it had developed in the late 1990s, into two sections: a public-use portion and a government-use portion containing commercialization data. The public-use portion of the database has been available since 2000, according to SBA, and it contains such award-related data as the phase of the award, amount of the award, name and location of the business receiving the award, an abstract of the work to be conducted under the award, and whether the business is categorized as owned by disadvantaged individuals or women. In October 2006, however, we reported that some SBIR agencies did not consistently provide or correctly format the awards-related data for several fields in the public-use portion of the database. For example, two of the eight agencies we reviewed had not consistently provided data on whether the businesses receiving the awards were categorized as owned by disadvantaged individuals or women. 
At that time, we also reported that SBA had not implemented the government-use portion of the database, primarily, according to SBA officials, because of increased security requirements for the database, agency management changes, and budgetary constraints. Additionally, we reported that while five of the agencies we reviewed had systematically collected commercialization data, their data collection efforts differed in ways that made it challenging to evaluate the program across agencies. In August 2009, we testified before Congress that, according to SBA, the database would no longer accept incorrectly formatted award-related data from participating agencies. A committee of the National Academy of Sciences' National Research Council has conducted a series of assessments of the SBIR program, both within and across agencies, as part of a legislatively mandated study. The results were summarized in a single report, in which the committee stated that SBIR is making significant progress in achieving congressional goals. The study concluded that the SBIR program is "sound in concept and effective in practice." The study also recommended changes that could make the program more effective. Among other things, the study recommended that SBA and participating agencies improve the collection of data that track participation in the SBIR program by businesses owned by disadvantaged individuals and women, develop targeted outreach to such businesses that is based on an analysis of factors that affect their participation, and improve documentation of commercialization success. The National Research Council is now undertaking another round of assessments to provide a second snapshot of the program's progress in achieving its legislative purposes. For fiscal years 2008 through 2011, the five participating agencies we reviewed addressed the SBIR purposes of using small business to meet federal R&D needs and stimulating technological innovation through their solicitations.
Agencies also used solicitations, as well as technical assistance or matching funds programs, to address the SBIR purpose of increasing commercialization of innovations derived from federal R&D efforts. To address the remaining program purpose—encouraging participation in technological innovation by small businesses owned by disadvantaged individuals and women—agencies relied mainly on outreach activities aimed at a broader audience. All of the participating agencies that we reviewed designed the SBIR solicitations that they issued for fiscal years 2008 through 2011 to meet federal R&D or mission needs and stimulate technological innovation. All of these agencies selected research topics for their solicitations that were designed to meet their respective R&D or mission needs and specified that applications would be evaluated on the basis of responsiveness to those topics. The agencies that purchase SBIR-funded technologies for their own use—DOD, DOE, and NASA—tended to select solicitation topics that met specific agency needs for R&D. For example, in fiscal year 2011, DOD solicited applications to develop a fuel cell system capable of converting ethanol into electricity in an efficient, small, lightweight, portable power system. According to the solicitation, such advanced fuel cell systems could provide soldiers power to complement batteries and to charge rechargeable batteries, reducing the number of batteries required for extended time in the field. In contrast, NIH and NSF, which generally do not purchase SBIR-funded technologies, tended to issue solicitations for a broader spectrum of R&D to support their missions of advancing biomedical and other scientific and engineering disciplines. 
Among the agencies we reviewed, NIH and its components gave applicants the most leeway in addressing agency needs: rather than limiting applications to specific research topics identified in solicitations, NIH and its components usually listed suggested topics and encouraged applicants to propose innovative projects that fit the agency’s mission. Concerning innovation, each of the agencies included instructions in its SBIR solicitations about the type of information applicants had to provide about the innovativeness of the proposed work. For example, NASA informed phase I and II applicants that a competitive application would describe the proposed innovation relative to state-of-the-art knowledge in the field, among other things. In addition, these agencies explained to applicants how reviewers would consider evidence of the innovativeness of the applicants’ proposed research approaches. For example, in its fiscal year 2010 solicitation, NSF stated that applications would be evaluated, in part, on the basis of whether they reflected state-of-the-art knowledge in the major research activities proposed and whether the work was likely to advance state-of-the-art knowledge. The participating agencies we reviewed addressed the SBIR purpose of increasing commercialization of innovations through solicitations, as well as through technical assistance or matching funds programs. Solicitations. Of the five agencies we reviewed, all but NIH required in their solicitations for fiscal years 2008 through 2011 that applicants for phase I awards submit a commercialization strategy demonstrating that the applicants had taken steps such as identifying a market for their SBIR technologies, planning to secure financing, and estimating expected future sales. For phase II awards, all of the agencies we reviewed required that applicants submit a commercialization plan. 
In general, the solicitations we reviewed required that phase II commercialization plans discuss the potential market and competitors; the qualifications of key management and technical personnel; and financing, marketing, and manufacturing plans, among other things. The agencies we reviewed differed in their stated processes for evaluating the commercial potential of applications. For example, DOD guidance to applicants outlined a systematic process for how the agency would consider commercialization potential when evaluating applications submitted by small businesses that had received multiple prior awards. DOD indicated that, under this process, it would assign a commercialization achievement score to applicants that had completed the work for four or more phase II awards from any agency; this score would reflect how the applicants' commercialization experience compared with historical averages. Applicants whose scores fell within the lowest 20 percent would not be allowed to receive more than half the maximum number of points possible for commercialization potential, which was to be assessed on the basis of several factors, including the commercialization strategy or plan. DOD guidance stated that businesses with fewer than four completed phase II awards would not be affected by the absence of a commercialization achievement score. Although the other four agencies we reviewed did not outline as systematic a process for evaluating past commercialization success as a gauge of commercialization potential, they still indicated that commercialization potential would be taken into account in reviewing applications. For example, DOE's solicitation instructions encouraged phase I applicants to seek firm commitments for private-sector or non-SBIR federal funding prior to applying for a phase II award.
The instructions further stated that phase II applicants that obtained such commitments were more likely to receive full credit for commercialization planning during the evaluation of their applications. In the case of NSF, solicitation instructions stated that proposals are usually reviewed by 3 to 10 outside experts in fields related to the proposal; according to NSF officials, these reviewers have business experience. NSF’s solicitation instructions further stated that the agency would not review applications that lacked sufficient information on commercial potential. In 2010, two agencies we reviewed also issued SBIR solicitations under new programs that were explicitly oriented toward increasing commercialization. Specifically, in July 2010, DOE launched a program under which it solicited applications for phase III of SBIR, the commercialization phase. DOE documents indicated that the agency would make available approximately $30 million, including funding from the American Recovery and Reinvestment Act (Recovery Act), for phase III awards, which are intended to allow businesses to pursue commercial applications of work performed under phase I and II awards. In addition, NIH’s National Cancer Institute began a program under which it solicited phase I applications to continue development of technologies that have originated in its laboratories, with the goal of advancing these technologies toward commercial products. SBA has designated the use of the SBIR program to encourage commercialization of agencies’ internal research as a best practice on its SBIR Web site. Technical assistance. All five agencies included in our review provided technical assistance to help award recipients build their capacity to commercialize their technologies. To provide the assistance, the agencies contracted with vendors and consultants who have experience in bringing technologies to market. 
With the exception of NASA, the agencies supported the technical assistance at least in part through the use of SBIR funds. In fiscal years 2008 to 2010, DOD, DOE, NIH, and NSF spent SBIR funds on technical assistance for phase I award recipients. Some of the assistance was in the form of interactive training Webinars or online tools directed toward a broad spectrum of SBIR applicants and award recipients. For example, the Navy offered phase I award recipients the use of a software tool, known as WebTRIMS, that helps identify, quantify, and track risks associated with SBIR technology development and covers topics such as contracting strategies, business and transition planning, and manufacturing readiness. Other phase I assistance was more customized. For example, DOE offered phase I award recipients customized technical assistance designed to help them develop a commercialization plan complete with an implementation schedule and suggestions for product design. Similarly, on a first-come, first-served basis, NIH offered phase I award recipients assessments of their SBIR-funded technologies' likely niche in the existing commercial market, which could help recipients develop commercialization plans for phase II applications. Additionally, NSF offered phase I award recipients personalized mentoring and coaching sessions with an advisor. According to NSF officials, 92 percent of phase I recipients chose to participate in the technical assistance program in 2010. NASA did not provide technical assistance for phase I award recipients; NASA officials told us they believed technical assistance would have the most utility for phase II NASA award recipients. During at least a portion of the period we reviewed, all five agencies offered individualized technical assistance for phase II award recipients, although DOE curtailed such assistance in 2010, and NASA discontinued its assistance in 2008.
Award recipients were selected for assistance on the basis of factors such as recommendations from SBIR program staff and the award recipients' potential for rapidly moving their technologies to phase III. The assistance consisted of in-depth training and one-on-one assistance from advisors and industry experts. For example, as part of its Commercialization Pilot Program, the Army assisted selected phase II SBIR award recipients in assessing commercialization potential, developing business plans, and matching their technologies with potential government and industry customers. At DOE, staff could nominate phase II award recipients for assistance in preparing to negotiate business deals, such as joint ventures and licensing agreements for use of their technologies. DOE officials told us that in 2010 the agency curtailed its use of SBIR funds for phase II technical assistance, spending such funds on assistance only for award recipients that had specifically budgeted for it in their applications. In 2007 and 2008, NASA partnered with the Navy to pilot a technical assistance program for NASA phase II recipients. The program was designed to help SBIR businesses develop a plan for transitioning to phase III, among other things. In 2007, 17 phase II companies with 19 SBIR projects participated in the program, and in the following year, 19 phase II companies with 20 SBIR projects participated. The program was not renewed for 2009; NASA officials told us that they believed the program was generally successful, but that they preferred to use SBIR funds to make larger awards. NIH offered selected current or past phase II award recipients the opportunity to work one-on-one with an advisor over a 9-month period to develop business plans to commercialize their technologies, as well as to prepare materials to help attract potential investors or partners.
Since 2004, almost 700 award recipients have received the assistance, including the 80 award recipients currently participating, according to NIH officials. Finally, NSF offered customized assistance to phase II award recipients through its Innovation Accelerator Initiative. According to NSF officials, through this initiative, award recipients received help in connecting with potential investors and negotiating company acquisitions and mergers. NSF officials told us that, in 2010, approximately 33 percent of NSF’s phase II recipients received this assistance. In some cases, agencies that we reviewed used non-SBIR funds to broaden the scope of the technical assistance they provided to help award recipients commercialize their technologies. For example, DOD used non-SBIR funds to host its annual Beyond Phase II Conference and Technology Showcase, a 3-day event that features matchmaking sessions with SBIR award recipients and prime contractors. Similarly, the Navy used non-SBIR funds to maintain databases with advanced searching capability to help award recipients identify potential business partners. The Navy also used non-SBIR funds for its Transition Assistance Program, which provides individualized help with commercialization planning, culminating in a conference designed to facilitate interaction with potential business partners. Moreover, in 2011, the National Cancer Institute launched its Regulatory Assistance Program using non-SBIR funds. According to agency officials and information from the agency’s Web site, this program provides SBIR award recipients time with consultants experienced in various regulatory requirements—such as those for anticancer therapies, imaging technologies, and medical devices—to prepare strategies for obtaining regulatory approvals required before the technologies can be commercialized. 
The National Cancer Institute also used non-SBIR funds to support its Investor Forum, which provides competitively selected SBIR award recipients an opportunity to showcase their technologies and enter into discussions with the biotech investment community. In 2010, 14 award recipients that were selected on the basis of strength of research, impact on cancer, product development, and market potential participated in the forum along with more than 175 potential investors, according to the agency’s Web site. Matching funds programs. Through matching funds programs, agencies provide additional SBIR funds to award recipients that obtain monetary commitments above certain thresholds from outside investors. SBA has designated matching funds programs as a best practice on its SBIR Web site, and all of the agencies we reviewed except DOE have established such programs. For example, for award recipients that obtain a minimum of $100,000 from an outside investor, NSF will match up to 50 percent of the outside investment for a maximum of $500,000 in NSF matching funds. NASA and NIH officials said that matching funds programs encourage outside investment during the early stages of R&D—a time when many investors are reluctant to invest. In particular, officials at the National Cancer Institute said that matching funds can help attract outside investment because they can be used as leverage to increase investors’ potential returns. DOD and NSF offer matching funds to award recipients at the end of phase I and during phase II, while NASA and the National Cancer Institute offer matching funds during phase II. DOE has not established a matching funds program for its SBIR program. DOE officials told us, however, that they are exploring whether to do so and have held discussions with other SBIR participating agencies about their matching funds programs. 
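The NSF matching-funds rule described above reduces to a simple calculation. The sketch below is our illustration of that rule, not NSF program code; the function name and the treatment of investments below the threshold are assumptions.

```python
def nsf_match(outside_investment: float) -> float:
    """Illustrative sketch of the NSF matching-funds rule described above:
    a recipient must obtain at least $100,000 from an outside investor,
    and NSF matches up to 50 percent of that investment, capped at
    $500,000 in NSF matching funds."""
    MINIMUM_OUTSIDE = 100_000   # eligibility threshold stated in the text
    MATCH_RATE = 0.50           # NSF matches up to 50 percent
    MATCH_CAP = 500_000         # maximum NSF matching funds
    if outside_investment < MINIMUM_OUTSIDE:
        return 0.0  # below the threshold, no match (our assumption)
    return min(MATCH_RATE * outside_investment, MATCH_CAP)
```

Under this reading, an award recipient raising $400,000 from an outside investor would receive a $200,000 match, while any outside investment of $1 million or more would hit the $500,000 cap.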
Officials at DOD, NASA, and NIH said they have not collected data to compare the commercialization success of recipients that received matching funds with the success of those that did not. NSF conducted a study to assess the effect of the $18 million in matching funds it invested in fiscal year 2006 for 48 phase II award recipients that had raised a total of $58 million from outside investors. According to NSF officials, results of this study showed that, in the 5 years following the start of these phase II projects, 70 percent of recipients that had received matching funds achieved commercial success compared with a 30 percent success rate for recipients that had not received such funds.

SBA’s guidance states that small businesses owned by disadvantaged individuals and women must compete for SBIR awards on the same basis as all other small businesses. However, to meet requirements for greater outreach to small businesses owned by disadvantaged individuals and women, SBA has encouraged participating SBIR agencies to reach out to such businesses and to develop methods that encourage their participation. SBA has also raised the topic of outreach during recent quarterly meetings of agency SBIR program managers. Officials at all of the agencies we reviewed told us they generally reach out to such businesses through activities directed toward a broader audience, such as by attending SBIR national conferences and industry-sponsored events and by sharing information via Web sites or e-mail lists. Agency officials also noted that they try to accommodate requests for speakers at events sponsored by, or likely to be attended by, small businesses owned by disadvantaged individuals and women—for example, events sponsored by trade organizations for minority- or women-owned businesses.
However, officials from some trade organizations for businesses owned by disadvantaged individuals and women told us that the outreach of agencies we reviewed was often ineffective in educating the organizations’ members about the SBIR program. Of the agencies we reviewed, NIH and NSF have made specific efforts, including the following, to improve their outreach:

- For fiscal years 2010 and 2011, NIH developed a goal to increase awareness of its SBIR program among businesses owned by disadvantaged individuals and women, and it outlined specific activities aimed at reaching this goal.
- Both NIH and NSF offered various fellowships for postdoctoral research conducted by disadvantaged individuals and women; these fellowships were available to support SBIR projects, as well as other research.
- In 2010, NSF assigned a full-time staff member to help it develop a plan to increase participation in SBIR by businesses owned by disadvantaged individuals and women in response to a recommendation from its SBIR advisory committee.
- Through a review of academic literature, as well as informal polling of NSF applicants and award recipients, NSF has identified barriers to SBIR participation by small businesses owned by disadvantaged individuals and women, NSF officials told us. These barriers include disparities in the owners’ levels of education and access to capital compared with those of other entrepreneurs. To address identified barriers, NSF is, among other things, establishing partnerships with industry and academia to expose African American, Latino, and other college students to entrepreneurship in scientific and technical fields, according to NSF officials.

Evaluation of the effectiveness of agencies’ outreach efforts is hindered by a lack of accurate and complete data.
Although SBA collects data on the number and dollar value of awards to small businesses owned by disadvantaged individuals and women, SBA officials told us that they cannot accurately tabulate data on such awards, particularly awards to women-owned businesses, because of inconsistencies in the data on business ownership. According to the officials, SBA has taken steps to correct the inconsistencies for data submitted after 2006 but has not done so for earlier years. Moreover, SBA does not collect data on the number of applications submitted by businesses owned by disadvantaged individuals and women. As a result, SBA’s data do not allow for an examination of trends in the submission of applications from such businesses, analysis of the percentage of applications from these businesses that lead to awards, or correlation of these trends and percentages with outreach efforts. SBA officials told us in March 2011 that they were considering whether their database should include information on the numbers of applications submitted by these businesses.

SBA has not yet developed the government-use portion of its database for collecting comparable commercialization data on SBIR technologies, but it is taking steps to do so. In the interim, agencies have, for their own purposes, independently gathered commercialization data that are not comparable; the accuracy of these data is largely unknown. Implementing the government-use portion of the database should improve the comparability of the data. However, programwide evaluation of progress in increasing commercialization may continue to be impaired by long-standing challenges. As of June 2011, SBA had not met the legislative mandate to develop and implement, by June 2001, a government-use database that can provide data on commercialization for evaluating the SBIR program. However, the agency’s efforts to develop such a database recently gained additional prominence and resources.
Specifically, SBA linked development of the government-use portion of its database to one of the agency’s high-priority performance goals for fiscal years 2011 and 2012. Additionally, in September 2010, SBA allocated $1.4 million in Recovery Act funds to hire a new contractor to develop the government-use portion’s capacity to accept commercialization data submitted by participating SBIR agencies and award recipients, as well as to make other improvements to the database. For example, SBA said that it has been working with the contractor to consolidate data on previous awards. SBA officials said that past award recipients have been assigned unique identifiers that will be used to track awards issued to those recipients over the lifetime of the SBIR program; unique identifiers are also to be assigned to small businesses newly entering the program. In the future, SBA intends for the unique identifiers to allow agencies to validate business information by comparing it against information in other federal databases such as the Central Contractor Registration database, which contains information on businesses that want to contract with the federal government. SBA officials told us that they expect to implement the government-use portion of SBA’s database by August 2011 and to provide for its basic maintenance and support despite reductions in the agency’s overall budget. The government-use portion is intended to allow both participating agencies and award recipients to enter commercialization data in a comparable format to assist in program evaluation. SBA officials told us that they have worked with participating agencies to develop common metrics for commercialization data, as well as a standardized data collection instrument that will accommodate the various types of SBIR technologies the agencies fund to meet their different missions. 
These metrics, which will correspond to fields in the database, include the following:

- indication of whether an award resulted in a commercialized technology and whether other SBIR awards contributed to commercialization of the technology;
- estimated investment (other than SBIR funding);
- any patents applied for or received related to the award; and
- any initial public offering, merger, or sale of the business that resulted, at least in part, from the award.

SBA officials told us in May 2011 that they plan to implement the metrics and data collection instrument in August 2011. SBA is requesting that participating agencies voluntarily begin entering historical commercialization data into the government-use database before August 2011. To facilitate this process, SBA is working with its contractor to ensure that historical agency data can be matched to fields in the new database. Nevertheless, officials from SBA and participating SBIR agencies said that some agencies may not enter historical data or may be delayed in doing so because they either did not collect such data or do not have the data in electronic form. For example, NASA officials stated that much of their commercialization data are stored in paper format and expressed doubt that the agency would be able to convert the data into the required format for entering by SBA’s August deadline. SBA officials also told us that, after the government-use portion of the database is available, some agencies may instruct applicants and award recipients to submit their commercialization data directly into the database. Other agencies, such as DOD, may continue to require applicants and recipients to submit commercialization data directly to the agencies, which would then upload the data into the database. As of May 2011, SBA officials were unsure which approach agencies would take, noting that agencies may wait to see how the database works before making a decision.
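The common metrics SBA and the agencies developed can be pictured as one record per award, with a field for each metric. The sketch below is our illustration only; the field names and types are assumptions, not SBA's actual database schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CommercializationRecord:
    """Hypothetical record mirroring the common metrics listed above;
    field names are illustrative, not SBA's actual schema."""
    award_id: str                       # unique identifier assigned to the award recipient
    commercialized: bool                # did the award result in a commercialized technology?
    other_sbir_awards_contributed: bool # did other SBIR awards contribute to commercialization?
    estimated_investment: float         # estimated investment other than SBIR funding, in dollars
    patents: List[str] = field(default_factory=list)  # patents applied for or received related to the award
    ipo_merger_or_sale: bool = False    # IPO, merger, or sale resulting at least in part from the award
```

A schema along these lines is what would let agencies and award recipients enter data "in a comparable format": every agency reports against the same fields, regardless of the type of technology funded.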
In the absence of the government-use portion of SBA’s database, the five participating SBIR agencies we reviewed have independently collected commercialization data that are not comparable. The agencies collected these data using various methods for their own purposes, as summarized in table 1. In conducting their data collection efforts, agencies differed in the extent to which they asked award recipients to do the following, among other things:

- Identify the type of customer and the amount of sales or further investment for SBIR-funded technologies. For example, most agencies asked award recipients to report federal and nonfederal sales separately, but NIH and NSF asked award recipients to report combined sales.
- Account for indirect sales and nonfinancial indicators of commercialization. NASA, NIH, and NSF asked award recipients to indicate whether an SBIR-funded technology had resulted in licensing agreements with other businesses to sell the technology, while DOD and DOE did not ask that question. NASA further asked award recipients to estimate the financial value of such agreements, while the other agencies did not. Similarly, NASA, NIH, and NSF asked award recipients to indicate whether specific SBIR-funded technologies had resulted in patents, while DOD and DOE asked award recipients to report the total number of patents resulting from all their SBIR awards.
- Quantify the dollar values of cumulative sales. While most agencies asked award recipients to report a specific dollar amount in cumulative sales resulting from their SBIR-funded technologies over a period of time, NIH asked award recipients to report such sales by choosing among ranges, beginning with “$50,000 or less” and extending to “$50,000,000 or more.” Because NIH has reported cumulative sales in ranges rather than specific dollar amounts, comparing its results with those reported by other agencies is difficult.
While each agency’s data collection efforts resulted in, among other information, estimates of total or average sales of SBIR technologies, differences in the agencies’ data collection efforts make it difficult to compare results across agencies. The following are examples of commercialization data reported by agencies:

- DOD estimated that commercialization of SBIR technologies that it funded generated federal and nonfederal sales and non-SBIR funding of $22 billion on a program investment of $11 billion from 2000 through March 2010.
- DOE estimated that, from 1986 through 2007, SBIR technologies developed by recipients of phase II awards resulted in a total of $2.4 billion in federal and nonfederal sales and $1.6 billion in non-SBIR investment. On average, award recipients reported receiving more than $3 million in sales related to SBIR-funded technologies. During the same period, DOE reported that it had invested $1.6 billion in phase I and II SBIR awards.
- NASA estimated that, as of 2002, SBIR technologies developed by award recipients that received a phase II award from 1983 through 1996 had generated approximately $2.8 billion in federal and nonfederal sales and non-SBIR funding compared with $1.1 billion in SBIR investment from NASA.
- In NIH’s 2002 survey, which covered 1992 through 2001, 27 percent of respondents reported an estimated total of $821 million in sales of SBIR technologies; the other respondents did not report any sales. NIH estimated that it invested $2.2 billion in phase I and phase II awards from 1992 through 2001. For the 2008 survey, which covered 2002 through 2006, 33 percent of respondents reported an estimated total of $396 million in federal and nonfederal sales of SBIR technologies. NIH estimated that it invested $2.7 billion in phase I and phase II awards from 2002 through 2006. NIH was the only agency we reviewed that reported sales lower than its SBIR investment for the periods it examined. According to NIH officials, many of the technologies that the agency supports through its SBIR program, such as drugs and medical devices, take longer to commercialize than those funded by other agencies because of the need for extensive clinical testing and regulatory approval.
- NSF officials estimated that recipients marking the eighth anniversary of the receipt of their awards from July 2005 through May 2010 had realized a total of $1.05 billion in commercial revenue. NSF estimated that it invested $628 million in SBIR awards during roughly the same period.

Further, with the exception of DOD, agencies we reviewed generally did not take steps to verify commercialization data that they received from award recipients, so the accuracy of the data is largely unknown. As officials from some of the agencies in our review noted, award recipients may have an incentive to overstate their commercialization success in the hope of improving their prospects of receiving future SBIR awards. While SBA has worked with SBIR agencies to identify best practices in other areas of SBIR program management, it has not identified best practices for agencies to use in verifying the accuracy of commercialization data. Without consistent practices for verifying the accuracy of these data, the usefulness of the government-use portion of SBA’s database as a tool for evaluating the SBIR program’s success in increasing commercialization may be limited.

To verify the accuracy of award recipients’ commercialization data, DOD performs an annual review of all projects in its Company Commercialization Database, which contains the commercialization data it gathers from award recipients. This review includes checks to ensure that prior award recipients applying for new awards are not reporting the same project results more than once, substituting the results of one project for that of another, or incorrectly reporting sales to third parties.
According to DOD officials, after its 2010 review, the agency sent approximately 300 e-mail queries to applicants whose reported commercialization data were identified as having potential problems. The officials said that applicants that do not respond to such queries are blocked from submitting further applications until concerns related to their commercialization reports are addressed. Even with these verification activities, however, Army officials expressed concern to us about the accuracy of the applicants’ self-reported commercialization data; these officials stated their preference for using data from the Federal Procurement Data System, which contains government information on federal contracts, including sales. Moreover, a Navy official acknowledged the possibility that additional verification activities, such as selective spot visits to SBIR award recipients, could further deter recipients from misrepresenting their commercialization success, although he noted that such activities would compete with other administrative priorities. Similarly, officials from DOE and NIH stated that additional verification activities would be useful but also said that they needed to devote program administration resources to higher priority activities, such as preparing solicitations and supporting review panels for applications. SBA’s implementation of the government-use portion of its database should improve the comparability of commercialization data available for programwide evaluation. Nevertheless, long-standing challenges may continue to impair programwide evaluation of progress in increasing commercialization of SBIR-funded technologies. As we reported in October 2006, notable among these challenges is that prior award recipients that are no longer participating in the SBIR program are not required to provide updated commercialization data and may prefer not to do so. 
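DOD's review procedures are described only at a high level. As a rough illustration, one of the checks described above (flagging a company that appears to report the same project results under more than one award) might be sketched as follows; the data shape and matching rule are our assumptions, not DOD's implementation.

```python
from collections import defaultdict

def flag_duplicate_reports(reports):
    """Illustrative sketch of a duplicate-reporting check in the spirit of
    DOD's annual review; not DOD's actual logic. Each report is a dict with
    'company', 'project_id', and 'sales'. Returns cases where one company
    reports identical sales results under more than one project."""
    seen = defaultdict(list)
    for r in reports:
        # Group projects by (company, reported sales figure); identical
        # figures across different projects are a candidate for follow-up.
        seen[(r["company"], r["sales"])].append(r["project_id"])
    return {key: ids for key, ids in seen.items() if len(ids) > 1}
```

A flag produced by a check like this would not prove misreporting; it would simply generate the kind of follow-up query DOD describes sending to applicants whose data show potential problems.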
For example, DOD indicated in written comments to us that, from 2008 to 2010, 46 percent of nonparticipating prior phase II award recipients did not provide updates despite DOD’s request that they update commercialization data annually after their awards ended. Similarly, in a report on its 2002 survey, NASA observed that many recipients of multiple awards elected not to respond to its survey despite “extensive telephone follow-up” and that many recipients that ultimately responded “would likely have preferred not to.” Some award recipients may be reluctant to provide commercialization data because the data are business-sensitive. SBA officials told us that mechanisms to require or encourage nonparticipating recipients to report their data need to be explored. A NASA official told us that effective incentives to encourage wider voluntary reporting might include publicizing commercial success or giving monetary prizes for success.

The difficulties agencies face in persuading prior award recipients to volunteer commercialization information can be compounded by challenges in maintaining contact with them. Specifically, prior award recipients can change names or personnel, go out of business, or be sold during the 10 or more years that it can take for an SBIR-funded technology to reach the marketplace. In NIH’s 2002 survey of award recipients, for example, the portion of the sample that was “unusable”—a group that consisted primarily of recipients that no longer existed or could not be found—increased from 2 percent in the first year after the end of the award to 52 percent in the tenth year.

Programwide evaluation—particularly efforts to compare commercialization success across agencies—can also be complicated by differences in the time required to commercialize various types of SBIR-funded technologies.
Comparing agencies’ commercialization results at a given point in time may not present a true picture of each agency’s success because some agencies fund technologies that are relatively close to being market-ready while others fund technologies that need more extensive development or regulatory approval. Furthermore, as we have previously reported, the SBIR program’s other goals remain important, and comparisons that focus on commercialization may not adequately take into account progress toward these goals. For example, one agency official told us that some SBIR-funded technologies, such as those related to national security, may never have great commercial potential but are important to the agency’s mission. DOD, DOE, NASA, NIH, and NSF have designed their SBIR solicitations to address the program’s purposes of using small businesses to meet federal R&D needs, stimulating technological innovation, and increasing commercialization of innovations derived from federal R&D efforts, and they have further addressed commercialization by providing technical assistance or matching funds to award recipients. These agencies have also conducted outreach and other activities to address the SBIR purpose of encouraging participation in technological innovation by small businesses owned by disadvantaged individuals and women. However, evaluation of progress in achieving the program’s purposes is impeded by a lack of accurate, comparable, and complete data on program results. For example, it is difficult to evaluate the program’s effectiveness in encouraging small businesses owned by disadvantaged individuals or women to participate in technological innovation because SBA does not collect data on the number of applications submitted by such businesses. 
It is also difficult to evaluate the program’s effectiveness in increasing commercialization of SBIR-funded technologies because, although agencies participating in the program have gathered commercialization data for their own purposes, comparable data on commercialization are not available across agencies. SBA’s planned implementation of a government-use portion of its database should go some way toward improving the comparability of the commercialization data as they are systematically collected using common metrics. However, the commercialization data that the database is intended to contain are largely self-reported by award recipients that may have an incentive to overstate their commercialization success. DOD has adopted practices for verifying the accuracy of commercialization data it collects from prior award recipients, but most of the participating agencies we reviewed did not verify the accuracy of commercialization data from their prior award recipients, and SBA has not identified best practices for participating agencies to use in doing so. As long as participating agencies do not consistently verify the accuracy of commercialization data, the usefulness of the government-use portion of SBA’s database as a tool for evaluating the SBIR program’s success in increasing commercialization may be limited.

To build upon efforts to implement a government-use database for program evaluation, we recommend that the Administrator of the Small Business Administration work with participating SBIR agencies to take the following two actions:

- collect data on the number of applications submitted by small businesses owned by disadvantaged individuals and women, and
- identify best practices for verifying the accuracy of data related to progress in increasing commercialization.
We provided a draft of this report to SBA, the Departments of Defense and Energy, the National Aeronautics and Space Administration, the National Institutes of Health, and the National Science Foundation for review and comment. SBA generally agreed with our findings as well as our recommendations, which it offered an action plan to address. Specifically, with respect to our first recommendation, SBA stated that, beginning in fiscal year 2012, it plans to use its database to collect information from agencies about applicants that did not receive awards—information that could include whether the applicants were small businesses owned by disadvantaged individuals or women. Further, SBA indicated that it plans to hold a workshop in fall 2011 for participating SBIR agencies to share best practices for reaching out to small businesses owned by disadvantaged individuals. According to SBA, the workshop should result in a commitment from agencies to develop baselines for numbers of applications from such businesses. Regarding our second recommendation, SBA indicated that it will seek to identify best practices and methods for verifying the accuracy of commercialization data and will work with agencies toward implementation of those practices and methods. SBA also noted that its effort to collect commercialization data is intended to establish a baseline, against which SBA can review progress in increasing commercialization. SBA’s letter conveying its comments is contained in appendix II. Among the SBIR participating agencies that we reviewed, DOE and NSF concurred with our recommendations and provided general comments, which are included in appendixes III and IV, respectively. Both DOE and NSF also made technical comments, which we have incorporated into our report as appropriate.
In its general comments, DOE stated that it collects information on the number of applications submitted by small businesses owned by disadvantaged individuals and women and is willing to report the data to SBA. DOE further stated that it does not verify commercialization data because of resource limitations—not a belief that verification is of limited value—and it expressed an interest in learning about best practices for verification of these data. In addition, DOE commented that, until universal metrics are identified for measuring the success of SBIR programs across agencies, the compatibility of available data among agencies will remain a secondary concern. NSF stated that it concurs with the underlying goals of our recommendations. Moreover, NSF affirmed its commitment to implementation of a government-use database for program evaluation, collection of data on participation in small business innovation, and identification of best practices for verification of commercialization data. The remaining agencies—DOD, HHS, and NASA—neither agreed nor disagreed with our recommendations but provided technical comments, which we have incorporated into our report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, Administrators of SBA and NASA, Secretaries of Defense and Energy, Directors of NIH and NSF, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. 
In conducting this study, we reviewed Small Business Innovation Research (SBIR) program-related activities of the Small Business Administration (SBA) and 5 of the 11 SBIR participating agencies—the Department of Defense (DOD), Department of Energy, National Aeronautics and Space Administration, Department of Health and Human Services’ National Institutes of Health (NIH), and National Science Foundation (NSF). For the two agencies with the largest SBIR budgets—DOD and NIH—we reviewed program activities conducted by the three participating subcomponent agencies with the largest SBIR budgets because some key activities are carried out at that level. Specifically, for DOD, we examined the SBIR programs of the Army, Air Force, and Navy, and for NIH, we examined the programs of the National Institute of Allergy and Infectious Diseases; the National Cancer Institute; and the National Heart, Lung, and Blood Institute. The five participating agencies we reviewed accounted for about 96 percent of the total dollars awarded by the program in fiscal year 2009. We reviewed applicable laws and regulations and literature on the SBIR program, including our prior reports and assessments by a committee of the National Academy of Sciences’ National Research Council. To obtain further context for our review, we attended two national conferences and a National Research Council workshop on the SBIR program, and we interviewed National Research Council staff with program expertise.
More specifically, to determine how participating agencies have addressed the SBIR program’s four overarching purposes when implementing their programs, we reviewed SBA documents and data, including SBA’s policy directive on implementation of the SBIR program, minutes from selected meetings of SBA and SBIR program directors, SBA’s SBIR annual report for fiscal year 2008 (the latest year for which an annual report was available), and data on the dollar value of SBIR awards by participating agencies in fiscal year 2009 (the latest year for which SBA could provide the data). We examined relevant documents from participating agencies for fiscal years 2008 through 2010, and for fiscal year 2011 when possible. Documents we reviewed included solicitations for applications issued by each of these agencies, instructions to applicants, minutes from meetings of SBA and SBIR program directors, performance plans and reports, descriptions of commercialization assistance provided to SBIR awardees, and minutes from meetings of agency advisory committees. In addition, we identified and interviewed SBIR program officials at each agency and officials responsible for implementing programmatic goals. For these interviews, we asked a standard set of questions to help ensure that we obtained consistent information about the SBIR programs at each of the agencies. We also interviewed inspector general staff at NSF, which facilitated SBIR-related activities conducted by the Council of Inspectors General on Integrity and Efficiency. Finally, we interviewed representatives of trade associations about their views of the SBIR program. We selected the trade associations on the basis of their familiarity with the program, the technologies on which they focus, and whether their membership includes small businesses owned by disadvantaged groups and women. The views of the representatives of these associations cannot be generalized to other associations.
To determine the extent of SBIR program data available to evaluate progress in increasing commercialization of SBIR technologies, we reviewed documents related to SBA’s SBIR database, including terms of work, work schedules, and proposed guidance related to the development of the government-use portion of the database. For the five SBIR participating agencies whose programs we reviewed, we examined documents dating from 2002 through 2011; these documents reflected commercialization data for SBIR award recipients that had received awards from 1983 (the first year in which agencies issued SBIR awards) through 2010. The documents we reviewed included surveys and other data collection instruments that the agencies used to gather commercialization information from award recipients; reports on data collection results, including any information on SBIR award spending during the years corresponding to those covered in each of the commercialization data collection efforts; and anecdotal descriptions of commercialization success. We also reviewed agency solicitations from fiscal years 2008 through 2010—and for fiscal year 2011 when possible— that contained reporting requirements for award recipients. We interviewed officials at SBA and each of the five participating agencies included in our review to obtain information on the specific commercialization metrics they use to monitor the commercialization experience of award recipients, the history of each agency’s data collection efforts, and the agencies’ experience in obtaining such information from current and past award recipients. We conducted this performance audit from June 2010 to August 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence we obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, key contributors to this report include Cheryl Williams, Assistant Director; Antoinette Capaccio; Stephen Carter; Nancy Crothers; Laurie Ellington; Cindy Gilbert; Cynthia Norris; Christine Senteno; and Kiki Theodoropoulos.
Federal agencies with a budget of at least $100 million for research and development (R&D) conducted by others must participate in the Small Business Innovation Research (SBIR) program. SBIR has four purposes: meet federal R&D needs; stimulate technological innovation; increase commercialization (e.g., sales) of innovations based on federal R&D; and encourage participation in innovation by small businesses owned by disadvantaged individuals and women. The Small Business Administration (SBA) oversees the efforts of participating agencies, which make awards to small businesses using SBIR funds. Congress directed SBA to develop a database with commercialization data for government use in evaluating the program. GAO was asked to determine (1) how agencies have addressed SBIR’s purposes and (2) the extent of data available to evaluate progress in increasing commercialization. GAO analyzed program documents and interviewed officials at SBA and five agencies that accounted for about 96 percent of SBIR awards. For fiscal years 2008 through 2011, the participating agencies GAO reviewed—the Department of Defense (DOD), the Department of Energy (DOE), the National Aeronautics and Space Administration, the Department of Health and Human Services’ National Institutes of Health, and the National Science Foundation (NSF)—addressed SBIR’s purposes through solicitations for award applications, technical assistance or matching funds programs, and outreach. In particular, the agencies addressed SBIR purposes related to meeting federal R&D needs and stimulating technological innovation through their solicitations, which included research topics that were designed to meet agencies’ respective R&D or mission needs. Agencies also addressed commercialization of innovations through solicitations, as well as through technical assistance for award recipients or through matching funds programs. 
To provide technical assistance, the agencies contracted with vendors and consultants for help in developing business plans and identifying potential customers for SBIR award recipients, among other things. Agency matching funds programs provided additional SBIR funds to award recipients that obtained commitments from outside investors. Agencies generally addressed the remaining SBIR purpose, encouraging participation by small businesses owned by disadvantaged individuals and women, through outreach activities aimed at a broader audience, such as sharing information on Web sites. However, the effectiveness of these efforts is difficult to evaluate, in part because SBA does not collect data on the number of SBIR applications submitted by such businesses, thus hindering analysis of trends in their submission of applications. Comparable data are not available across participating agencies to evaluate progress in increasing commercialization of SBIR technologies. SBA has not yet expanded an existing database to include commercialization data for program evaluation, but the agency has hired a contractor and allocated funds to develop the expanded database by August 2011. SBA has also worked with participating agencies to develop common metrics for commercialization. In the absence of the expanded database, agencies have independently gathered commercialization data for their own use that are not comparable. In collecting these data, agencies differed in the types of data collection instruments used, dates the instruments were administered, award recipient populations queried, and types of data requested. Furthermore, with the exception of DOD, agencies that GAO reviewed did not generally take steps to verify commercialization data they collected from award recipients, so the accuracy of the data is largely unknown. 
SBA has worked with SBIR agencies to identify best practices in other areas of program management but has not identified best practices for agencies to use in verifying the accuracy of commercialization data. Implementing the expanded database should improve the comparability of commercialization data available, but a lack of consistent practices for verifying the accuracy of these data may limit their usefulness for programwide evaluation. GAO recommends that SBA work with participating agencies to (1) collect data on applications from small businesses owned by disadvantaged individuals and women and (2) identify best practices for verification of commercialization data. SBA, DOE, and NSF generally agreed with these recommendations; the other agencies GAO reviewed neither agreed nor disagreed.
Antibiotics are substances that destroy microorganisms or inhibit their growth; they have been used for 70 years to treat people who have bacterial infections. In this report, the term antibiotic is used to refer to any substance used to kill or inhibit microorganisms, also sometimes referred to as an antimicrobial. Resistance to penicillin, the first broadly used antibiotic, started to emerge soon after its widespread introduction. Since that time, resistance to other antibiotics has emerged, and antibiotic resistance is becoming an increasingly serious public health problem worldwide. Bacteria acquire antibiotic resistance through mutation of their genetic material or by acquiring genetic material that confers antibiotic resistance from other bacteria. In addition, some bacteria developed resistance to antibiotics naturally, long before the development of commercial antibiotics. Once bacteria in an animal or human host develop resistance, the resistant strain can spread from person to person, animal to animal, or from animals to humans. Antibiotic-resistant bacteria can spread from animals and cause disease in humans through a number of pathways (see fig. 1). For example, unsanitary conditions at slaughter plants and unsafe food handling practices could allow these bacteria to survive on meat products and reach a consumer. Resistant bacteria may also spread to fruits, vegetables, and fish products through soil, well water, and water runoff contaminated by fecal matter from animals harboring these bacteria. If the bacteria are disease-causing, the consumer may develop an infection that is resistant to antibiotics. However, not all bacteria cause illness in humans. For example, there are hundreds of unique strains of Escherichia coli (E. coli), the majority of which are not dangerous. Indeed, while some strains of E. coli are dangerous to humans, many E. coli bacteria strains are a normal component of human and animal digestive systems. 
The use of antibiotics in animals poses a potential human health risk, but it is also an integral part of intensive animal production in which large numbers of poultry, swine, and cattle are raised in confinement facilities. Over time, food animal production has become more specialized and shifted to larger, denser operations, known as concentrated animal feeding operations. According to a 2009 USDA study, The Transformation of U.S. Livestock Agriculture: Scale, Efficiency, and Risks, this shift has led to greater efficiencies in agricultural productivity—meaning more meat and dairy production for a given commitment of land, labor, and capital resources—and lower wholesale and retail prices for meat and dairy products. However, the study notes that larger farms with higher concentrations of animals may be more vulnerable to the rapid spread of animal diseases, which producers may combat by using antibiotics. Some producers elect to raise food animals without using antibiotics, in what are known as alternative modes of production (see app. II for more information about alternative modes of production). Modern dairy production is diverse, ranging from cows housed indoors year-round to cows maintained on pasture nearly year-round. In the United States, milk comes primarily from black and white Holstein cows genetically selected for milk production. Over the years, the concentration of more cows on fewer farms has been accompanied by dramatic increases in production per cow, arising from improved genetic selection, feeds, health care, and management techniques. Expansion to larger herd sizes has also allowed producers to increase the efficiency of production and capitalize on economies of scale. When a cow is no longer able to breed and produce milk, it is usually sold to the market as beef. According to the National Milk Producers Federation, dairy producers use antibiotics to treat mastitis, an inflammation of the udder, and other diseases. 
Any milk produced during antibiotic treatment, and for a specific withdrawal period after treatment has ceased, must be discarded in order to prevent antibiotic residues in milk. This discarded milk imposes an economic cost on dairy producers, so producers generally avoid treating dairy cows with antibiotics when possible. According to the National Milk Producers Federation, dairy producers do not use antibiotics that are medically important in human medicine for growth promotion. Antibiotic use in food animals generally falls into four categories:

• Disease treatment: administered only to animals exhibiting clinical signs of disease.

• Disease control: administered to a group of animals when a proportion of the animals in the group exhibit clinical signs of disease.

• Disease prevention: administered to a group of animals, none of which are exhibiting clinical signs of disease, in a situation where disease is likely to occur if the drug is not administered.

• Growth promotion: sometimes referred to as feed efficiency; administered to growing, healthy animals to promote increased weight gain. Such uses are typically administered continuously through the feed or water on a herd- or flock-wide basis.

Although growth promotion use is not directed at any specifically identified disease, many animal producers believe the use of antibiotics for growth promotion has the additional benefit of preventing disease, and vice versa. In recent years, both FDA and WHO have sought to identify antibiotics that are used in both animals and people and that are important to treat human infections, also known as medically important antibiotics. Specifically, according to FDA, a medically important antibiotic is given the highest ranking—critically important—if it is used to treat foodborne illness and if it is one of only a few alternatives for treating serious human disease. 
For example, the fluoroquinolone class of antibiotics is critically important to human medicine because it is used to treat foodborne illnesses caused by the bacteria Campylobacter (one of the most common causes of diarrheal illness in the United States), and it is also one of only a few alternatives for treating serious multidrug resistant infections in humans. Some fluoroquinolones are also approved to treat respiratory infections in cattle. Two federal departments are primarily responsible for ensuring the safety of the U.S. food supply, including the safe use of antibiotics in food animals—HHS and USDA. Each department contains multiple agencies that contribute to the national effort to assess, measure, and track antibiotic use and resistance (see table 1). Both HHS and USDA officials have stated that it is likely that the use of antibiotics in animal agriculture leads to some cases of antibiotic resistance among humans and that medically important antibiotics should be used judiciously in animals. As mentioned, HHS and USDA agencies participate in the Interagency Task Force on Antimicrobial Resistance, which developed a plan in 2001 to help federal agencies coordinate efforts related to antibiotic resistance. The 2001 interagency plan contains 84 action items organized in four focus areas: surveillance, prevention and control, research, and product development. According to the 2001 interagency plan, public health surveillance, which includes monitoring for antibiotic resistance, is the ongoing and systematic collection, analysis, and interpretation of data for use in the planning, implementation, and evaluation of public health practice. Many of the plan’s action items focus on antibiotic use and resistance in humans, and some action items address the use of antibiotics in agriculture, including food animal production, and are directly relevant to this report. 
For example, one action item in the surveillance focus area states the agencies’ intentions to develop and implement procedures for monitoring antibiotic use in agriculture, as well as in human medicine. Another states that agencies will expand surveillance for antibiotic-resistant bacteria in sick and healthy food animals on farms and at slaughter plants, as well as in retail meat, such as chicken, beef, and pork. The action plan also contains action items related to research on alternatives to antibiotics and providing education to producers and veterinarians about appropriate antibiotic use. Since 2001, HHS and USDA have used the interagency task force to coordinate their activities on antibiotic resistance. For example, each year the task force produces an annual report listing activities completed in that year related to the 2001 interagency plan. The task force recently released a 2010 version of the interagency plan, which is still in draft form but is expected to be finalized this year. The draft 2010 interagency plan contains some new initiatives and also reformulates many of the action items listed in the 2001 plan to be more action-oriented. The 2001 interagency plan discusses two types of data needed to understand antibiotic resistance—data on the amount of antibiotics used in food animals (“use data”) and data on the level of antibiotic resistance in bacteria found in food animals and retail meat (“resistance data”). Agencies have collected some data to track antibiotic use in animals, but these data lack crucial details identified by the 2001 interagency plan as essential for agencies to examine trends and understand the relationship between use and resistance. To collect data on antibiotic resistance, agencies have leveraged existing programs, but because these programs were designed for other purposes, their sampling methods do not yield data that are representative of antibiotic resistance in food animals and retail meat across the United States. 
USDA also collected data on both use and resistance in a pilot program that was discontinued. The 2001 interagency plan set a “top priority” action item of monitoring antibiotic use in veterinary medicine, including monitoring data regarding species and purpose of use. The plan stated this information is essential for interpreting trends and variations in rates of resistance, improving the understanding of the relationship between antibiotic use and resistance, and identifying interventions to prevent and control resistance. The task force’s draft 2010 interagency plan reiterates the importance of monitoring antibiotic use and sets a goal to better define, characterize, and measure the impact of antibiotic use in animals. Three federal efforts collect data about antibiotic use in food animals (see table 2). One of these efforts, run by FDA, was created by Congress as a reporting requirement for pharmaceutical companies to provide sales data. The other two efforts are run by USDA agencies and collect on-farm data on antibiotic use by incorporating questions into existing surveys of food animal producers. Since our 2004 report, FDA has begun to collect and publish data from pharmaceutical companies on antibiotics sold for use in food animals, as required by the Animal Drug User Fee Amendments of 2008 (ADUFA). Under ADUFA, the sponsor of an animal antibiotic—generally a pharmaceutical company—must report annually to FDA: (1) the amount of each antibiotic sold by container size, strength, and dosage form; (2) quantities distributed domestically and quantities exported; and (3) a listing of the target animals and the approved ways each antibiotic can be used (called indications). Section 105 of ADUFA also directs FDA to publish annual summaries of these data. To fulfill this requirement, FDA published the first of these reports on its public Web site in December 2010. (See app. III for examples of antibiotic sales data collected by FDA.) 
However, to protect confidential business information, as required by statute, FDA’s report summarizes the sales data by antibiotic class, such as penicillin or tetracycline, rather than by specific drug and also aggregates sales data for antibiotic classes with fewer than three distinct sponsors. In submitting the original ADUFA legislation for the House of Representatives to consider, the House Committee on Energy and Commerce stated that it expected these data to further FDA’s analysis of, among other things, antibiotic resistance, but the data do not include crucial details that would be needed to do so. Specifically, ADUFA does not require FDA to collect information on the species in which antibiotics are used and the purpose of their use. According to representatives of all the producer and public health organizations we spoke with, because FDA’s sales data lack information on the species in which the antibiotic is used, these data do not allow the federal government to achieve the antibiotic use monitoring action item in the 2001 interagency plan, including interpreting trends and variations in rates of resistance, improving the understanding of the relationship between antibiotic use and resistance, and identifying interventions to prevent and control resistance. For example, a representative of one public health organization stated that species-specific data are needed to link antibiotic use in animals with resistance in animals and food. Representatives of most of the public health organizations also stated that the government needs to collect data on the purpose of antibiotic use—that is, whether the antibiotic is being given for disease treatment, disease control, disease prevention, or growth promotion. Furthermore, representatives of some public health organizations indicated that data on antibiotic use should be integrated with information on antibiotic resistance to allow analysis of how antibiotic use affects resistance. 
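To make the class-level aggregation rule concrete, the following minimal Python sketch (our illustration, not FDA's actual procedure; all class names, sponsors, and kilogram figures are hypothetical) sums sales by antibiotic class and pools any class with fewer than three distinct sponsors into a single suppressed line:

```python
# Illustrative sketch of the confidentiality rule described above:
# sales are summarized by antibiotic class, and any class with fewer
# than three distinct sponsors is pooled into one aggregate line.
# All class names, sponsors, and quantities are hypothetical.
from collections import defaultdict

def summarize_sales(records):
    """records: iterable of (antibiotic_class, sponsor, kg_sold) tuples."""
    totals = defaultdict(float)
    sponsors = defaultdict(set)
    for cls, sponsor, kg in records:
        totals[cls] += kg
        sponsors[cls].add(sponsor)

    summary, pooled = {}, 0.0
    for cls, kg in totals.items():
        if len(sponsors[cls]) >= 3:
            summary[cls] = kg      # three or more sponsors: reportable
        else:
            pooled += kg           # fewer than three: suppress the class
    if pooled:
        summary["Not independently reported"] = pooled
    return summary

sales = [
    ("Tetracyclines", "Sponsor A", 100.0),
    ("Tetracyclines", "Sponsor B", 50.0),
    ("Tetracyclines", "Sponsor C", 25.0),  # 3 sponsors -> reported
    ("Penicillins", "Sponsor A", 40.0),
    ("Penicillins", "Sponsor B", 10.0),    # 2 sponsors -> pooled
]
print(summarize_sales(sales))
# -> {'Tetracyclines': 175.0, 'Not independently reported': 50.0}
```

Note that even the pooled total remains in the published summary, so overall sales volumes are preserved while individual classes with few sponsors cannot be attributed to a particular company.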
However, a representative of an animal pharmaceutical organization stated that FDA should not attempt to collect national-level antibiotic use data and should instead collect local data to facilitate study of farm management practices in order to help farmers better use antibiotics. According to FDA officials, sales data can provide an overall picture of the volume of antibiotics sold for use in animals. However, FDA faces several challenges in collecting detailed antibiotic sales data from drug sponsors. First, if an antibiotic is approved for use in multiple species, drug sponsors may not be able to determine how much of their product is used in a specific species. Second, if an antibiotic is approved for multiple purposes, drug sponsors also may not be able to determine how much is used for each purpose. Third, antibiotics may be stored in inventory or expire before they are used, so the quantity sold and reported to the FDA may not equal the quantity actually used in animals. FDA officials acknowledged the limitation of their current sales data and noted that the agency is exploring potential approaches to gather more detailed sales data or other information on actual antibiotic use. The United States is the world’s largest producer of beef. The beef industry is roughly divided into two production sectors: cow-calf operations and cattle feeding. Beef cattle are born in a cow-calf operation, where both cows and calves are fed grass in a pasture year-round. Once weaned, most cattle are sent to feedlots, where they are fed grain for about 140 days. The beef industry has become increasingly concentrated. According to USDA, feedlots with 1,000 or more head of cattle comprise less than 5 percent of total feedlots in the United States, but market 80 to 90 percent of fed cattle. Weaning, shipping, and processing put stress on cattle and compromise their immune systems. 
According to the National Cattlemen’s Beef Association, beef producers use antibiotics to treat common illnesses, including respiratory disease, eye infections, intestinal disease, anaplasmosis (a red blood cell parasite), and foot infections. Some cattle producers also use antibiotics for growth promotion. Two USDA agencies collect data on antibiotic use from food animal producers by incorporating questions into existing surveys. One of these surveys, managed by APHIS, is the National Animal Health Monitoring System (NAHMS), a periodic, national survey of producers that focuses on animal health and management practices. APHIS staff collect information from producers on how antibiotics are administered (e.g., in water, feed, or injection), what antibiotics they prefer for various ailments, and in what situations they would use an antibiotic. To collect this information, APHIS staff visit farms multiple times over the course of 3 to 6 months and survey producers’ practices. Previous NAHMS surveys have examined management practices for dairy cows, swine, feedlot cattle, cow-calf operations, small broiler chicken flocks, and egg-laying chicken flocks, among other species. APHIS officials told us that one of NAHMS’ strengths is its national scope and that NAHMS can be used to examine changes in animal management practices, including antibiotic use practices, between NAHMS surveys. However, as we reported in 2004, NAHMS produces a snapshot of antibiotic use practices in a particular species, but the data it collects cannot be used to monitor trends in the amount of antibiotics used over time. According to APHIS officials, these limitations remain today. 
For example, these officials said that NAHMS is limited by long lag times (approximately 6 years) between surveys of the same species, changes in methodology and survey populations between studies, reliance on voluntary participation by food animal producers, and collection of qualitative, rather than quantitative, information on antibiotic use. Since our 2004 report, USDA’s ERS has begun to collect information on antibiotic use through the Agricultural Resource Management Survey (ARMS)—a survey of farms conducted since 1996—though these data have limitations similar to those of NAHMS. ERS uses ARMS data to study how production practices, including antibiotic use, affect financial performance and whether specific production practices can substitute for other production practices. For example, a January 2011 ERS study found that broiler chicken producers who forgo subtherapeutic uses of antibiotics (i.e., use in chickens that are not ill) tend to use distinctly different production practices, such as testing flocks and feed for pathogens, fully cleaning chicken houses between each flock, and feeding chickens exclusively from vegetable sources. However, like NAHMS, ARMS cannot be used to examine trends in antibiotic use over time because ERS does not resurvey the same farms over time or conduct annual surveys on specific commodities. According to officials from agencies and some organizations, it is challenging to collect detailed data on antibiotic use in animals from producers for a variety of reasons. First, producers may not always maintain records on antibiotic use. Second, producers who do collect these data may be reluctant to provide them to the federal government voluntarily. FDA is exploring its legal options for requiring producers to report antibiotic use data to FDA. In addition, we observed during our site visits that the types of use data producers collected varied widely. 
For example, one producer used electronic systems to track all treatments by individual animal, whereas others maintained paper records, and one maintained no records. Also, some food animal species, such as broiler chickens, are generally produced by integrated companies, which own the chickens from birth through processing and contract with a grower to raise them. These growers often receive feed as part of a contract and may not know whether that feed contains antibiotics. For example, one grower we visited did not know that his animals received antibiotics for growth promotion, though the veterinarian from his integrated company indicated that they did. Surveys, such as NAHMS and ARMS, that rely on producers or growers to provide antibiotic use data may be particularly limited by this lack of available data. Moreover, collecting data on-farm from producers is expensive for the federal agencies involved because of the number of personnel and the amount of time required. Agencies also face challenges collecting antibiotic use data from other sources. For example, use data gathered from veterinarians may be of limited value because, according to FDA officials, many antibiotics can be purchased without veterinary involvement. In cases where antibiotics do require a prescription, the usefulness of records maintained by veterinarians may vary. For example, one veterinary clinic we visited maintained extensive paper records dating back 2 years, but because they were not electronic, these records would be difficult to analyze. In addition, a veterinary organization we spoke with stated that it would be cumbersome for veterinarians to provide this information to an agency because there is no centralized reporting mechanism, such as an electronic database, for them to do so. 
According to an official from an organization representing the animal feed industry, feed mills also maintain records on antibiotics mixed into animal feed, including the amount of antibiotic used and the type of feed the antibiotic went into. Although feed mills do not intentionally track antibiotic use by species, the official said that collectively, this information could be used to track antibiotic use by species. However, FDA officials told us that collecting use data from feed mills would require the development of a new reporting mechanism for these data. In 2004, we reported that the federal government collects resistance data through the National Antimicrobial Resistance Monitoring System (NARMS), established in 1996. NARMS is an interagency effort that monitors antibiotic resistance in certain bacteria under three programs: the animal component, led by ARS, samples bacteria from food animals at slaughter plants; the retail meat component, led by FDA, samples retail meat purchased from grocery stores; and the human component, led by CDC, samples bacteria from humans (see table 3). FDA serves as the funding and coordinating agency. From fiscal years 2006 through 2010, the NARMS budget remained constant at $6.7 million, with ARS, FDA, and CDC receiving $1.4 million, $3.5 million, and $1.8 million, respectively. NARMS received a funding increase in fiscal year 2011, to $7.8 million. The 2001 interagency plan contains an action item stating agencies will design and implement a national antibiotic resistance surveillance plan. Among other things, the 2001 interagency plan states that agencies will expand and enhance coordination of surveillance for drug-resistant bacteria in sick and healthy animals on farms, food animals at slaughter plants, and retail meat. The plan also states that collecting data on antibiotic resistance will help agencies detect resistance trends and improve their understanding of the relationship between use and resistance. 
The draft 2010 interagency plan also reiterates the importance of resistance surveillance and includes several action items aimed at strengthening, expanding, and coordinating surveillance systems for antibiotic resistance. According to WHO’s Surveillance Standards for Antimicrobial Resistance, which provides a framework to review existing antibiotic resistance surveillance efforts, populations sampled for surveillance purposes should normally be representative of the total population—in this case, food animals and retail meat in the United States. Additionally, WHO’s surveillance standards state that it is important to understand the relationship of the population surveyed to the wider population, meaning that agencies should understand how food animals and retail meat surveyed in NARMS are similar to food animals and retail meat throughout the United States. The food animal component of NARMS, led by ARS, gathers bacteria from food animal carcasses at slaughter plants and tests them for antibiotic resistance, but because of a change in sampling method, it has become less representative of food animals across the United States since we reported in 2004. ARS receives these samples from an FSIS regulatory program called the Hazard Analysis and Critical Control Points (HACCP) verification testing program, which is designed to, among other things, reduce the incidence of foodborne illness. FSIS inspectors work in slaughter plants around the country, where they collect samples from carcasses to test for foodborne pathogens, among other duties. When we last reported on antibiotic resistance in 2004, HACCP verification testing included two sampling programs—a nontargeted program, in which inspectors sampled randomly selected plants, and a targeted program, in which slaughter plants with a higher prevalence of bacteria causing foodborne illness were more likely to be selected for additional sampling. 
In 2006, FSIS eliminated the random sampling program, which FSIS officials told us has allowed the agency to use its resources more effectively. FSIS now conducts only targeted sampling of food animals in its HACCP verification testing. This nonrandom sampling method means the NARMS data obtained through HACCP are not representative of food animals across the country and cannot be used for trend analysis because bacteria tested by NARMS are now collected at greater rates from slaughter plants that are not in compliance with food safety standards. According to FDA officials, due to this sampling method, the resulting data are skewed for NARMS purposes. The NARMS retail meat component, led by FDA, collects samples of meat sold in grocery stores and tests them for antibiotic-resistant bacteria, but these samples may not be representative of retail meat throughout the United States. The program began in 2002 and has since expanded to collect retail meat samples from 11 states: the 10 participant states in CDC’s FoodNet program, which conducts surveillance for foodborne diseases, plus Pennsylvania, which volunteered to participate in retail meat sampling (see table 3 for the types of bacteria tested). Due to its nonrandom selection of states, FDA cannot determine the extent to which NARMS retail meat samples are representative of the United States. FDA collects bacteria from those states that volunteer to participate in the program, so some regions of the country are not represented in the NARMS retail meat program. According to the FDA Science Advisory Board’s 2007 review of NARMS, this lack of a national sampling strategy limits a broader interpretation of NARMS data. According to FDA officials, FDA has not analyzed how representative these samples are of the national retail meat supply in the United States, but officials believe that the samples provide useful data that serve as an indicator for monitoring U.S. retail meat. 
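The effect of targeted, nonrandom sampling on prevalence estimates can be illustrated with a small simulation. All plant counts, contamination rates, and sampling weights below are invented for illustration and are not drawn from NARMS or FSIS data; the point is only that oversampling noncompliant plants inflates the observed resistance rate relative to the population-wide rate:

```python
# Hypothetical illustration of sampling bias: 90 of 100 plants have a
# 5 percent resistant-isolate rate, 10 have a 30 percent rate, so the
# true population-wide prevalence is 0.9*0.05 + 0.1*0.30 = 7.5 percent.
# Oversampling the high-rate plants inflates the estimate.
import random

random.seed(42)
rates = [0.05] * 90 + [0.30] * 10  # per-plant resistant-isolate rates

def observed_prevalence(weights, n=100_000):
    """Draw n isolates from plants chosen with the given sampling weights."""
    chosen = random.choices(rates, weights=weights, k=n)
    hits = sum(random.random() < rate for rate in chosen)
    return hits / n

uniform = observed_prevalence([1] * 100)               # random sampling
targeted = observed_prevalence([1] * 90 + [20] * 10)   # oversample high-rate plants
print(f"random: {uniform:.3f}  targeted: {targeted:.3f}")
```

Under these invented numbers, random sampling recovers roughly the true 7.5 percent prevalence, while the targeted scheme reports about 22 percent even though nothing about the underlying plants has changed; a trend in such data could reflect changes in which plants get sampled rather than changes in resistance.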
FDA is aware of the sampling limitations in NARMS and has articulated a strategic goal of making NARMS sampling more representative and applicable to trend analysis in a draft 2011-2015 NARMS Strategic Plan, which was released for public comment in January 2011. The comment period closed in May 2011, and FDA is currently making changes to the plan based on the submitted comments. The plan states that NARMS will become more representative by, among other things, modifying its animal sampling to overcome the biases resulting from the current reliance on HACCP verification testing and improving the geographic representation of retail meat testing, though FDA has not yet planned specific actions to achieve this goal. According to FDA officials, in light of increased funding for NARMS in 2011, they are exploring ways to improve NARMS sampling to make it more representative. FDA hosted a public meeting in July 2011 to solicit public comment on NARMS animal and retail meat sampling improvements. At this meeting, ARS officials discussed two new on-farm projects—one pilot project, in collaboration with FDA, plans to collect samples from feedlot cattle, dairy cows, and poultry with the goal of evaluating potential sampling sites within the food animal production chain (e.g., on farms or in holding pens at slaughter plants). The second project is in collaboration with Ohio State University and plans to use industry personnel to collect samples from poultry and swine producers. Both projects will test samples for antibiotic resistance through NARMS. Some of the additional suggestions discussed during this meeting included changing FSIS sampling to provide more representative data to NARMS, discontinuing slaughter plant sampling altogether in favor of an on-farm sampling program, and increasing the number of state participants in the retail meat sampling program. 
The NARMS human component, led by CDC, collects and tests bacteria from health departments in all 50 states and the District of Columbia. We reviewed the issue of antibiotic resistance and antibiotic use in humans in 2011. This review examined, among other things, the human component of NARMS and concluded that CDC’s data are nationally representative for four of the five bacteria included in the program. In our interviews, representatives of producer and public health organizations identified several challenges associated with collecting data on antibiotic resistance. First, according to representatives from most public health organizations, ARS, FDA, and CDC are limited by available funding. Sampling and testing bacteria can be expensive, and agencies have to balance competing priorities when allocating resources. For example, in the NARMS retail meat program, FDA could choose to expand retail meat sampling geographically by adding new states to the program, expand the number of bacteria tested, expand the number of samples collected, or expand the types of meat sampled. Second, according to representatives of several producer and public health organizations, agencies may face challenges cooperating and reaching consensus with one another. For example, NARMS reports do not include interpretation of resistance trends across NARMS components. Specifically, while NARMS issues annual Executive Reports that combine data from all three components of NARMS (available on FDA’s Web site), these reports do not provide interpretation of NARMS data. According to FDA officials, it is difficult to develop consensus on interpretation for these reports because agencies differ in their interpretations and preferred presentations of NARMS data. Third, according to the FDA Science Advisory Board’s 2007 review of NARMS, the lag between NARMS data collection and report issuance can sometimes be excessive. 
For example, as of August 2011, the latest NARMS Executive Report covered 2008 data. According to FDA and CDC officials, the process of testing bacteria, analyzing and compiling data, and obtaining approval from agencies is time-consuming and increases the lag time of NARMS reports. In our interviews, representatives of public health organizations also suggested that federal agencies collect additional types of resistance data. First, representatives of several organizations suggested that agencies expand the types of bacteria tested for antibiotic resistance. FDA is aware of this suggestion and has considered whether to add to the types of bacteria it tests. For example, recent studies have discussed methicillin-resistant Staphylococcus aureus (MRSA) in retail meat. MRSA is a type of bacteria that is resistant to several antibiotics, including penicillin, and that can cause skin infections in humans and more severe infections in health care settings. In response, FDA is conducting a pilot study to collect data on the prevalence of MRSA in retail meat. However, according to FDA officials, FDA is unlikely to include MRSA in its regular NARMS testing because general consensus in the scientific community is that food does not transmit community-acquired MRSA infections in humans. Second, representatives of three public health organizations suggested that federal agencies link resistance data with data on outbreaks of foodborne illness in humans, which representatives of one organization stated could help scientists document the link between animal antibiotic use and resistant outbreaks of foodborne illness. According to representatives of this organization, NARMS’ resistance data are not currently linked to information about foodborne disease outbreaks. According to CDC officials, CDC tests bacteria associated with foodborne illness outbreaks in humans for antibiotic resistance, but does not routinely publish these data. 
When we last reported on antibiotic resistance in 2004, APHIS, ARS, and FSIS collected on-farm use and resistance data from 40 swine producers through the pilot Collaboration in Animal Health and Food Safety Epidemiology (CAHFSE), but this program faced challenges in collecting data and was discontinued in 2006 due to lack of funding. By collecting information from the same facilities over time, agencies could use CAHFSE data to examine the relationship between antibiotic use and resistance. However, according to officials at APHIS and ARS, collecting quarterly on-farm data was burdensome and generated a large number of bacterial samples, which were costly to test and store. Although the agencies wanted to use CAHFSE to monitor antibiotic resistance throughout the food production system, officials from all three agencies told us that this “farm to fork” monitoring raised logistical challenges. For example, FSIS officials examined the feasibility of monitoring resistance data through the slaughter plant but discovered that slaughter plants were reluctant to participate in the program due to fear of enforcement actions and confidentiality concerns. According to APHIS officials, CAHFSE released quarterly and annual data summaries, but it did not issue an overall capping report or formal evaluation of the program. CAHFSE was discontinued, but NAHMS continues to collect three types of bacteria (Salmonella, Campylobacter, and E. coli) from a subset of surveyed producers and sends them to ARS for antibiotic resistance testing. However, as discussed earlier in this report, NAHMS data provide a snapshot of a particular species but cannot be used to monitor trends. Additionally, as discussed earlier in this report, ARS has started two on-farm projects to collect bacteria from food animals. In one of these projects, which collects samples from poultry and swine, ARS partners with integrated companies to collect a variety of samples from producers. 
According to an ARS official, because personnel collecting samples accounted for the majority of costs in the CAHFSE program, using industry personnel rather than ARS staff to collect on-farm samples can significantly reduce the costs of on-farm sampling. Although data on both use and resistance can be difficult to collect, other countries have been successful in doing so. For example, the Canadian government’s Canadian Integrated Program on Antimicrobial Resistance Surveillance (CIPARS), created in 2002, provides an example of on-farm collection of antibiotic use and resistance data. In addition to gathering resistance data similar to NARMS, CIPARS also has an on-farm component, which collects antibiotic use information annually from about 100 swine producers and integrates it with data from resistance testing on fecal samples from the same farms. CIPARS addresses funding limitations by restricting on-farm surveillance to swine, sampling annually rather than quarterly, and collecting slaughter plant samples through industry personnel. A CIPARS official stated that the program’s on-farm data could be used to link antibiotic use and antibiotic resistance at the herd level and help identify interventions to prevent antibiotic resistance. CIPARS issues annual reports, which include interpretation of the data such as discussions of trends over time. For example, the most recent report, from 2007, noted an increase in the percentage of bacteria resistant to several antibiotics in samples collected from pigs at slaughter plants from 2003 to 2007. Denmark also has a use and resistance data collection system, called the Danish Integrated Antimicrobial Resistance Monitoring and Research Program (DANMAP). Data collection covers antibiotic use in food animals and humans, as well as antibiotic resistance in food animals, meat in slaughter plants and at retail, and in humans. 
The objectives of DANMAP are to monitor antibiotic use in food animals and humans; monitor antibiotic resistance in bacteria from food animals, food of animal origin, and humans; study associations between antibiotic use and resistance; and identify routes of transmission and areas for further research studies. According to DANMAP officials, Denmark achieves these goals by gathering data on veterinary prescriptions, since all antibiotic use in Denmark is by prescription only. For veterinary prescriptions, these officials told us Denmark gathers data on the medicine being prescribed, the intended species and age group in which the prescription will be used, the prescribed dose of the antibiotic, the prescribing veterinarian, and the farm on which the prescription will be used. Further, DANMAP collects information on antibiotic resistance in food animals, from healthy animals at slaughter plants and from diagnostic laboratory submissions from sick animals. Denmark also gathers both domestically produced and imported retail meat samples from throughout the country to test for antibiotic resistance. DANMAP officials noted that, in Denmark, the industry is responsible for collecting and submitting bacterial samples from slaughter plants for testing, according to a voluntary agreement, and that the industry spends additional funds to do so. DANMAP issues annual reports, which include interpretation of data on antibiotic use in animals and humans, as well as data on antibiotic resistance in bacteria from food animals, retail meat, and humans. Some DANMAP reports also include more detailed analysis of particular areas of interest. For example, the 2009 DANMAP report examined E. coli resistant to penicillins in pigs, retail meat, and humans and found that antibiotic use in both animals and humans contributes to the development of penicillin-resistant E. coli. See appendix IV for more information on DANMAP. 
FDA implemented a risk assessment process for antibiotic sponsors, generally pharmaceutical companies, to mitigate the risk of resistance in food animals to antibiotics approved since 2003. However, the majority of antibiotics used in food animals were approved prior to 2003, and FDA faces significant resource challenges in assessing and mitigating the risk of older antibiotics. Instead, FDA has proposed a voluntary strategy to mitigate this risk but has neither developed a plan nor collected the “purpose of use” data necessary to measure the effectiveness of its strategy. FDA approves for sale, and regulates the manufacture and distribution of, drugs used in veterinary medicine, including drugs given to food animals. Prior to approving a new animal drug application, FDA must determine that the drug is safe and effective for its intended use in the animal. It must also determine that the new drug intended for animals is safe with regard to human health, meaning that there is reasonable certainty of no harm to human health from the proposed use of the drug in animals. FDA may also take action to withdraw an animal drug when new evidence shows that it is not safe with regard to human health under the approved conditions of use. In 2003, FDA issued guidance recommending that antibiotic sponsors include a risk assessment of any new antibiotics for use in food animals. The guidance is known as Evaluating the Safety of Antimicrobial New Animal Drugs with Regard to Their Microbiological Effects on Bacteria of Human Health Concern, Guidance for Industry #152. Under this framework, an antibiotic sponsor would assess three factors: the probability that the resistant bacteria are present in the animal as a consequence of the antibiotic use, the probability that humans would ingest the bacteria in question, and the probability that human exposure to resistant bacteria would result in an adverse health consequence. 
As part of the third factor, the sponsor considers the importance of the antibiotic to treating human illness, under the assumption that the consequences of resistance are more serious for more important antibiotics. The guidance provides a preliminary ranking of antibiotics considered medically important to human medicine; an antibiotic receives the highest ranking, “critically important,” if it is both (1) used to treat foodborne illness and (2) one of only a few alternatives for treating serious human disease. An antibiotic is considered highly important if it meets only one of these two criteria. By considering all three factors, the sponsor estimates the overall risk of the antibiotic’s use in food animals adversely affecting human health. Though this risk assessment process is recommended by FDA, the antibiotic sponsor is free to prove the safety of a drug in other ways and to consult with FDA to decide whether this approach is appropriate for its animal antibiotic application. FDA officials said that, in practice, the risk of antibiotic resistance is considered as part of any new animal antibiotic approval. According to FDA documents, this risk assessment process has been effective at mitigating the risk of resistance posed by new antibiotics because antibiotic sponsors usually consider the risk assessment process in their product development, so the products ultimately submitted for approval are intended to minimize resistance development. Representatives of some producer, public health, and veterinary organizations, as well as an animal pharmaceutical organization, told us that they were generally satisfied with the risk assessment approach. For example, a representative of an animal pharmaceutical organization commented that the risk assessment process was helpful in that it provided a clear road map for drug approvals. 
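The three-factor assessment described above can be sketched as a simple qualitative combination. This is an illustrative sketch only: the ranking levels and the conservative combination rule below are assumptions for illustration, not FDA's actual decision matrix from Guidance #152.

```python
# Illustrative sketch of a qualitative risk combination. Guidance #152
# considers three factors: the probability that resistant bacteria arise in
# the animal, the probability that humans ingest those bacteria, and the
# probability that exposure causes an adverse health consequence (weighted
# by the antibiotic's importance to human medicine). The combination rule
# here (overall risk follows the highest-ranked factor) is an assumption
# made for this sketch, not FDA's actual methodology.

LEVELS = {"low": 1, "medium": 2, "high": 3}
NAMES = {1: "low", 2: "medium", 3: "high"}

def overall_risk(release: str, exposure: str, consequence: str) -> str:
    """Conservatively combine three qualitative rankings: the overall
    estimate is driven by the highest-ranked factor, so an antibiotic
    critically important to human medicine (high consequence) keeps
    the overall estimate high."""
    score = max(LEVELS[release], LEVELS[exposure], LEVELS[consequence])
    return NAMES[score]

# Example: moderate release and exposure, critically important antibiotic.
print(overall_risk("medium", "medium", "high"))  # high
```

Under this sketch, a low ranking on one factor cannot offset a high ranking on another, which mirrors the guidance's premise that the consequences of resistance are more serious for more important antibiotics.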
Representatives of a veterinary organization said they were pleased that new antibiotics were examined using a comprehensive, evidence-based approach to risk assessment. However, several organizations also raised concerns. For instance, a representative of an animal pharmaceutical organization said that FDA’s risk assessment process was an overly protective “blunt instrument,” since FDA would likely not approve any antibiotic product designed for use in feed to prevent or control disease in a herd or flock if the antibiotic is critically important to human health. Representatives from this pharmaceutical organization and a veterinary organization said that FDA’s guidance makes it very difficult for antibiotic sponsors to gain approval for new antibiotics for use in feed or water. In addition, representatives of several public health organizations said that flaws in the criteria FDA used to rank medically important antibiotics may lead the agency to the inappropriate approval of animal antibiotics. For example, they identified a class of antibiotics known as fourth-generation cephalosporins, which are an important treatment for pneumonia in humans and one of the few therapies for cancer patients with certain complications from chemotherapy. However, since neither of these conditions is a foodborne illness, under FDA criteria this antibiotic class is not ranked as critically important in treating human illness, which these organizations said could lead to the approval of fourth-generation cephalosporins for use in food animals and, eventually, increased antibiotic resistance. FDA officials recently said they intend to revisit the antibiotic rankings to reflect current information. However, FDA officials noted that they believed the current ranking appropriately focused on antibiotics used to treat foodborne illnesses in humans given that the objective of the guidance was to examine the risk of antibiotic use in food animals. 
According to FDA officials, the majority of antibiotics used in food animals were approved prior to 2003. FDA faces significant challenges to withdraw agency approval, either in whole or in part, of these antibiotics if concerns arise about the safety of an antibiotic. If FDA initiates a withdrawal action because of safety questions that have arisen after an antibiotic’s approval, the agency has the initial burden of producing evidence sufficient to raise serious questions about the safety of the drug. Once FDA meets this initial burden of proof, the legal burden then shifts to the antibiotic sponsor to demonstrate the safety of the drug. If, after a hearing, the FDA Commissioner finds, based on the evidence produced, that the antibiotic has not been shown to be safe, then the product approval can be withdrawn. FDA’s 5-year effort to withdraw approval for one antibiotic for use in poultry illustrates the resource-intensive nature of meeting the legal burden to withdraw an approved antibiotic. It is the only example of FDA withdrawing an antibiotic’s approval for use in food animals because of concerns about resistance. Specifically, enrofloxacin, approved in October 1996, is in the critically important fluoroquinolone class of antibiotics, which is used to treat foodborne illnesses caused by the bacteria Campylobacter; enrofloxacin itself was used in poultry flocks via the water supply to control mortality associated with E. coli and other organisms. In October 2000, based on evidence of increased fluoroquinolone resistance in bacteria from animals and humans, FDA initiated a proceeding to withdraw its approval for the use of two types of fluoroquinolones in poultry. One pharmaceutical company voluntarily discontinued production, but the manufacturer of enrofloxacin challenged the decision. 
FDA officials told us that it took significant time and resources to gather evidence for the case, even though they had good data showing a correlation between the drug’s approval for use in poultry and increasing resistance rates in humans. After an administrative law judge found that enrofloxacin was not shown to be safe for use in poultry as previously approved, the FDA Commissioner issued the final order withdrawing approval for its use effective September 2005. FDA officials said that from this case they learned that taking a case-by-case approach to withdrawing antibiotics due to concerns over resistance was time-consuming and challenging. In our 2004 review of federal efforts to address antibiotic resistance risk, we reported that FDA was planning to conduct similar risk assessments of other previously approved antibiotics. FDA officials estimated, however, that the enrofloxacin withdrawal cost FDA approximately $3.3 million, which they said was significant. FDA officials told us that conducting individual postapproval risk assessments for all of the antibiotics approved prior to 2003 would be prohibitively resource intensive, and that pursuing this approach could further delay progress on the issue. Instead of conducting risk assessments for individual antibiotics approved prior to 2003, FDA in June 2010 proposed a strategy to promote the “judicious use” of antibiotics in food animals. FDA proposed the strategy in draft guidance titled The Judicious Use of Medically Important Antimicrobial Drugs in Food-Producing Animals, draft Guidance for Industry #209. FDA describes judicious uses as those appropriate and necessary to maintain the health of the food animal. The draft guidance includes two principles aimed at ensuring the judicious use of medically important antibiotics. First, that antibiotic use is limited to uses necessary for assuring animal health—such as to prevent, control, and treat diseases. 
Second, that animal antibiotic use is undertaken with increased veterinary oversight or consultation. To implement the first principle, FDA is working with antibiotic sponsors to voluntarily phase out growth promotion uses of their antibiotics. FDA officials told us they have met with four of the approximately nine major antibiotic sponsors to discuss withdrawing growth promotion uses from their antibiotics’ labels and that they plan to engage with generic antibiotic manufacturers in the near future. To implement the second principle of increasing veterinary oversight of antibiotic use, FDA officials told us that they would like to work with antibiotic sponsors to voluntarily change the availability of medically important antibiotics currently approved for use in feed from over the counter to veterinary feed directive (VFD) status. The majority of in-feed antibiotics are currently available over the counter, but VFD status would instead require these antibiotics to be used with the professional supervision of a licensed veterinarian. In March 2010, FDA issued an advance notice of proposed rulemaking announcing its intention to identify possible changes to improve its current rule on VFDs and seeking public comments on how to do so. FDA officials told us that they received approximately 80 comments by the end of the comment period in August 2010 from interested parties on how to improve the VFD rule, and were taking them into consideration as they drafted the rule, which they hope to publish in 2011. In April 2011, the American Veterinary Medical Association also formed a new committee to help FDA develop practical means to increase veterinary oversight of antibiotic use. Representatives of several producer organizations, veterinary organizations, and an animal pharmaceutical organization expressed concern that FDA’s focus on ending growth promotion uses would adversely affect animal health. 
In particular, these representatives said that some animal antibiotics approved for growth promotion may also prevent disease, though they are not currently approved for that purpose. FDA officials said that, in cases where pharmaceutical companies can prove such claims, FDA would be willing to approve these antibiotics for disease prevention. FDA officials emphasized, however, that they do not want companies to relabel existing growth promotion antibiotics with new disease prevention claims with no substantive change in the way antibiotics are actually used on the farm. FDA officials told us they plan to issue additional guidance for antibiotic sponsors to outline a specific process for making changes in product labels. Furthermore, representatives of several producer and veterinary organizations we spoke with expressed concerns about FDA’s efforts to increase veterinary oversight because there is a shortage of large animal veterinarians. As we reported in February 2009, there is a growing shortage of veterinarians nationwide, particularly of veterinarians who care for food animals, serve in rural communities, and have training in public health. Additionally, representatives of veterinary organizations said that the paperwork requirements under VFDs are onerous. In particular, each VFD requires the veterinarian to deliver a copy directly to the feed producer, and there are not yet many systems for electronic distribution. In addition, representatives of several public health organizations expressed concern that FDA’s strategy will not change how antibiotics are used for two reasons. First, because FDA is depending on voluntary cooperation to remove growth promotion uses from antibiotic labels, there is no guarantee that pharmaceutical companies will voluntarily agree to relabel their antibiotics. 
To underline the seriousness of their concerns, in May 2011, several public health organizations filed a suit to force FDA to withdraw its approval for the growth promotion uses of two antibiotic classes (penicillins and tetracyclines). Second, representatives of some public health organizations noted that several medically important antibiotics (six out of eight) currently approved by FDA for growth promotion or feed efficiency are already approved for disease prevention uses in some species (see table 4), which could negate the impact of FDA’s strategy. Because disease prevention dosages often overlap with growth promotion dosages, representatives of one of these organizations said that food animal producers might simply alter the purpose for which the antibiotics are used without altering their behavior on the farm. One veterinarian told us that if FDA withdrew an antibiotic’s approval for growth promotion, he could continue to give the antibiotic to the animals under his care at higher doses for prevention of a disease commonly found in this species. The veterinarian stated that there is an incentive to do so because using an animal antibiotic can help the producers he serves use less feed, resulting in cost savings. For example, the in-feed antibiotic may cost approximately $1 per ton of feed, but it can save $2 to $3 per ton of feed, making it an effective choice for the producer. Although representatives of some producer and public health organizations have raised doubts about the effectiveness of FDA’s strategy, FDA does not have a plan to collect the data necessary to understand the purpose for which antibiotics are being used or have a plan to measure the effectiveness of its strategy to encourage more judicious use of antibiotics in animals. FDA officials told us the agency will consider this strategy to be successful when all the growth promotion uses of medically important antibiotics are phased out. 
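The cost incentive the veterinarian described can be worked through with the report's approximate figures. The 500-ton annual feed volume below is a hypothetical quantity added for illustration; only the per-ton cost and savings figures come from the example above.

```python
# Approximate figures from the example above: an in-feed antibiotic costs
# about $1 per ton of feed but saves $2 to $3 per ton through improved
# feed efficiency.
cost_per_ton = 1.00
savings_low, savings_high = 2.00, 3.00

# Net savings per ton of feed after paying for the antibiotic.
net_low = savings_low - cost_per_ton    # $1 per ton
net_high = savings_high - cost_per_ton  # $2 per ton

# Hypothetical producer using 500 tons of feed per year (assumed figure).
tons_per_year = 500
print(f"${net_low * tons_per_year:,.0f} to ${net_high * tons_per_year:,.0f} per year")
# prints "$500 to $1,000 per year"
```

Even at these small per-ton margins, the savings scale with feed volume, which illustrates why a producer might shift an antibiotic's stated purpose from growth promotion to disease prevention rather than reduce use.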
FDA officials were unable to provide a timeline for phasing out growth promotion uses, though they identified several next steps FDA intends to take, such as finalizing the guidance document describing its voluntary strategy and issuing additional guidance on its implementation, as well as proceeding with the VFD rulemaking process. However, FDA officials stated that the agency had no further plans to measure its progress. In addition, FDA will still allow medically important antibiotics to be used for disease prevention. However, because agency data on sales of antibiotics used in food animals do not include the purpose for which the antibiotics are used, it will be difficult for FDA to evaluate whether its strategy has increased the judicious use of antibiotics or simply encouraged a shift in the purpose of use—for instance, from growth promotion to disease prevention—without lessening use. FDA officials told us the agency is exploring approaches for obtaining additional information related to antimicrobial drug use to enhance the antibiotic sales data that are currently reported to FDA as required by ADUFA, but did not provide a timeline for these efforts. USDA and HHS agencies have taken some steps to research alternatives to current antibiotic use practices and educate producers and veterinarians on appropriate use of antibiotics, but the extent of these steps is unclear because neither USDA nor HHS has assessed the progress toward fulfilling the related action items in the 2001 interagency plan. An action item in the 2001 interagency plan states that federal agencies will promote the development of alternatives to current antibiotic use, including through research. According to the 2001 interagency plan, such alternatives could include researching vaccines and management practices that prevent illnesses or reduce the need for antibiotic use. 
However, USDA has not tracked its activities in this area, and neither USDA nor HHS has determined progress made toward this action item. Since 2001, USDA agencies have undertaken some research related to developing alternatives. However, according to agency officials, they are unable to provide a complete list of these activities because USDA’s research database is not set up to track research at this level of detail. Instead, research is categorized within the larger food safety research portfolio. In addition, the agencies did not report any activities under this action item in the annual reports published by the interagency task force. Based on documents provided by USDA and research activities that USDA reported to the interagency task force under other research action items, we identified 22 projects the department funded since 2001 related to alternatives to current antibiotic use practices, with total funding of at least $10 million (see app. V). In addition, ARS officials emphasized that the majority of research performed at ARS related to improving agricultural practices can result in reduced antibiotic needs by producers. Officials from both NIFA and ARS said that they had not assessed the extent to which the research conducted helped achieve the action item in the 2001 interagency plan. Indeed, conducting such an assessment would be difficult without a complete list of relevant research activities. NIFA officials told us that additional funding and resources would be needed to conduct such an assessment, but they did not provide more specific details on what additional resources would be needed to do so. Although an assessment of research activities on alternatives has not been conducted, ARS officials nevertheless said the agency plans to conduct more research on alternatives to antibiotics in the next 5 years. Similar to USDA agencies, HHS agencies have conducted some research on alternatives. 
Specifically, from 2001 through 2005, CDC and FDA sponsored at least five research grants that included funding to research alternatives and reduce resistant bacteria in food animals (see app. VI). NIH has conducted research related to antibiotic resistance that may have applications in both humans and in animals, but agency officials told us that NIH considers human health issues its research priority. Like USDA agencies, HHS agencies did not report any research activities under the action item related to antibiotic alternatives to the interagency task force. No HHS agency has sponsored any such research activities since 2005. HHS officials told us this is because USDA may be the most appropriate lead agency for undertaking alternatives research related to food animals. USDA officials acknowledged that they have a role in researching alternatives to antibiotics, although they said that it is also important for HHS to be involved since FDA would likely be the regulatory agency to approve any products resulting from such research. CDC and FDA officials told us that their agencies have not performed any assessments to determine whether their research activities have helped the agency to fulfill this action item in the 2001 interagency plan. Representatives of the national veterinary, producer, public health, and animal pharmaceutical organizations that we spoke with told us that greater federal efforts are needed to research alternatives to current antibiotic use in animals. In addition, representatives from most of the veterinary and several public health organizations we spoke with said that the federal government should make greater efforts to coordinate with the food animal industry about researching alternatives to current antibiotic use. 
Specifically, most representatives from the producer and veterinary organizations emphasized a need for the federal government to provide funding and other resources to the food animal industry for research projects examining alternatives. For example, representatives from one veterinary organization told us that several national producer and veterinary organizations have goals of using prevention as an alternative to antibiotic use and said that the federal government could help by conducting research on preventive measures such as vaccine development. The draft 2010 interagency plan includes an action item reiterating that agencies will conduct research on alternatives to current antibiotic use practices, yet USDA and HHS agencies have not evaluated their previous research to determine the extent to which the action item in the 2001 interagency plan was achieved. Without an assessment of past research efforts, agencies may be limited in their ability to identify gaps where additional research is needed. In addition, the draft 2010 interagency plan does not identify steps agencies intend to take to conduct research on alternatives or time frames for taking these steps. In contrast, other action items listed in the draft 2010 interagency plan under the surveillance, prevention and control, and product development focus areas include specific implementation steps illustrating how agencies plan to achieve them. CDC officials told us that the interagency task force agreed not to identify implementation steps until after the final version of the 2010 interagency plan is published, at which time the task force will publish its plans for updating the 2010 interagency plan. In addition, ARS officials said that the interagency task force asked agencies to identify implementation steps that could be accomplished within the next 2 years, and USDA was unable to determine such steps for alternatives research.
We have previously reported that evaluating performance allows organizations to track the progress they are making toward their goals, and it gives managers critical information on which to base decisions for improving their programs. Tracking progress and making sound decisions are particularly important in light of the fiscal pressures currently facing the federal government. An action item in the 2001 interagency plan states that federal agencies will educate producers and veterinarians about appropriate antibiotic use. Programs at both HHS and USDA have sought to educate users about appropriate antibiotic use, but the impact of these efforts has not been assessed. In addition, agricultural extension agents and national associations also advise producers on appropriate antibiotic use. The draft 2010 interagency plan no longer has an explicit action item related to appropriate antibiotic use education. There is currently one education activity on appropriate antibiotic use, and after the completion of this effort, there are no plans to develop new education activities. HHS agencies sponsored six programs to educate producers and veterinarians about appropriate antibiotic use, the last of which ended in 2010 (see table 5). For example, from 2001 through 2010, CDC funded “Get Smart: Know When Antibiotics Work on the Farm”—also called Get Smart on the Farm—an outreach program that sponsored state-based producer education activities to promote appropriate antibiotic use. CDC officials told us that this was one of the first major education efforts to bring together stakeholders from the public health, veterinary, and agricultural communities to discuss the issue of appropriate antibiotic use. Through the Get Smart on the Farm program, CDC hosted three national animal health conferences designed to foster partnerships between these stakeholders. These conferences included discussions of antibiotic use and resistance in animals.
Get Smart on the Farm also funded the development of an online curriculum for veterinary students on antibiotic resistance and appropriate use, which became available in December 2010. CDC officials told us that the agency is planning to take an advisory rather than a leadership role in future appropriate use education efforts because they believe that FDA and USDA are the appropriate agencies for leading such efforts. CDC reported that it spent approximately $1.7 million on Get Smart on the Farm activities from 2003 through 2010. Both CDC and FDA officials said that the impact of their education activities had not been assessed. HHS officials also said that they do not have plans to develop new activities in the future. USDA agencies also sponsored education programs addressing appropriate antibiotic use in animals (see table 6). For example, from 2002 through 2005, USDA agencies worked with FDA to fund university-based programs that sought to educate producers on animal health issues, including antibiotic resistance. From 2006 through 2010, USDA agencies did not report any activities under this action item in the annual reports published by the interagency task force. However, officials noted that education on appropriate antibiotic use remains a priority and that during these years USDA gave presentations at scientific meetings and universities on this topic. USDA officials said the impact of these education efforts was not assessed. USDA’s one ongoing education activity on appropriate antibiotic use is an APHIS-funded training module on antibiotic resistance currently under development at a cost of $70,400. According to agency officials, the module will be similar to CDC’s online curriculum for veterinary students. It will be 1 of 19 continuing education modules for the National Veterinary Accreditation Program, which is designed to train veterinarians to assist the federal government with animal health and regulatory services.
The program requires participating veterinarians to periodically renew their accreditations by completing continuing education modules online or at conferences, and participants may elect which APHIS-approved modules to take in order to fulfill their requirements. Since the APHIS module will be similar to CDC’s online curriculum for veterinary students, APHIS officials told us that they will look at CDC’s content to determine whether to incorporate it into the APHIS-funded module. APHIS officials also told us that they sought out representatives from NIFA, FDA, CDC, the American Veterinary Medical Association, and academic institutions to review the module’s content, and expect the training to be available for veterinarians by June 2012. APHIS officials told us that the module on appropriate antibiotic use is not within the National Veterinary Accreditation Program’s traditional scope of work. More specifically, APHIS officials are unsure how they would measure the impact of the module because, unlike the other modules in the accreditation program, it is not based on any APHIS regulatory information that can be tracked. That said, officials told us that providing antibiotic use education is beneficial and will increase practitioners’ awareness in this area. After the completion of the antibiotic use module, USDA officials said they have no plans to develop new education activities. Additional USDA-funded education activities on appropriate antibiotic use may be conducted through local extension programs. Each U.S. state and territory has a Cooperative Extension office at its land-grant university, as well as a network of local or regional extension offices staffed by one or more experts who provide research-based information to agricultural producers, small business owners, youth, consumers, and others in local communities. NIFA provides federal funding to the extension system, though states and counties also contribute to the program.
NIFA provides program leadership and seeks to help the system identify and address current agriculture-related issues. Two producers told us that extension programs are a helpful source of information about animal health issues. For example, they said that extension agents are effective at disseminating information, though their impact may be difficult to measure. In addition, they told us that when producers are successful with a preventive practice suggested by an extension agent, neighboring producers may notice and also make similar modifications, creating a multiplier effect. Two current extension agents also told us they have received inquiries from producers about antibiotic use, although these questions are not necessarily framed in terms of appropriate use. NIFA officials told us that federally funded extension institutions submit an annual plan of work and an annual accomplishment report that provides a general overview of their yearly planned projects based on USDA priorities, but these plans are broad in nature and often do not provide details that allow NIFA to track efforts related to antibiotic use. Representatives from most of the producer and veterinary organizations that we spoke with said that industry-led efforts are responsible for most of the progress made in educating producers and veterinarians in the last 10 years. For example, the National Cattlemen’s Beef Association, National Milk Producers’ Federation, and National Pork Board have each developed Quality Assurance programs that advise producers on their views of proper antibiotic use during production. Representatives from most of the organizations we spoke with said that the federal government should have some type of role in educating producers and veterinarians on appropriate antibiotic use, but many—including representatives from all of the producer organizations—said that they believe that these activities should be done in collaboration with industry.
Representatives from most of the veterinary and producer organizations also said the federal government could improve collaboration with industry members and groups, and representatives from one veterinary organization pointed to previous federal efforts to collect and disseminate information about avian influenza as a collaborative education model that federal agencies could follow for appropriate use messages. Representatives from this organization noted that such efforts included the federal government and industry stakeholders working together and disseminating education messages to the public. They also suggested that similar efforts between the federal government, producers, and researchers could be used to educate the industry about appropriate use of antibiotics in food animals. Since 1995, the EU and Denmark have taken a variety of actions to regulate antibiotic use in food animals and mitigate the risk such use may pose to humans. Denmark is part of the EU and complies with EU policies but has also taken some additional actions independently. Some of the experiences in the EU and Denmark may be useful for U.S. government officials and producers, though U.S. producers face different animal health challenges and regulatory requirements than European producers. From 1995 to 2006, both the EU and Danish governments took a variety of actions to regulate antibiotic use in food animals (see fig. 2). In 1995, Denmark banned the use of avoparcin for growth promotion in food animals, and an EU-wide ban followed in 1997. Avoparcin is similar to the human medicine vancomycin, and some studies suggested that avoparcin use in food animals could be contributing to vancomycin-resistant bacteria in humans. Both Denmark and the EU followed up with bans on several additional growth promotion antibiotics, culminating in a total ban on growth promotion antibiotics in 2000 and 2006, respectively.
Government and industry officials we spoke with in Denmark emphasized that their bans on growth promotion antibiotics began as voluntary industry efforts that were later implemented as regulations by the government. EU officials and both industry and government officials from Denmark said the most important factor in the development of their policies was sustained consumer interest in the issue of antibiotic use in food animals and concerns that such use could cause resistance affecting humans. In the face of these concerns, officials explained that EU policies were developed based in part on the precautionary principle, which states that where there are threats of serious or irreversible damage, lack of scientific certainty should not postpone cost-effective measures to reduce risks to humans. Danish industry officials added that, as new data and knowledge arise, it is appropriate to reevaluate the measures taken to reduce risks. We have previously reported that the EU made other food safety decisions based on the precautionary principle, including decisions about inspecting imports of live animals and animal products, such as meat, milk, and fish. According to Danish government officials, Denmark has implemented two additional types of regulations regarding antibiotic use in food animals. First, Denmark has increased government oversight of veterinarians and producers. For example, in 1995, Denmark limited the amount that veterinarians could profit from sales of antibiotics. Then, in 2005, Denmark implemented policies requiring biannual audits of veterinarians who serve the swine industry, which Danish government officials said uses about 80 percent of all food animal antibiotics in Denmark. Government officials said these audits increase veterinarians’ awareness of their antibiotic prescription patterns. In 2007, the audits were expanded to cover all food animal veterinarians.
Most recently, in 2010, Denmark developed a new system—called the yellow card initiative—which sets regulatory limits on antibiotic use based on the size of swine farms. Swine farms exceeding their regulatory limit are subject to increased monitoring by government officials, for which the farms must pay. Danish government officials explained that the yellow card initiative is different from their past oversight efforts in that it targets producers rather than veterinarians. Second, according to Danish government officials, Denmark developed a policy to reduce veterinary use of antibiotics classified as critically important to human medicine by WHO, which, like FDA, has a ranking of such antibiotics. For example, in 2002 Denmark limited veterinary prescriptions of fluoroquinolones to cases in which testing showed that no other antibiotic would be effective at treating the disease. In addition, veterinarians prescribing fluoroquinolones to food animals were required to notify government regulatory officials. U.S. producers face different animal health challenges and regulatory requirements than producers in the EU and Denmark, making it difficult to determine how effectively similar policies could be implemented in the United States. Specifically, industry officials in Denmark explained that several diseases that affect producers in the United States are no longer active in Denmark. For example, broiler chicken producers in Denmark spent many years improving their biosecurity and successfully eradicated Salmonella, which can cause disease both in broiler chickens and in humans, and Danish cattle producers do not have to worry about brucellosis, which has not been seen in Denmark in decades. Similarly, the regulatory environment in the EU differs from that in the United States. For example, EU countries develop and implement policies using the precautionary principle.
In addition, the EU and Denmark both require prescriptions for the use of most antibiotics in animals, but the United States requires them only in certain limited circumstances. Officials from HHS and USDA said they are aware of other countries’ efforts to regulate antibiotic use in food animals and participate in international conferences and meetings addressing these issues. Based on the experiences in the EU and Denmark, there are several lessons that may be useful for U.S. government officials and producers. According to Danish government officials, Denmark’s antibiotic use data are detailed enough to allow the country to track trends in use and monitor the effects of its policies. Specifically, data show that antibiotic use in food animals declined from 1994 to 1999, but then increased modestly from 1999 to 2009, while remaining below 1994 levels (see fig. 3). The decline coincides with the start of the changes to government policies on growth promotion and veterinarian sales profits. Danish industry and government officials noted that some of the increase in antibiotic use over the last decade may be in response to disease outbreaks on swine farms. Danish government officials also mentioned, however, that the government instituted the 2010 yellow card initiative to reverse the recent increase in antibiotic use. According to these officials, antibiotic use in pig production fell 25 percent from June 2010 to June 2011 in response to the implementation of the yellow card initiative. According to Danish officials, Danish data on antibiotic resistance in food animals and retail meat show reductions in resistance after policy changes in most instances. Specifically, Danish government officials have tracked resistance to antibiotics banned for growth promotion among Enterococcus bacteria since the mid-1990s. Enterococcus are commonly found in the intestinal tract of humans and food animals, making them relatively easy to track over time, though they rarely cause disease.
Officials said that the percentage of Enterococcus from food animals that are resistant to antibiotics banned for growth promotion has decreased since the bans were implemented. Officials also mentioned declines in resistance among Campylobacter bacteria (which can cause foodborne illness in humans) from food animals and retail meat. For example, officials said that resistance to the critically important class of drugs called macrolides has decreased in Campylobacter bacteria from swine. However, Danish industry and government officials cautioned that the association between antibiotic use and resistance is not straightforward. For example, despite restrictions on veterinary use of the critically important fluoroquinolone antibiotics since 2002, Danish resistance data have not shown a decrease in fluoroquinolone-resistant bacteria from food animals. Danish industry officials explained that restrictions on fluoroquinolone use in swine were implemented before fluoroquinolone resistance became pronounced in Denmark and that current rates of fluoroquinolone-resistant Salmonella in Danish pork are lower than for pork imported into Denmark. Danish officials told us that Denmark’s resistance data have not shown a decrease in antibiotic resistance in humans after implementation of the various Danish policies, except for a few limited examples. Specifically, officials said that the prevalence of vancomycin-resistant Enterococcus faecium from humans has decreased since avoparcin was banned for use in animals in 1995. Resistance has been tracked for other types of bacteria and antibiotics, but similar declines have not been seen. 
Danish government officials explained that, in addition to antibiotic use in food animals, there are other important contributors to antibiotic resistance in humans, including human antibiotic use, consumption of imported meat (which may contain more antibiotic-resistant bacteria than Danish meat), and acquisition of resistant bacteria while traveling. Danish officials told us their data collection systems are not designed to gather information about whether human deaths from antibiotic resistance have fallen after the implementation of risk management policies. Officials mentioned that one challenge to this type of data collection is that “antibiotic resistance” is not listed on death certificates as the cause of death; generally, as in the United States, the cause of death would be listed as multiple organ failure, making it difficult to identify deaths caused by antibiotic-resistant infections. Denmark has also tracked the prevalence of bacteria that cause human foodborne illness on retail meat products, according to Danish industry officials. Producer organizations in the United States have expressed concerns that reductions in antibiotic use may lead to an increase in foodborne pathogens on meat, but industry officials in Denmark said that their data show no increase in the rates of these bacteria on meat products. These officials said, however, that several changes to management practices in slaughter plants may have helped ensure rates of foodborne pathogens on meat remained low. For example, these officials said Danish slaughter plants now use a flash-freezing technique—called blast chilling—that freezes the outer layer of an animal carcass, reducing the number of bacteria on the meat and even killing most Campylobacter. Danish producers and veterinary officials noted that the policies were easier for poultry producers to implement than for swine producers.
Poultry producers had made changes to their production practices throughout the 1990s to eradicate Salmonella from their flocks, and these practices also helped maintain flock health without routine antibiotic use. In contrast, swine producers faced difficulties weaning piglets without antibiotics, reporting both an increase in mortality and a reduction in daily weight gain shortly after the ban. However, Danish industry officials explained that swine producers implemented multiple changes to production practices that enabled them to comply with the ban. These production practices included improved genetic selection, later weaning, improved diet, increased space per piglet, and improved flooring. Industry officials explained that such changes in production practices did have real costs to the industry. For example, weaning piglets later increases the time between litters and reduces the overall number of piglets produced annually. Despite these additional costs, however, Danish industry officials expressed pride in their ability to produce high-quality meat products while ensuring that they do not contribute unduly to the problem of antibiotic resistance. EU officials told us that they rely on member states to collect data on antibiotic use. As of September 2010, 10 countries in Europe collected data on sales of antibiotics used in food animals, and 5 of these countries collected species-specific data. In addition, 12 other countries have recently started or planned to begin collecting antibiotic sales data. Countries that currently collect use data do so using different methods, which complicates comparisons across countries. EU officials identified several challenges to collecting information about antibiotic use throughout the EU.
Specifically, identifying sources of detailed information about antibiotic use is difficult because EU countries have different distribution systems for veterinary medicines and therefore collect this information in varying ways. For example, in Denmark, such data are collected from veterinary pharmacies, but not all EU countries require animal antibiotics to be dispensed through pharmacies. In addition, EU countries vary in the extent to which veterinary prescriptions are monitored electronically, making it difficult to track prescriptions consistently throughout the EU. Despite these challenges, EU officials emphasized the importance of gathering data on antibiotic use in food animals for two reasons. First, they noted that tracking antibiotic use data allows governments to evaluate the effects of their risk management policies. Second, they mentioned that data on both antibiotic use and antibiotic resistance are needed in order to fully understand how use in animals is related to resistance in humans. Given the importance of collecting data, the EU has begun a pilot project to collect comparable antibiotic use data throughout the EU. The first phase will use a standard instrument to collect, harmonize, and analyze data on sales of veterinary antibiotics from countries that agree to participate. EU officials said that a report on sales of veterinary medicines, covering nine European countries, will be available in September 2011. EU officials said that subsequent phases will include more detailed data about species and purpose of use. They emphasized the importance of going beyond bulk sales data, noting that it is necessary to report antibiotic use in the context of the number of animals being treated or the pounds of meat produced, since doing so allows for comparisons between EU countries as well as comparisons to human antibiotic use.
EU officials said that the Danish system uses this type of data collection, and that WHO is working on developing guidance for how to create such data collection systems. For resistance data, EU officials told us that the EU has been collecting information from numerous member countries and working to improve the comparability of the data between countries. In 2006, the EU produced its first report for data gathered in 2004, collating information from 26 individual countries. However, EU officials said that resistance data cannot currently be compared across countries or aggregated to provide conclusions about the entire EU, though officials are in the process of developing a report that will provide EU-wide information. Instead, officials pointed to trends identified in particular member countries. For example, officials noted a decrease in resistance in Enterococcus from broiler chickens after avoparcin was banned for growth promotion uses in Germany, the Netherlands, and Italy. Officials also mentioned similar declines in resistance among Enterococcus from healthy humans in Germany and the Netherlands. Moreover, in addition to its data collection efforts on antibiotic use in food animals and antibiotic resistance in humans, meat, and food animals, the EU conducts periodic baseline surveys to determine the prevalence of particular drug-resistant bacteria throughout all countries in the EU. EU officials said these baseline studies provide information that is comparable across countries. EU officials explained that EU countries are required to participate in these studies, which usually last 1 year and are used to set reduction targets for regulatory programs or to develop risk management measures. For example, in 2008 the EU conducted a prevalence study of MRSA in swine herds.
It determined that the prevalence varied dramatically between member countries—it was found in more than 50 percent of swine herds in Spain, but in eight other EU countries there were no detections. According to Danish government and industry officials we interviewed, the Danish government does not conduct research on alternatives to antibiotic use. Both industry and government officials agreed that it should be government’s role to set regulatory policy and industry’s role to conduct research on how to meet regulatory goals. The Danish Agriculture and Food Council—an industry organization representing producers of a variety of meat and agricultural products—has funded several studies examining alternatives to growth promotion antibiotics. For example, one such study examined the economics of five types of products that had the potential to improve feed efficiency in swine without leading to antibiotic resistance and found that few products were both economical for farmers and successful in improving feed efficiency. EU officials also reported that, at the EU level, government does not conduct a significant amount of research related to alternatives to antibiotics. They noted, however, that the EU has been trying to incentivize private industry to develop alternatives in other ways. For example, EU officials have tried to spur pharmaceutical companies to develop products to improve feed efficiency and growth by lengthening patents on such products. EU officials said that this results in a reduction in competition from generic manufacturers and has led to more than 300 applications for new feed additive products. Antibiotic resistance is a growing public health problem worldwide, and any use of antibiotics—in humans or animals—can lead to the development of resistance. In 2001, USDA and HHS agencies took steps to coordinate their actions on surveillance, prevention and control of resistance, research, and product development through the 2001 interagency plan.
The surveillance focus area of this plan includes action items related to improving efforts to monitor both antibiotic use in food animals and antibiotic resistance in food animals and retail meat. According to WHO, populations sampled for surveillance purposes should normally be representative of the total population—in this case, food animals and retail meat in the United States. Since 2001, however, USDA and HHS agencies have made limited progress in improving data collection on antibiotic use and resistance. For example, although FDA has a new effort to collect data on antibiotics sold for use in food animals, these data lack crucial details, such as the species in which the antibiotics are used and the purpose for their use. The 2001 interagency plan states such data are essential for interpreting trends and variations in rates of resistance, improving the understanding of the relationship between antibiotic use and resistance, and identifying interventions to prevent and control resistance. In addition, two USDA agencies collect data on antibiotic use from food animal producers, but data from these surveys provide only a snapshot of antibiotic use practices and cannot be used to examine trends. Collecting data on antibiotic use in food animals can be challenging and costly, but without an approach to collecting more detailed data, USDA and HHS cannot track the effectiveness of policies they undertake to curb resistance. Indeed, FDA currently does not have a plan to measure the effectiveness of its voluntary strategy to reduce food animal use of antibiotics that are medically important to humans. Although there are challenges to collecting detailed data on antibiotic use, efforts are under way in the EU to begin collecting such data.
For data on antibiotic resistance, HHS and USDA agencies have leveraged existing programs to collect samples of bacteria, but the resulting data are not representative of antibiotic resistance in food animals and retail meat throughout the United States. According to the 2001 interagency plan, antibiotic resistance data will allow agencies to detect resistance trends and improve their understanding of the relationship between use and resistance. FDA is aware of the NARMS sampling limitations and, in its draft 2011-2015 NARMS Strategic Plan, has included a strategic goal of making NARMS sampling more representative and applicable to trend analysis. FDA officials mentioned several ways that NARMS sampling could be improved, such as discontinuing slaughter plant sampling in favor of an on-farm sampling program and increasing the number of states participating in the retail meat program. USDA and HHS have also undertaken some research related to developing alternatives to current antibiotic use practices. However, the extent of these research efforts is unclear, as neither USDA nor HHS has assessed its research efforts to determine the progress made toward the related action item in the 2001 interagency plan. In addition, officials from most of the veterinary and several public health organizations we spoke with said that the federal government should make greater efforts to coordinate this research with the food animal industry. Without an assessment of past research efforts and coordination with industry, USDA and HHS may be limited in their ability to identify gaps where additional research is needed. Moreover, USDA and HHS managers may not have the critical information they need to make decisions about future research efforts. Focus on tracking progress and making sound decisions about future research is particularly important in light of the fiscal pressures currently facing the federal government.
The draft 2010 interagency plan does include an action item on researching alternatives, but it does not identify the steps the agencies intend to take to do so. Similarly, USDA and HHS had sought to educate producers and veterinarians about appropriate antibiotic use but did not assess their efforts. The one remaining education activity is a $70,400 USDA training module on antibiotic resistance for veterinarians, which will be completed in 2012; after that, there are no plans to develop new education activities. We are making the following three recommendations:

• To track the effectiveness of policies to curb antibiotic resistance, including FDA's voluntary strategy designed to reduce antibiotic use in food animals, and to address action items in the surveillance focus area of the 2001 interagency plan, we recommend that the Secretaries of Agriculture and Health and Human Services direct agencies to, consistent with their existing authorities, (1) identify potential approaches for collecting detailed data on antibiotic use in food animals, including the species in which antibiotics are used and the purpose for their use, as well as the costs, time frames, and potential trade-offs associated with each approach; (2) collaborate with industry to select the best approach; (3) seek any resources necessary to implement the approach; and (4) use the data to assess the effectiveness of policies to curb antibiotic resistance.

• To enhance surveillance of antibiotic-resistant bacteria in food animals, we recommend that the Secretaries of Agriculture and Health and Human Services direct agencies to, consistent with their existing authorities, modify NARMS sampling to make the data more representative of antibiotic resistance in food animals and retail meat throughout the United States.
• To better focus future federal research efforts on alternatives to current antibiotic use practices, we recommend that the Secretaries of Agriculture and Health and Human Services direct agencies to (1) assess previous research efforts on alternatives and identify gaps where additional research is needed, in collaboration with the animal production industry, and (2) specify steps in the draft 2010 interagency plan that agencies will take to fill those gaps.

We provided the Departments of Agriculture and Health and Human Services with a draft of this report for review and comment. Both departments agreed with our recommendations and provided written comments on the draft, which are summarized below and appear in their entirety in appendixes VII and VIII, respectively, of this report. The departments also provided technical comments, which we incorporated as appropriate. In its comments, USDA agreed with our recommendations. In response to our recommendation on collecting antibiotic use data, USDA noted that the department has devised strategies to collect detailed information on antibiotic use in food animals, as documented in "A USDA Plan to Address Antimicrobial Resistance." Our report discusses many of the ongoing USDA activities described in the document, including NAHMS, ARMS, and NARMS. In commenting on our recommendation to collect more representative resistance data, USDA acknowledged that sampling for antibiotic-resistant bacteria in food animals is not currently conducted on a nationally representative population, but also stated that NARMS data can still be used to examine general trends. We continue to believe that the nonrandom sampling method used for food animals in NARMS results in data that are not representative of food animals across the country and cannot be used for trend analysis.
Moreover, as our report states, the NARMS program has prioritized modifying animal sampling to overcome its current biases, and both FDA and USDA have identified efforts that could be used to improve NARMS food animal sampling. In its letter, USDA identified several such efforts; we had included several of these in the draft report, and we modified the final version to include the remaining effort. In its comments, HHS also agreed with our recommendations, but stated that FDA has made substantial progress and taken an active and deliberative role in addressing the controversial and complex issue of antibiotic use in food animals. We acknowledge that FDA has taken many actions, most of which are discussed in the report. However, as our report states, since the 2001 interagency plan, USDA and HHS agencies have made limited progress in improving data collection on antibiotic use and resistance. Specifically, as we noted in our report, FDA’s data on sales of antibiotics for animal use do not include information on the species in which antibiotics are used or the purpose for their use, which, for example, prevents agencies from interpreting trends and variations in rates of resistance. Similarly, as our report states, data on antibiotic resistance from food animals are not representative and cannot be used for trend analysis—even though the 2001 interagency plan identified detecting resistance trends as an important part of monitoring for antibiotic resistance. In commenting on our recommendation regarding antibiotic use data collection, FDA recognized that having more detailed antibiotic use data would benefit its overall effort to assure the judicious use of antibiotics. FDA also noted that it is exploring potential approaches for obtaining more detailed information and that it plans to coordinate with USDA in that effort. We modified our report to include this information. 
In addition, regarding our findings on FDA’s resistance data from retail meat, FDA stated that it does not believe samples need to be statistically representative of the entire United States to serve as indicators of U.S. retail meat. We modified our report to better reflect FDA’s position, but as our report states, the FDA Science Advisory Board’s 2007 review of data on antibiotic resistance in retail meat found that the lack of a national sampling strategy limits a broader interpretation of NARMS data. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, Secretaries of Agriculture and Health and Human Services, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or shamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. The objectives of our review were to determine (1) the extent to which federal agencies have collected data on antibiotic use and resistance in food animals; (2) the actions the Food and Drug Administration (FDA) has taken to mitigate the risk of antibiotic resistance in humans as a result of antibiotic use in food animals; (3) the extent to which federal agencies have conducted research on alternatives to current antibiotic use practices and educated producers and veterinarians about appropriate antibiotic use; and (4) what actions the European Union (EU) and an EU member country, Denmark, have taken to regulate antibiotic use in food animals and what lessons, if any, have been learned. 
To address the first three objectives of our study, we reviewed federal laws, regulations, policies, and guidance; federal plans about antibiotic resistance; agency documents related to data collection efforts on antibiotic use and resistance; and documents from international organizations and other countries related to surveillance of animal antibiotic use and resistance. In particular, we reviewed the Food, Conservation, and Energy Act of 2008 (2008 Farm Bill), as well as laws related to FDA's oversight of animal antibiotics, including the Federal Food, Drug, and Cosmetic Act; the Animal Drug Availability Act of 1996; and the Animal Drug User Fee Act of 2003. We also reviewed regulations and guidance implementing FDA's authorities, including Evaluating the Safety of Antimicrobial New Animal Drugs with Regard to Their Microbiological Effects on Bacteria of Human Health Concern (Guidance for Industry #152) and The Judicious Use of Medically Important Antimicrobial Drugs in Food-Producing Animals (draft Guidance for Industry #209). In addition, we reviewed the 2001 Interagency Public Health Action Plan to Combat Antimicrobial Resistance, the draft 2010 Interagency Public Health Action Plan to Combat Antimicrobial Resistance, and agencies' annual updates of activities they completed related to these plans. We also reviewed agency documents related to FDA's sales data, the National Animal Health Monitoring System (NAHMS), the Agricultural Resource Management Survey (ARMS), the National Antimicrobial Resistance Monitoring System (NARMS), and the now-defunct pilot Collaboration on Animal Health and Food Safety Epidemiology (CAHFSE). Internationally, we reviewed documents from surveillance systems in Canada and Denmark, including reports about the Canadian Integrated Program on Antimicrobial Resistance Surveillance (CIPARS) and the Danish Antimicrobial Resistance Monitoring and Research Programme (DANMAP).
In addition, we reviewed the World Health Organization's guidance on developing surveillance systems for antibiotic resistance related to food animal antibiotic use. To discuss topics related to the first three objectives, we also conducted interviews with officials at the Department of Health and Human Services' (HHS) Centers for Disease Control and Prevention (CDC), FDA, and National Institutes of Health (NIH), and with U.S. Department of Agriculture (USDA) officials at the Animal and Plant Health Inspection Service (APHIS), the Agricultural Research Service (ARS), the Economic Research Service (ERS), the Food Safety and Inspection Service (FSIS), and the National Institute of Food and Agriculture (NIFA). We also interviewed an official representing CIPARS to discuss the program's efforts to monitor antibiotic use and resistance in animals across Canada, the challenges it faces, and how the program may relate to current and future data collection efforts in the United States. We also conducted site visits with conventional and alternative (either organic or antibiotic-free) producers of poultry, cattle, swine, and dairy products in Delaware, Georgia, Iowa, Kansas, Minnesota, and Wisconsin to obtain a better understanding of production practices and the types of antibiotic use data available at the farm level. During these site visits, we spoke with producers, veterinarians, academic researchers, and extension agents involved with food animal production. We selected these commodity groups because they represent the top four animal products in the United States. We selected our site visit locations based on (1) the accessibility of production facilities of different sizes (we visited both small and large facilities), (2) the inclusion of states that are among the largest producers of each commodity in our scope of study, and (3) proximity to Washington, D.C., and the USDA NARMS laboratory in Georgia.
These sites were selected using a nonprobability sample, and the findings from those visits cannot be generalized to other producers. Based on issues identified by reviewing documents and interviewing federal, state, and local officials, we developed a questionnaire on the use of antibiotics in animals and resistance. The questionnaire gathered organizations' perspectives on a range of topics, including the extent to which federal data collection programs support the action items identified by federal agencies in the 2001 interagency plan; what actions, if any, FDA or other federal agencies should take to implement the two principles FDA outlined in draft Guidance for Industry #209, and how such implementation may affect antibiotic use in food animals; and what role, if any, the federal government should have in conducting research on alternatives to current antibiotic use practices and educating producers and veterinarians. We conducted a pretest of the questionnaire and made appropriate changes based on the pretest. In addition to developing the questionnaire, we identified 11 organizations involved with the issue of antibiotic use in food animals and antibiotic resistance. We selected these organizations for their expertise in topics surrounding antibiotic use in animals and resistance, based on whether they had been actively involved in this issue within the past 5 years (including through testimonies to Congress, in-depth public discussions, or published research), and to provide representation across producer organizations representing the major commodities, as well as pharmaceutical and public health organizations. The selected organizations are a nonprobability sample, and their responses are not generalizable.
The selected organizations were: National Cattlemen's Beef Association, National Milk Producers Federation, National Pork Producers Council, National Chicken Council, Animal Health Institute, Alliance for the Prudent Use of Antibiotics, Center for Science in the Public Interest, Infectious Diseases Society of America, Keep Antibiotics Working, Pew Campaign on Human Health and Industrial Farming, and Union of Concerned Scientists. We administered the questionnaires through structured interviews with representatives from the 11 national organizations, who spoke on behalf of their members, either via phone or in person. All 11 organizations agreed to participate in these structured interviews. To identify trends in responses, we qualitatively analyzed the open-ended responses from the interviews to provide insight into organizations' views on the issues identified in the questionnaire. We also conducted structured interviews with representatives from five national veterinary organizations, who spoke on behalf of their members, to discuss their views on federal research efforts on alternatives and federal efforts to educate producers and veterinarians about appropriate use. The questionnaire covered a range of topics, including federal progress in both of these areas since 2001 and actions the federal government can take to improve future efforts in these areas. We contacted five veterinary organizations to request their participation, selecting these organizations to include the largest U.S. veterinary organization—the American Veterinary Medical Association—as well as a veterinary organization representing each of the major commodities in our review—American Association of Avian Pathologists, American Association of Bovine Practitioners, American Association of Swine Veterinarians, and the Academy of Veterinary Consultants.
We distributed the questionnaire to the five organizations electronically and administered the questionnaires through structured interviews with each organization via phone or in person. All five veterinary organizations agreed to participate in these structured interviews. To identify trends in responses, we qualitatively analyzed the open-ended responses from the interviews to provide insight into organizations' views on the issues identified in the questionnaire. Although we sought to include a variety of organizations with perspectives about antibiotic use and resistance, the views of organizations consulted should not be considered to represent all perspectives about these issues and are not generalizable. To describe actions the EU and Denmark have taken to regulate antibiotic use in food animals and potential lessons that have been learned from these actions, we reviewed documents, spoke with EU and Danish government and industry officials, and visited producers. We selected the EU and Denmark because they implemented bans on growth promotion uses of antibiotics in 2006 and 2000, respectively, which allows for a review of the effects of these policies in the years since. In addition, Denmark's experience with regulating antibiotic use has been well-documented in government-collected data that provide insight into the effects of policy changes. For the EU, we reviewed documents describing EU Commission directives and regulations regarding antibiotic use in food animals, risk assessments related to antibiotic use in food animals, surveillance reports describing antibiotic resistance in the EU, and proposals for future data collection efforts on antibiotic use. In addition, we spoke with officials from the EU Directorates General for Health and Consumers, Agriculture and Rural Development, and Research and Innovation.
We also spoke with an official from the European Food Safety Authority regarding its surveillance reports describing antibiotic resistance in the EU. Finally, we interviewed the following organizations that interact with the EU on behalf of their members regarding animal antibiotic use: the Federation of Veterinarians of Europe, which represents veterinarians throughout the EU, and the International Federation for Animal Health, which represents pharmaceutical companies that manufacture animal health products. We did not independently verify statements of EU law. For Denmark, we reviewed documents describing Danish laws and regulations regarding animal antibiotic use and government regulation of veterinarians, surveillance reports describing antibiotic use and antibiotic resistance in Denmark, and published studies examining Denmark's experience with regulating antibiotic use. In addition, we spoke with officials at the Danish Veterinary and Food Administration and DANMAP. We also spoke with officials at the Danish Agriculture and Food Council, which represents producers in Denmark, to learn about how Danish policies have affected producers. Finally, we conducted site visits and interviewed Danish producers and veterinarians at a poultry and a swine facility in Denmark to learn about current methods of production and how these producers have implemented Danish policies. These sites were selected based on convenience, and the findings from those visits cannot be generalized to other producers. We did not independently verify statements of Danish law. We conducted this performance audit from August 2010 to September 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Some producers raise animals using alternative modes of production. One such alternative is organic production, for which USDA's National Organic Program (NOP) develops, implements, and administers national standards. To comply with NOP standards, organically produced animals cannot be treated with antibiotics. According to USDA, organic farming has become one of the fastest-growing segments of U.S. agriculture, and consumer demand for organically produced goods has shown double-digit growth for well over a decade, providing market incentives for U.S. farmers across a broad range of commodities. According to recent industry statistics, organic sales account for over 3 percent of total U.S. food sales. Fruits and vegetables account for about 37 percent of U.S. organic food sales, while dairy and food animals (including meat, fish, and poultry) account for about 16 and 3 percent, respectively, of U.S. organic food sales. According to the Organic Trade Association, transitioning from conventional to organic production can take several years, because producers must adopt certain management practices to qualify for organic certification. The NOP standards apply to animals used for meat, milk, eggs, and other animal products represented as organically produced. Some of the NOP livestock standards include the following:

• Animals for slaughter must be raised under organic management from the last third of gestation, or no later than the second day of life for poultry.

• Producers generally must provide a total feed ration composed of agricultural products, but they may also provide allowed vitamin and mineral supplements.

• Traditional livestock have transition periods for converting to organic.
For example, producers may convert an entire distinct dairy herd to organic production by providing 80 percent organically produced feed for 9 months, followed by 3 months of 100 percent organically produced feed. If the farm did not convert an entire distinct herd, new animals added must be raised using organic methods for at least 1 year before the milk can be sold as organic.

• Organically raised animals may not be given hormones to promote growth, or antibiotics for any reason.

• All organically raised animals must have access to the outdoors, including access to pasture for ruminants, such as cattle. They may be temporarily confined only for specified reasons, including reasons of health, safety, the animal's stage of production, or to protect soil or water quality.

• A USDA-approved certifier ensures that organic producers are following all of the rules necessary to meet NOP standards, which includes maintaining data that preserve the identity of all organically managed animals and edible and nonedible animal products produced on the operation.

One producer we visited told us that his farm began the transition from a conventional farm in 1995 and became a grass-fed beef and certified organic farm in 2006 (see fig. 4). This producer also said that the transition experience was economically challenging. Specifically, during this conversion the farm stopped bringing in outside animals and changed confinement and feed practices. Through such changes, this producer said that, overall, the animals are healthier and the farm has increased marketing opportunities, which he feels outweighs the costs. In addition to organic, there are other alternative modes of production. For example, FSIS has a "raised without antibiotics" production label for red meat and poultry. Before FSIS will approve such a label, producers must provide the agency with sufficient documentation that demonstrates animals were raised without antibiotics.
Other commonly approved FSIS poultry and meat production labels include “natural” and “free range,” though these labels do not limit the use of antibiotics (see fig. 5). Some conventional and alternative producers we visited told us that animals produced without antibiotics typically grow at slower rates and tend to weigh less at market, requiring producers to charge higher premiums to cover these additional production costs. Producers raising animals without antibiotics typically have to take greater preventative measures, such as changes in husbandry practices, in order to reduce chances of illness. These changes in husbandry practices may include providing hay bedding for newly birthed calves and mother cows, selecting and breeding animals with disease resistance, and allowing greater access outdoors and space per animal. When animals do become sick, alternative disease treatments depend on the animal and illness. For example, cows may be treated with sea salt and a patch for pink eye and splints for broken legs. Still, antibiotics may need to be used as a last resort and, in such cases, these animals are sold to the conventional market, creating an economic loss for the producer. Tables 7 and 8 provide examples of the data collected by the Food and Drug Administration as required by the Animal Drug User Fee Amendments of 2008 (ADUFA). The objectives of the Danish Integrated Antimicrobial Resistance Monitoring and Research Program (DANMAP) are to monitor the consumption of antibiotics for food animals and humans; monitor the occurrence of antibiotic resistance in bacteria from food animals, food of animal origin, and humans; study associations between antibiotic use and resistance; and identify routes of transmission and areas for further research studies. Table 9 shows the types of data gathered about antibiotic use and resistance in Denmark and the sources of these data. 
Researching methods and strategies to reduce antibiotic resistance transmission along the food chain. This figure is based on fiscal year 2010 funding levels, and is similar to funding for each year of the project. In 2010, NIFA was allocated up to $4 million to award two competitive grants related to antibiotic resistance and use (awarded to Kansas State University and Washington State University). NIFA expects to make decisions about similar grants for fiscal year 2011 in September, and to release award announcements in fiscal year 2012.

In addition to the individual named above, Mary Denigan-Macauley, Assistant Director; Kevin Bray; Antoine Clark; Julia Coulter; Cindy Gilbert; Janice Poling; Katherine Raheb; Leigh Ann Sennette; Ben Shouse; and Ashley Vaughan made key contributions to this report.

Antibiotic Resistance: Data Gaps Will Remain Despite HHS Taking Steps to Improve Monitoring. GAO-11-406. Washington, D.C.: June 1, 2011.

Federal Food Safety Oversight: Food Safety Working Group Is a Positive First Step but Governmentwide Planning Is Needed to Address Fragmentation. GAO-11-289. Washington, D.C.: March 18, 2011.

High Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011.

Veterinarian Workforce: Actions Are Needed to Ensure Sufficient Capacity for Protecting Public and Animal Health. GAO-09-178. Washington, D.C.: February 4, 2009.

Food Safety: Selected Countries' Systems Can Offer Insights into Ensuring Import Safety and Responding to Foodborne Illness. GAO-08-794. Washington, D.C.: June 10, 2008.

Avian Influenza: USDA Has Taken Steps to Prepare for Outbreaks, but Better Planning Could Improve Response. GAO-07-652. Washington, D.C.: June 11, 2007.

Antibiotic Resistance: Federal Agencies Need to Better Focus Efforts to Address Risk to Humans from Antibiotic Use in Animals. GAO-04-490. Washington, D.C.: April 22, 2004.
Food Safety: The Agricultural Use of Antibiotics and Its Implications for Human Health. GAO/RCED-99-74. Washington, D.C.: April 28, 1999.

Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1996.
Antibiotics have saved millions of lives, but antibiotic use in food animals contributes to the emergence of resistant bacteria that may affect humans. The Departments of Health and Human Services (HHS) and Agriculture (USDA) are primarily responsible for ensuring food safety. GAO reviewed the issue in 2004 and recommended improved data collection and risk assessment. GAO was asked to examine the (1) extent to which agencies have collected data on antibiotic use and resistance in animals, (2) actions HHS's Food and Drug Administration (FDA) took to mitigate the risk of antibiotic resistance in humans as a result of use in animals, (3) extent to which agencies have researched alternatives to current use practices and educated producers and veterinarians about appropriate use, and (4) actions the European Union (EU) and an EU member country, Denmark, have taken to regulate use in animals and lessons that have been learned. GAO analyzed documents, interviewed officials from national organizations, and visited producers in six states and Denmark. HHS and USDA have collected some data on antibiotic use in food animals and on resistant bacteria in animals and retail meat. However, these data lack crucial details necessary to examine trends and understand the relationship between use and resistance. For example, since GAO's 2004 report, FDA began collecting data from drug companies on antibiotics sold for use in food animals, but the data do not show what species antibiotics are used in or the purpose of their use, such as for treating disease or improving animals' growth rates. Also, although USDA agencies continue to collect use data through existing surveys of producers, data from these surveys provide only a snapshot of antibiotic use practices.
In addition, agencies' data on resistance are not representative of food animals and retail meat across the nation and, in some cases, because of a change in sampling method, have become less representative since GAO's 2004 report. Without detailed use data and representative resistance data, agencies cannot examine trends and understand the relationship between use and resistance. FDA implemented a process to mitigate the risk of new animal antibiotics leading to resistance in humans, which involves the assessment of factors such as the probability that antibiotic use in food animals would give rise to resistant bacteria in the animals, but it faces challenges mitigating risk from antibiotics approved before FDA issued guidance in 2003. FDA officials told GAO that conducting postapproval risk assessments for each of the antibiotics approved prior to 2003 would be prohibitively resource intensive, and that pursuing this approach could further delay progress. Instead, FDA proposed a voluntary strategy in 2010 that involves FDA working with drug companies to limit approved uses of antibiotics and increasing veterinary supervision of use. However, FDA does not collect the antibiotic use data, including the purpose of use, needed to measure the strategy's effectiveness. HHS and USDA have taken some steps to research alternatives to current antibiotic use practices and educate producers and veterinarians on appropriate use of antibiotics. However, the extent of these efforts is unclear because the agencies have not assessed their effectiveness. Without an assessment of past efforts, the agencies may be limited in their ability to identify gaps where additional research is needed. Except for one $70,400 USDA project, federal education programs have ended. Since 1995, the EU, including Denmark, has banned the use of antibiotics to promote growth in animals, among other actions. Some of their experiences may offer lessons for the United States.
For example, in Denmark, antibiotic use in animals initially decreased following a series of policy changes. The prevalence of resistant bacteria declined in food animals and retail meat in many instances, but a decline in humans has only occasionally been documented. Denmark's data on use and resistance helped officials track the effects of its policies and take action to reverse unwanted trends. The EU faces difficulty collecting data that can be compared across countries, but officials there said such data are needed to fully understand how use in animals may lead to resistance in humans. GAO recommends that HHS and USDA (1) identify and evaluate approaches to collecting detailed data on antibiotic use in animals and use these data to evaluate FDA's voluntary strategy, (2) collect more representative data on resistance, and (3) assess previous efforts on alternatives to identify where more research is needed. HHS and USDA agreed with GAO's recommendations.
In fiscal year 2001, the latest period for which data are available, the Minerals Management Service reported that it collected about $5.2 billion in gas royalties and about $2.3 billion in oil royalties. There are more than 20,000 producing federal leases located in the continental United States and Alaska and more than 2,000 producing federal leases in the waters off the shores of the United States. Despite the larger number of onshore leases, offshore leases (most of which are in the Gulf of Mexico) account for 81 percent of all federal oil and gas royalty payments. In general, royalty rates for onshore leases are 12-1/2 percent of the value of the oil and gas produced, whereas royalty rates for most offshore leases are 16-2/3 percent. The government generally distributes about half of the royalty payments collected onshore back to the states in which the leases are located. The government also shares with the coastal states a smaller portion of the royalty payments collected from offshore leases located within 3 miles of the coast, known as the 8(g) zone. However, the government does not share royalties from offshore leases beyond the 8(g) zone, where the majority of offshore oil and gas is produced. The collecting, reporting, and auditing of cash royalty payments have been challenging for MMS. MMS relies upon royalty payors to self-report the amount of oil and gas they produce, the value of this oil and gas, and the cost of transportation and processing that they deduct from royalty payments. There are concerns about the accuracy and reliability of these data. Although MMS is responsible for auditing these data, with more than 22,000 producing leases and often several companies paying royalties on each lease each month, the auditing becomes a formidable task.
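The rate and sharing rules described above combine in a straightforward way. The sketch below is illustrative only and is not part of the report: the $1 million production value is hypothetical, and the flat 50 percent onshore state share is a simplification of the statutory distribution rules.

```python
# Illustrative arithmetic for the royalty rates and sharing rules
# described above. All lease values are hypothetical, and the flat
# "about half" onshore state share is a simplifying assumption.

ONSHORE_RATE = 0.125        # 12-1/2 percent of production value
OFFSHORE_RATE = 1.0 / 6.0   # 16-2/3 percent for most offshore leases
ONSHORE_STATE_SHARE = 0.5   # roughly half of onshore royalties go to states

def royalty_due(production_value: float, offshore: bool = False) -> float:
    """Cash royalty owed on a lease's oil and gas production value."""
    rate = OFFSHORE_RATE if offshore else ONSHORE_RATE
    return production_value * rate

def state_portion(royalty: float, offshore: bool = False) -> float:
    """Approximate amount distributed to the state; royalties from
    offshore leases beyond the 8(g) zone are not shared with states."""
    return 0.0 if offshore else royalty * ONSHORE_STATE_SHARE

# A hypothetical lease producing $1 million of oil and gas value:
onshore = royalty_due(1_000_000)                 # 125000.0
offshore = royalty_due(1_000_000, offshore=True)
print(onshore, state_portion(onshore))           # 125000.0 62500.0
print(round(offshore, 2))                        # 166666.67
```

As the example shows, the higher offshore rate yields about a third more royalty per dollar of production than the onshore rate, yet (beyond the 8(g) zone) none of it is shared with states.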
In addition, there has been considerable disagreement between industry and MMS over the value of the oil and gas produced and the cost of transportation and processing deductions, leading to time-consuming and costly appeals and litigation. While most companies that lease federal lands pay their royalties in cash, the federal government can instead take a portion of the oil and gas that these companies produce—known as “taking royalties in kind.” The Congress authorized royalties in kind under the Mineral Leasing Act of 1920 and under the Outer Continental Shelf Lands Act of 1953. Standard leases for the exploration of oil and gas on federal properties reserve the right for the federal government to take its royalties in kind. The Federal Managers’ Financial Integrity Act of 1982 directed federal agencies to develop management control for safeguarding resources and required GAO to prescribe standards for agencies to follow in establishing management control. Management control plays a significant role in helping managers achieve strategic and annual performance goals that are required under the Government Performance and Results Act of 1993. Management control consists of several components: (1) an environment that sets a positive and supportive attitude toward management control and conscientious management (control environment); (2) an assessment of the risks that an organization faces from both external and internal sources (risk assessment); (3) procedures, techniques, and mechanisms that enforce management’s directives (management control activities); (4) recording and communicating information to management and to others that need it within the organization (information and communication); and (5) monitoring the quality of performance over time (monitoring). 
From January 1995 through September 2001, MMS took 178 million barrels of oil and 213 billion cubic feet of gas in kind primarily for three purposes: (1) to provide small refiners with a stable source of crude oil, (2) to fill the Strategic Petroleum Reserve (SPR), and (3) to study alternatives to the traditional system of cash royalty payments. MMS sold the majority of the oil that it took in kind to small refineries under the Small Refiners Program—a long-standing program designed to assist small refiners that are having difficulty obtaining an adequate supply of crude oil. MMS also transferred substantial quantities of federal royalty oil to the SPR as a safeguard against disruptions in the nation’s supply of crude oil. MMS takes lesser quantities of oil and gas in kind under a series of pilot sales in Wyoming and the Gulf of Mexico to study alternatives to the traditional system of cash royalty payments. In doing so, MMS has been testing whether it can improve the administrative efficiency of royalty collections and whether it can sell the federal royalty oil and gas for at least as much as it would have collected from traditional cash royalty payments. From January 1995 through September 2001, MMS sold to small refiners about 143 million barrels of oil, or about 25 percent of the federal government’s royalty share of all oil produced on federal lands during this time period. The amounts of oil taken in kind each year for small refiners have ranged from about 10 to 40 percent of the total federal royalty oil, as shown in figure 1. These amounts were worth from $138 million to $588 million, as shown in figure 2. The majority of federal royalty oil sold to small refiners since 1995 was produced in the Gulf of Mexico. Other purposes for which MMS took oil in kind, such as for the Wyoming and Gulf pilots and the SPR, are also shown in figures 1 and 2. Under the Mineral Leasing Act, as amended by P.L. 
79-506, if the Secretary of the Interior determines that there are insufficient supplies of crude oil available on the open market to refiners that do not have their own supply, the Secretary is required to give preference to these small refiners in selling federal royalty oil. Accordingly, the Secretary provides small refiners with a stable source of crude oil at equitable prices so that these small refiners can compete in areas dominated by integrated oil companies and large refiners. Although the Secretary has long held this authority, the Secretary conducted few sales prior to 1970 because of little interest from small refiners. The Secretary delegated the responsibility to administer small refiner sales to MMS shortly after its formation in 1982. After MMS assesses small refiners’ needs for crude oil, MMS identifies federal royalty oil to meet these needs, and then conducts sales. Often, more than one small refiner wanted to purchase the same oil, so MMS in recent years conducted a lottery to determine the purchaser. Prior to 2000, MMS relied upon the producer of the oil to report its sales value and subsequently billed the small refiner this amount plus an administrative fee to cover the costs of running the program. After billing the small refiners, however, MMS determined that the producers had understated the value of the oil, so MMS sent additional bills to the small refiners. These bills often surprised the small refiners, and in some cases, large bills threatened their financial solvency. Because small refiners were dropping out of the program owing to the uncertainty over the value of the oil, MMS changed its small refiner sales in 2000 from lottery-based sales to competitive auction-based sales. The bidders and MMS now agree to the price before receiving the oil, just as they do in sales of other federal royalty oil. The Congress established the Strategic Petroleum Reserve to provide emergency oil in the event of a disruption in petroleum supplies. 
The SPR consists of a series of underground salt caverns along the coastline of the Gulf of Mexico that can store up to 700 million barrels of oil. It is managed and maintained by the Department of Energy (DOE). Largely to reduce the federal deficit, the federal government withdrew and sold oil from the SPR in fiscal years 1996 and 1997. To replace the amounts withdrawn from the SPR, MMS assisted with the transfer of about 29 million barrels of federal royalty oil from the Gulf of Mexico to DOE in 1999 and 2000. This amount represented about 17 percent of the federal government’s royalty share of all oil produced on federal lands in each of these 2 years, as shown in figure 1. By filling the SPR, the federal government has forgone royalty revenues that it would otherwise have collected in cash. The Office of Management and Budget in February 1999 estimated that the total cost of filling the SPR would be $370 million, but oil prices have risen since then, and the total cost was probably higher. Refilling stopped in December 2000 but commenced again in April 2002 under presidential directive and is expected to continue into 2005. From April through July 2002, MMS assisted in transferring to DOE about 7.5 million barrels of oil, worth about $169 million. MMS plans to increase deliveries to DOE from 63,000 barrels per day in July 2002 to about 130,000 barrels per day in 2003. MMS began studying the use of federal royalty oil as an alternative to cash royalty payments through a series of pilot sales in Wyoming. Through nine consecutive sales that began in October 1998, MMS and the state of Wyoming collectively sold federal and state royalty oil. In doing so, MMS acquired information on how to group properties for sale and how to establish a price basis for bidding. 
Although the federal portion of these volumes far exceeded the state portion, we estimate that the federal oil that MMS sold during the 3-year period from October 1998 through September 2001 accounted for about 1 percent of the federal government’s royalty share of all oil produced on federal lands. MMS expanded its study of royalty oil to the Gulf of Mexico with two competitive sales, the first of which delivered oil to purchasers starting in November 2000. Unlike the pilots in Wyoming, the amount of federal royalty oil that MMS sold in the Gulf of Mexico reached significant quantities during the second pilot sale—about 32 times the amount of oil sold in Wyoming during the same 6-month period. We estimate that the federal royalty oil that MMS sold during this second sale, which commenced in October 2001 and ended in March 2002, might have accounted for about 20 percent of the federal government’s royalty share of all oil produced on federal lands during the term of the sale. MMS first began studying the taking of gas in kind by conducting a gas pilot in 1995. This pilot assessed the administrative efficiency and revenue impacts of taking gas in kind relative to cash royalty payments. MMS accepted about 6 percent of the federal royalty gas in the Gulf of Mexico and sold it through auctions for about $72.6 million. Although this pilot showed that MMS could execute the sale of royalty gas, MMS estimated that these sales resulted in about 6 percent less revenue than MMS would have received in cash royalty payments, or more than a $4 million loss. MMS attributed this loss primarily to unforeseen problems in securing transportation of the gas through pipelines and to industry’s volunteering the royalty gas for sale, rather than to MMS’s selecting this gas. MMS continued studying RIK and issued a report in 1997 that concluded that RIK sales could be administratively more efficient and could generate at least as much revenue as traditional cash royalty payments. 
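The reported shortfall from the 1995 gas pilot can be sanity-checked with simple arithmetic. This assumes the 6 percent is measured against the cash-royalty baseline (the exact basis is not stated); on that reading the implied loss indeed exceeds $4 million:

```python
# Back-of-the-envelope check of the gas pilot figures above. Assumes
# the 6 percent shortfall is relative to the cash-royalty baseline,
# which is an interpretation, not a figure stated by MMS.

rik_revenue = 72.6e6    # reported revenue from auctioning the royalty gas
shortfall_rate = 0.06   # RIK revenue ~6% below cash royalty payments

implied_cash_royalties = rik_revenue / (1 - shortfall_rate)
loss = implied_cash_royalties - rik_revenue
print(f"implied cash royalties: ${implied_cash_royalties / 1e6:.1f} million")
print(f"implied loss:           ${loss / 1e6:.2f} million")
```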
MMS began testing these conclusions with a series of pilot sales in the Gulf of Mexico that began in December 1998. The gas that MMS sold during these pilot sales averaged about 10 percent of the federal government’s royalty share of all gas produced on federal lands from January 2000 through September 2001, as shown in figure 3. The annual revenues that MMS reported collecting from the sale of this federal royalty gas are illustrated in figure 4. MMS studied various methods of selling this royalty gas, including negotiating the sales price, paying gas marketers to aggregate smaller volumes of gas into larger volumes, and auctioning the gas. As a result of these pilot studies, MMS decided to sell federal royalty gas through auctions open to all buyers meeting minimum standards of credit worthiness. Management control is a necessary safeguard to protect against the risks of fraud, waste, abuse, and mismanagement. MMS has made progress in establishing some components of management control over its RIK Program, such as (1) identifying and mitigating the risks associated with oil and gas sales and (2) developing written procedures for these sales and for collecting and reporting revenues. However, MMS has yet to develop several key management control activities and does not plan to develop them until 2004, when it will consider the RIK Program to have changed from a pilot status to a fully operational status. Specifically, MMS has not clearly defined its strategic objectives, linked performance measures to these objectives, and collected the necessary information to monitor and evaluate the RIK Program. The Federal Managers’ Financial Integrity Act of 1982 directs federal agencies to develop management control for safeguarding resources against the risks of fraud, waste, abuse, and mismanagement. 
Management control is critical to ensure that revenues and expenditures from agency operations are recorded and accounted for properly and that financial and statistical reports are reliable. The act also directs us to issue standards for management control within the federal government. These standards provide broad criteria for agencies to use, in conjunction with guidance issued by the Office of Management and Budget. Management control includes (1) developing strategic objectives, (2) linking performance measures to these objectives, (3) collecting the necessary information to monitor and evaluate performance, (4) identifying and mitigating risks, and (5) developing written procedures and documenting compliance with these procedures. Management control also plays an important role in helping managers comply with the Government Performance and Results Act of 1993 (Results Act), which requires federal agencies to establish strategic goals, measure performance, and report on accomplishments. The Results Act shifts the focus of federal agencies away from traditional concerns, such as staffing and reporting on activities, toward achieving results. There is no more important element in results-oriented management than an agency’s strategic-planning process, and establishing formal strategic objectives can help clarify what the agency seeks to accomplish and can help unify the agency’s staff in achieving its goals. MMS has begun to establish management control over its RIK Program by addressing the risk that oil and gas sales will be unsuccessful, addressing inherent risks associated with the sale of oil and gas, and developing written procedures for various activities within the Royalty-in-Kind Program. These activities include conducting RIK sales, collecting revenues, and reporting on revenues. MMS also has made progress in documenting the results of its RIK sales. 
MMS has addressed the risk that RIK sales will be unsuccessful by ensuring that prior to these sales, certain conditions exist for the properties from which MMS will sell royalty oil and gas. In 1998, we identified the conditions necessary for successful oil and gas sales by surveying state governments, universities, and the Province of Alberta, which, at various times, had programs that took oil and gas in kind. We identified several conditions that made these programs feasible. In particular, these programs seemed successful if these entities had (1) relatively easy access to pipelines, (2) properties that produce relatively large volumes of oil or gas, (3) favorable arrangements for processing gas, and (4) expertise in marketing oil and gas. MMS has considered these conditions in addressing risk. Specifically, MMS’s practice of negotiating the cost of transporting gas through pipelines helps to secure relatively easy access to pipelines. Similarly, MMS’s practice of grouping the properties that produce royalty oil or gas according to the pipelines to which they are connected helps ensure that properties produce relatively large volumes of oil or gas. MMS has also arranged for the processing of natural gas and has increased its knowledge of oil and gas marketing by hiring consultants and interviewing oil and gas marketers and representatives of pipeline companies in Wyoming and the Gulf Coast. MMS has also developed procedures to manage the inherent risks, or uncertainties, in the selling of oil and gas. Such risks include fluctuating oil and gas prices, the varying amount of oil and gas that wells produce, and the credit worthiness of purchasers. To manage the risk associated with fluctuating prices, for example, MMS does not try to maximize revenues by guessing which way the market will move but, instead, accepts bids relative to the fluctuating market prices. Thus, MMS avoids substantial losses that could result from wrong guesses. 
MMS also manages the risk due to the inability of properties to deliver consistent quantities of gas, which could require that MMS purchase or supply more costly alternative gas in the event of a shortfall. MMS manages this risk by guaranteeing that it will deliver only a portion of the gas (base volume) at a stable price and offering the other portion (swing volume), without guarantee, at published prices that vary daily. MMS has also developed procedures to monitor the credit worthiness of oil and gas purchasers and can terminate their sales contract or demand additional credit guarantees, if necessary. These procedures led MMS to promptly cancel its contract with Enron, thereby limiting losses to 1 month’s worth of gas production from the Enron contract. MMS has developed written procedures for conducting RIK sales activities, collecting revenues from these sales, and reporting on these revenues. Sales activities include identifying properties from which to take oil and gas in kind, announcing the oil and gas for sale, determining a minimum acceptable bid, analyzing bids, and awarding contracts. We examined documents for sales that MMS conducted from October 1998 through October 2002 and found documentation of these activities in all sales in which they were appropriate. However, we did not determine the adequacy of MMS’s procedures for collecting and reporting on revenues, nor did we assess the degree to which MMS complied with these procedures. MMS developed the following seven strategic objectives for the RIK Program: Implement RIK where applicable and when it is an improvement over traditional cash royalty payments (royalty in value). Leverage MMS’s position as an asset holder. Take advantage of potential interagency synergies. Minimize the cost of royalty administration. Reduce business cycle time (the time to collect, disburse, audit, and reconcile revenues). Accelerate timing of revenue collections. 
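The base/swing arrangement described above can be modeled in a few lines. All volumes and prices here are hypothetical; the sketch only illustrates why a production shortfall falls on the unguaranteed swing portion first:

```python
# Minimal sketch of the base/swing split described above. Only the
# base volume is guaranteed at a stable price; the swing volume is
# offered, without guarantee, at a published daily index price.
# Volumes (MMcf) and prices ($/MMcf) are invented for illustration.

def daily_revenue(produced, base_volume, base_price, index_price):
    # The guaranteed base volume is delivered first at the fixed price...
    base_sold = min(produced, base_volume)
    # ...and any remaining production is sold as swing at the daily index.
    swing_sold = max(produced - base_volume, 0)
    return base_sold * base_price + swing_sold * index_price

# Hypothetical day: only 80 MMcf produced against a 70 MMcf base
# commitment, with swing gas priced at a $3.40 daily index.
print(daily_revenue(80, 70, 3.00, 3.40))  # 70*3.00 + 10*3.40 = 244.0
```

Because only the base volume is guaranteed, a shortfall below 100 MMcf of expected production here simply shrinks the swing sale rather than forcing MMS to buy more costly replacement gas.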
Adopt energy industry business practices and controls wherever feasible. Overall, none of the seven objectives address the revenue impacts of the RIK sales. The seven objectives do not address requirements in the law that MMS (1) collect at least as much revenue from the RIK pilots as it would have from traditional cash royalty payments and (2) obtain fair market value. The Congress directed MMS in the fiscal years 2001 and 2002 Appropriations Acts for Interior and Related Agencies to collect at least as much revenue from the sale of royalties in kind as MMS would have collected from traditional cash royalty payments. Moreover, the Congress had previously directed the Secretary of the Interior in the Mineral Leasing Act of 1920 and the Outer Continental Shelf Lands Act of 1953 to obtain fair market value for oil and gas taken in kind. The Congress defined “fair market value” in the Outer Continental Shelf Lands Act as the average unit price for the mineral sold either from the same lease or, if such sales did not occur, in the same geographic area. Furthermore, the first three objectives are not expressed in either a quantitative or measurable form. The last four objectives, although quantitative, address administrative efficiency only. Without objectives to guide agency staff in the quantitative evaluation of the revenue impacts of RIK sales, MMS will be unable to determine whether RIK sales generate more or less revenue than traditional cash royalty payments; whether MMS obtains fair market value; and hence, whether it should convert the RIK pilots to an operational status. MMS has also not developed any performance measures that it linked to the seven strategic objectives for its RIK Program. 
However, MMS has developed two performance measures—(1) confirm and reconcile, within 90 days, all production royalties taken in kind and (2) accelerate the timing of revenue receipt by 5 days over traditional cash royalty payments (royalty in value)—that are linked to the broader agency-wide objective of “collecting royalties in the shortest time possible.” In addition to supporting the broad agency-wide objective, these two performance measures support RIK Program objectives that are designed to improve administrative efficiency. MMS officials told us that they intend to develop performance measures that are specific to the RIK Program in 2004, when the RIK Program changes from the pilot status to a fully operational status and they acquire and fully implement new information systems that can better measure performance. After 5 years of conducting pilot programs and completing 24 oil and gas pilot sales, MMS’s ability to effectively and efficiently monitor and evaluate its RIK Program is limited because it has not obtained the necessary information to do so. This information includes the administrative costs of the RIK Program, the savings from avoiding potential litigation and appeals, the savings in auditing properties, and the revenue impacts of all sales. MMS lacks information largely because it has not developed an information systems infrastructure to rapidly and efficiently collect this information. Without quantitative costs, savings, and revenue information, MMS is unable to determine the program’s overall cost and effectiveness, whether RIK generates at least as much revenue as traditional cash royalty payments, and whether the RIK Program should be expanded or contracted. MMS has not quantified the costs of administering the RIK Program. 
Such costs, which MMS incurs when selling RIK but does not incur when collecting traditional cash royalty payments, result from identifying properties from which to sell oil and gas, calculating minimum acceptable bids, analyzing bids, awarding and monitoring contracts, billing purchasers, negotiating transportation rates, reconciling discrepancies in volume, and comparing RIK revenues with traditional cash royalty payments. MMS has not quantified these costs because its current personnel, payroll, and budgeting systems do not capture data in sufficient detail. Although MMS tracks employees’ time charges with these systems, MMS does not distinguish between time charges that support only the RIK Program and time charges that support both the RIK Program and the traditional system of collecting cash royalties. Similarly, MMS has not decided how to assign the cost of MMS’s financial system and other significant overhead costs to the RIK Program and to the traditional cash royalty system. MMS officials told us, however, that they plan to implement an activity-based cost-accounting system in fiscal year 2003 that will assist in resolving these issues. MMS also has not quantified anticipated savings from avoiding potential appeals and litigation by selling oil and gas in kind. MMS officials explained that MMS anticipates that it can avoid substantial costs associated with appeals and litigation involving primarily the valuation of natural gas and the transportation of both oil and gas. MMS officials have not estimated the costs of appeals because of problems with implementing the information system that tracks these costs and because of their uncertainty that these costs are recorded in a consistent manner. In addition, the Office of the Solicitor within the Department of the Interior, which is responsible for litigation concerning MMS’s activities, does not have an automated system to track litigation costs. 
Although MMS anticipates that the cost of auditing revenues will decrease because of taking RIK, MMS has not quantified these savings. MMS anticipates substantial savings because verifying the value of oil and gas is much easier when taking RIK: the purchaser and MMS agree to the sales price before the sale occurs. Similarly, when MMS negotiates transportation costs itself, it knows the exact transportation rate that companies can charge MMS, unlike when companies pay royalties in cash. In addition, MMS does not need to audit transportation costs when MMS sells royalty oil or gas at the location of the lease because there are no transportation costs, since the buyer assumes the responsibility for transportation. Although MMS has projected decreases in the number of staff auditors as a result of future RIK sales, MMS has not finalized these estimated savings because MMS is uncertain of how much oil and gas it will take in kind in the future. MMS officials also question the reliability of the time that auditors have charged to the RIK Program in the past—information that formed the baseline for their projections. MMS also has not fully quantified the revenue impacts of all the royalty oil and gas that it sold, preventing a comprehensive comparison between RIK sales revenues and the revenues that MMS would have received under the traditional cash royalty system. MMS does analyze factors that affect the revenues of upcoming RIK sales, including current oil and gas prices; anticipated market conditions; and transportation and processing, if applicable. However, MMS does not systematically compare RIK sales revenues with what it would have received in traditional cash royalties after these gas sales are completed. Of the 15.8 million barrels of federal royalty oil sold in pilot sales from October 1998 through July 2002, MMS quantified the revenue impacts of about 9 percent. 
Of the approximately 241 billion cubic feet of federal royalty gas that MMS sold from December 1998 through March 2002, we estimate that MMS quantified, either in whole or in part, the revenue impacts resulting from the sale of about 44 percent of this gas. Although MMS analyzed revenue impacts from 44 percent of the federal royalty gas it sold, almost none of this analysis was done in a timely manner, thereby precluding the use of this information to improve or modify subsequent sales. For example, MMS did not complete the evaluation of the gas that it sold competitively each month over a 19-month period until after it had discontinued selling gas in this manner. Similarly, MMS did not evaluate the revenue impacts of using a gas marketer to aggregate gas volumes until 1 year after it terminated these sales. If MMS had evaluated these aggregated sales earlier, it might have discontinued this method of selling royalty gas because it would have confirmed employees’ suspicions during the initial sale that the manner in which gas was being sold was disadvantageous to MMS. Instead, MMS let another three contracts with similar terms, resulting in an overpayment of almost $3 million on transportation valued at about $13 million. MMS’s information systems hinder the timely monitoring and evaluation of the RIK Program and the evaluation of the revenue impacts from individual sales. The RIK Program’s current system for managing RIK sales revenues consists of a series of unlinked computer spreadsheets into which personnel manually enter RIK data. Such a manual system is prone to errors, which could lead to inaccurate information. Prior to September 2002, RIK Program personnel did not compile basic monthly reports on revenues collected and royalty volumes sold, which could have been used to monitor the RIK Program on a periodic basis. 
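The after-sale comparison that was performed only sporadically and late can be sketched as follows. The sale names and all dollar figures are invented for illustration; only the comparison itself reflects the evaluation described in this report:

```python
# Hypothetical sketch of a per-sale revenue-impact evaluation:
# comparing RIK proceeds against an estimated cash-royalty baseline
# once a sale is complete. Sale names and figures are invented.

def revenue_impact(rik_proceeds, baseline_cash_royalty):
    """Return (dollar difference, percent difference vs. the baseline)."""
    diff = rik_proceeds - baseline_cash_royalty
    return diff, diff / baseline_cash_royalty * 100

completed_sales = {                       # (RIK revenue, estimated baseline)
    "gulf_gas_pilot":    (10.2e6, 10.0e6),
    "wyoming_oil_pilot": (4.7e6, 5.0e6),
}
for name, (rik, baseline) in completed_sales.items():
    diff, pct = revenue_impact(rik, baseline)
    print(f"{name}: {diff:+,.0f} dollars ({pct:+.1f}% vs. cash royalties)")
```

Run shortly after each sale closes, this kind of comparison would have surfaced a disadvantageous sales method before similar contracts were let again.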
Also, limitations of MMS’s agency-wide financial system—the system that generates agency-wide accounting reports and maintains and manages all royalty data—currently hamper the timely comparison of RIK sales revenues with cash royalty payments. MMS personnel were unable to use the financial system to produce summary data that were more current than 1 year old. As of October 2002, for example, MMS personnel were unable to use the financial system to determine how much total revenue MMS collected and how much oil and gas had been produced from federal lands since September 2001. MMS personnel also said that because of missing or erroneous data in the agency-wide financial system, data extracted from this system cannot be used in revenue comparisons without time-consuming checks for accuracy and reasonableness. Furthermore, it will be more difficult to use RIK gas data in this system to calculate revenue impacts because MMS personnel do not enter these data at the lease level. Lastly, RIK Program personnel said that because they have to manually acquire data to evaluate federal properties for prospective sales, the growth of the RIK Program has slowed. MMS officials also said that they have not evaluated the revenue impacts from the sales of all royalty oil and gas largely because they have delayed the development of performance measures for the RIK Program until 2004. These performance measures will incorporate benchmarks against which to compare RIK sales revenues. MMS personnel said that MMS has generally encountered difficulty in establishing benchmarks against which to measure the revenue impacts of RIK oil and gas sales because once it takes all federal royalty oil or gas in kind in a specific area, it no longer receives any traditional cash royalty payments for comparison. 
However, MMS officials explained that by 2004, MMS expects to acquire and fully implement two additional information systems dedicated to the RIK Program that will automate the acquisition of necessary information for attempting revenue comparisons. MMS personnel said that they had not acquired these automated systems earlier because they believed that they first needed to process a large number of transactions and sell a large volume of oil and gas before they could justify the expense of acquiring these systems. MMS has begun to establish management control over its RIK Program. It has initiated positive steps to address the risks that affect its oil and gas sales and has developed written procedures for various activities within the RIK Program. MMS has also made progress in documenting the results of its RIK sales. However, MMS has not established clear objectives for the program that are linked to statutory requirements. MMS’s current objectives for its RIK Program are not clearly linked to requirements in the law that MMS (1) collect at least as much during pilot sales as it would have collected in cash royalty payments and (2) obtain fair market value. In addition to the lack of objectives linked to statutory requirements, MMS is not systematically collecting the necessary information to monitor and evaluate the RIK program. Such information includes the administrative costs of the RIK program, anticipated savings from reductions in audit efforts and from avoiding appeals and litigation, and the revenue impacts of all sales. Without clear objectives and the systematic collection of evaluative information, MMS cannot assess and ultimately determine whether it should expand or contract the use of royalty in kind sales. 
To continue the further development of management control for the Minerals Management Service’s Royalty-in-Kind Program, we recommend that the Secretary of the Interior instruct the appropriate managers within the Minerals Management Service to do the following: Clarify the Royalty-in-Kind Program’s strategic objectives to explicitly state that goals of the Royalty-in-Kind pilots include obtaining fair market value and collecting at least as much revenue as MMS would have collected in cash royalty payments. Prior to expanding the Royalty-in-Kind Program, identify and acquire key information needed to monitor and evaluate performance. Such information, as identified by the Minerals Management Service, should include the revenue impacts of all Royalty-in-Kind sales, administrative costs of the Royalty-in-Kind Program, estimates of savings in avoiding potential litigation, and expected savings in auditing revenues. We provided the Department of the Interior with a draft of this report for review and comment. Interior fundamentally agreed with our observations and recommendations and emphasized MMS’s future plans for improving management control over the RIK Program. Where appropriate, we have included additional references to the activities that Interior mentions in its comments. Interior’s comments and our response to these comments are reproduced in appendix I. In reviewing MMS’s RIK Program, we reviewed congressional directives in pertinent legislation; standards for the development of management control issued by us and the Office of Management and Budget; and prior reports and documentation on the Small Refiners Program, Strategic Petroleum Reserve, and RIK pilots. We also obtained statistical information from MMS on oil and gas volumes taken in kind and the revenue that MMS generated by selling these volumes. 
In addition, we reviewed documentation pertaining to management control and interviewed MMS personnel about their efforts to establish management control over the RIK Program. We conducted our work from January to November 2002 in accordance with generally accepted government auditing standards. For a more detailed discussion of the scope and methodology of our review, see appendix II. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from the date of this letter. At that time, we will send copies of this report to the Secretary of the Interior; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. This report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please call Mark Gaffigan or me at (202) 512-3841. Key contributors to this report are listed in appendix III. The following are GAO’s comments on the Department of the Interior’s letter dated December 13, 2002. 1. We clarified our report to reflect these comments. 2. We acknowledge that the Minerals Management Service’s (MMS) difficulties in obtaining royalty data from its financial system may be due, in part, to the court-ordered shutdown of this financial system in December 2001. However, 9 months had passed since operation of the financial system was restored on March 23, 2002. Additionally, MMS personnel said that the statistical subsystem designed to generate routine summary data that we requested for October 2001 through July 2002 had not yet been deployed and was not expected to be deployed until April 2003 at the earliest. 3. We expressed Royalty-in-Kind (RIK) volumes as a percentage of total federal royalty oil and gas volumes to show the overall significance of taking royalties in kind compared with receiving cash royalty payments. 
Using percentages also made it easier to show that large percentages of oil were taken in kind for the Strategic Petroleum Reserve (SPR) and for the Small Refiners Program relative to the small percentages taken for pilot purposes. In expressing RIK volumes as percentages, we used actual RIK sales volumes supplied by MMS but had to estimate the total federal royalty volumes because MMS does not maintain these data. 4. In this report, we state that MMS’s strategic objectives do not address the requirements in the law because nowhere in the seven strategic objectives is there reference to the terms “fair market value” or “collecting at least as much revenue as would have been collected in cash royalty payments.” In its response, Interior states that it has intended to accomplish these legislative mandates, and Interior apparently believes that these intentions are implied by the strategic objective stating that MMS will implement RIK “when it is an improvement over traditional cash royalty payments.” In light of Interior’s agreeing with us that the objectives for the RIK Program should include achieving fair market value and collecting revenues at least equal to what MMS would have collected in cash royalty payments, we continue to recommend that MMS clarify the language in its strategic objectives to reflect these intentions. 5. We acknowledge that MMS performs substantial analysis prior to converting leases from traditional cash royalty status to RIK. For oil sales, MMS generally calculated a minimum acceptable bid that bidders had to exceed before MMS made an award. For gas sales, MMS relied upon gas indexes to assess bids. While relying on minimum acceptable bids and gas indexes prior to a sale is a first step in ensuring that RIK revenues will equal or exceed cash royalty payments, MMS cannot determine actual revenue impacts until after the sales are completed. 
To effectively monitor and evaluate the performance of the RIK pilot sales, MMS should calculate revenue impacts in a timely manner after sales are completed and adjust future sales on the basis of these results. Relying on codified valuation regulations as an indicator of what MMS would have collected in cash royalty payments is not as straightforward as Interior implies, and the application of valuation regulations is often a source of dispute between MMS and industry. For example, MMS often does not know which provision of the valuation regulations will apply to future royalty collections from a given lease until after the sale. Also, MMS's market analyses suggest that many of the provisions for valuing oil and gas sold to affiliated companies may no longer reflect the manner in which many companies buy and sell oil and gas today. To compensate for these uncertainties, MMS must use considerable judgment in estimating revenue impacts prior to RIK sales. While MMS has evaluated the revenue impacts after some completed sales, MMS has not evaluated the revenue impacts of all sales. We point out in this report that MMS evaluated the revenue impacts, either in whole or in part, of about 9 percent of the oil sold in kind and about 44 percent of the gas sold in kind. With regard to the Wyoming oil pilots and the Texas 8(g) gas pilots that Interior mentions in commenting on this report, MMS evaluated and published the results of 3 of the 8 completed pilot sales in Wyoming and 19 of the 29 monthly Texas 8(g) sales. Furthermore, only a few of MMS's analyses were done in a timely manner, precluding MMS from using this information to modify subsequent sales. For example, MMS did not analyze the revenue impacts of the Texas 8(g) monthly sales or its aggregated gas sales until after it had discontinued selling gas by these methods.
However, we encourage MMS to analyze the revenue impacts of its Gulf of Mexico oil pilots despite these sales' current suspension because the oil from these properties is being transferred to the SPR. The results of such a study could be useful, should MMS continue the Gulf of Mexico oil pilots in the future. 6. MMS supplied us with the estimated loss of about $3 million on the aggregation contracts. We calculated that transportation was worth about $13 million on the basis of transportation costs and volumes supplied by MMS. MMS reported that the total value of royalty payments on the aggregated gas was about $363 million. 7. Our assessment that MMS has difficulty obtaining royalty information from its financial system is based largely on MMS personnel, who have used these data to estimate the revenue impacts of RIK sales and told us that they could not use these data without first performing time-consuming checks for accuracy and reasonableness. At our request, these personnel supplied us with royalty data from nine Wyoming oil properties that we estimate accounted for about 50 percent of the oil sold during the Wyoming pilots. Although we did not find widespread systemic problems with this small data set, we confirmed that a small amount of missing, incomplete, and inaccurate data, in addition to numerous modifications of data entries by payors (adjustments), precluded using these data for calculating revenue impacts without first inspecting these data for accuracy and reasonableness. We confirmed that the manual inspection of these data was time-consuming. In addition, MMS personnel told us that RIK gas data are not entered into the system at the lease level, and we believe this will complicate comparing RIK revenues with cash royalty payments.
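The revenue-impact comparison discussed above can be sketched in a few lines of Python. The figures below are purely hypothetical and stand in for MMS's far more involved valuation work; the point is only that the comparison requires an after-the-fact estimate of what a traditional cash royalty would have yielded:

```python
def revenue_impact(rik_sale_revenue, estimated_cash_royalty):
    """Difference between what an RIK sale actually brought in and what
    MMS estimates it would have collected as a traditional cash royalty.
    A positive result means the in-kind sale outperformed cash royalties."""
    return rik_sale_revenue - estimated_cash_royalty

# Hypothetical monthly pilot sale: the RIK sale fetched $1.02 million
# against an estimated $1.00 million traditional cash royalty.
impact = revenue_impact(1_020_000, 1_000_000)
print(impact)  # 20000
```

Because the cash-royalty side of this subtraction depends on judgment-laden valuation estimates, timely post-sale analysis of actual results is what allows MMS to adjust subsequent sales.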
In this report, we discuss (1) the extent to which the Minerals Management Service has taken federal royalties in kind since 1995 and the reasons for doing so and (2) the status of MMS's efforts to implement management controls for its RIK program. To determine the extent to which and the purposes for which MMS has taken RIK since 1995, we reviewed legislative directives concerning RIK in the Mineral Leasing Act of 1920, the Outer Continental Shelf Lands Act of 1953, and the Appropriations Acts for the Interior and Related Agencies for fiscal years 1995 through 2002. We also reviewed presidential directives for using federal royalty oil to fill the SPR. We reviewed prior reports and other documentation on the Small Refiners Program, the SPR, and the RIK pilots in Wyoming and the Gulf of Mexico. We then asked MMS personnel to supply data on the amount and values of federal royalty oil and gas taken in kind and of total oil and gas royalties from January 1995 through July 2002. Although MMS personnel within the RIK Program could supply data on RIK revenues and volumes taken in kind during this time period, they could not supply data on total royalty revenues and the total amount of oil and gas produced on federal lands that were more current than September 2001. We did not review the accuracy of these figures. To review the status of MMS's efforts to implement management control over its RIK Program, we reviewed the Federal Managers' Financial Integrity Act of 1982, the standards for management control that we issued entitled Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999), and the implementation guidance issued by the Office of Management and Budget in OMB Circular A-123.
We also reviewed our tool for assessing an agency's management controls entitled Internal Control Management and Evaluation Tool (GAO-01-1008G, August 2001) and our guide for assessing an agency's strategic plan entitled Agencies' Strategic Plans Under GPRA: Key Questions to Facilitate Congressional Review (GAO/GGD-10.1.16, May 1997). Standards for Internal Control in the Federal Government establishes the criteria that agencies must meet in developing and maintaining management control, which is not one event but a series of actions and activities that occur throughout an agency's operations on an ongoing basis. Our review focused on MMS's efforts to address risks that could affect the RIK Program and on some management control activities that we identified as being critical to MMS's implementation and management of the program. These management control activities are (1) developing strategic objectives, (2) linking performance measures to these objectives, (3) obtaining the necessary data for making management decisions and for monitoring and evaluating the RIK Program, and (4) developing written procedures and documenting compliance with these procedures. We assessed MMS's efforts to establish these management control activities by reviewing relevant documentation and interviewing MMS personnel. We reviewed MMS's efforts to mitigate the risks associated with differences in the properties that produce federal oil and gas, fluctuating oil and gas prices, disruptions in production, and creditworthiness. In assessing strategic objectives and linked performance measures, we reviewed these objectives and measures for their results-orientation, clarity, specificity, ability to be expressed quantitatively or in a measurable form, and consistency with congressional directives.
In reviewing the availability of key data for management decisions and monitoring and evaluating the RIK Program, we assessed the extent to which MMS had determined the revenue impacts of all RIK sales, the administrative cost of operating the RIK Program relative to collecting cash royalties, and the expected savings from avoiding litigation and appeals and simplifying auditing. We also examined whether MMS had compared revenue impacts from each RIK sale with expected revenues from traditional cash royalty payments or other benchmarks and assessed whether MMS had collected monthly RIK revenues and sales volumes for monitoring purposes. In reviewing MMS’s efforts to develop written procedures, we determined if written procedures existed as of January 1, 2002, for conducting sales activities, collecting revenues, and reporting on these revenues. We determined major sales activities to be the selection of properties from which to sell RIK, the announcement of the sale, the calculation of a minimum acceptable bid, the evaluation of bids, and the determination of the winning bidders. For each sale completed as of October 2002, we also reviewed whether MMS documented these major activities. However, we did not assess the adequacy of written procedures to collect and report on revenues, nor did we assess MMS’s compliance with these procedures. Because at the time of our review, MMS had not implemented an automated system to support the RIK Program, we reviewed its current manual system and its efforts to acquire automated systems. In addition to those named above, Letha Angelo, Ronald Belak, Robert Crystal, Cynthia Norris, Frank Rusco, Dawn Shorey, Jamelyn Smith, and Maria Vargas made key contributions to this report.
In fiscal year 2001, the federal government collected $7.5 billion in royalties from the sale of oil and gas produced on federal lands. Although most oil and gas companies pay royalties in cash, the Department of the Interior's Minerals Management Service (MMS) has the option to take a percentage of the oil and gas produced and either transfer this percentage to other federal agencies or sell it itself--a practice known as "taking royalties in kind." GAO reviewed the extent to which MMS has taken royalties in kind since 1995, the reasons for taking royalties in kind, and MMS's progress in implementing management control over its Royalty-in-Kind Program. From January 1995 through September 2001, the Minerals Management Service (MMS) took, in kind, 178 million barrels of oil and 213 billion cubic feet of gas, or 32 percent of the federal government's royalty share of all oil and 3 percent of the federal government's royalty share of all gas produced on federal lands. MMS sold the majority of this oil--143 million barrels--to small refiners in accordance with long-standing legislation. MMS also took 29 million barrels of federal royalty oil to fill the Strategic Petroleum Reserve. MMS took the remaining 6 million barrels of oil in kind and all the gas in kind under a series of pilot projects to evaluate whether there are additional circumstances under which taking royalties in kind is in the best interest of the federal government. MMS personnel have made progress in implementing some components of management control for its Royalty-in-Kind Program, such as addressing the risks associated with oil and gas sales and developing written procedures. However, MMS does not plan to complete and implement all management controls until 2004, when it will consider the Royalty-in-Kind pilots to have changed from a pilot stage to a fully operational stage and when it will have acquired additional systems support.
To date, MMS has neither developed clear strategic objectives linked to statutory requirements nor collected the necessary information to effectively monitor and evaluate the Royalty-in-Kind Program. Without clear objectives linked to statutory requirements and the collection of necessary information, MMS cannot systematically assess whether Royalty-in-Kind sales are administratively less costly, whether they generate fair market value or at least as much revenue as traditional cash royalty payments, and thus whether MMS should expand or contract the Royalty-in-Kind Program.
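The oil-volume figures in this summary are internally consistent, as a quick check using only the numbers reported above shows. Note that the implied total federal royalty oil volume is an inference from the stated 32 percent share, since GAO reports that MMS does not itself maintain total royalty volume data:

```python
# Federal royalty oil taken in kind, Jan 1995 - Sep 2001 (millions of barrels),
# broken out by purpose as reported in the summary above.
oil_in_kind = {
    "small refiners": 143,
    "Strategic Petroleum Reserve": 29,
    "pilot projects": 6,
}
total_in_kind = sum(oil_in_kind.values())
print(total_in_kind)  # 178, matching the total cited above

# 178 million barrels was 32 percent of the federal royalty share of all oil,
# implying roughly 556 million barrels of total federal royalty oil.
implied_total_royalty_oil = total_in_kind / 0.32
print(round(implied_total_royalty_oil))  # 556
```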
VA operates one of the largest health care systems in America, providing care to millions of veterans and their families each year. The department's health information system—VistA—serves an essential role in helping the department to fulfill its health care delivery mission. Specifically, VistA is an integrated medical information system that was developed in-house by the department's clinicians and information technology (IT) personnel, and has been in operation since the early 1980s. The system consists of 104 separate computer applications, including 56 health provider applications; 19 management and financial applications; 8 registration, enrollment, and eligibility applications; 5 health data applications; and 3 information and education applications. Within VistA, an application called the Computerized Patient Record System enables the department to create and manage an individual electronic health record for each VA patient. Electronic health records are particularly crucial for optimizing the health care provided to veterans, many of whom may have health records residing at multiple medical facilities within and outside the United States. Achieving interoperability—that is, the ability to collect, store, retrieve, and transfer veterans' health records electronically—is key to improving the quality and efficiency of care. One of the goals of interoperability is to ensure that patients' electronic health information is available from provider to provider, regardless of where it originated or resides. Since 1998, VA has undertaken a patchwork of initiatives with DOD to allow the departments' health information systems to exchange information and increase interoperability.
Among others, these have included initiatives to share viewable data in the two departments' existing (legacy) systems, link and share computable data between the departments' updated health data repositories, and jointly develop a single integrated system that would be used by both departments. Table 1 summarizes a number of these key initiatives. In addition to the initiatives mentioned in table 1, VA has worked in conjunction with DOD to respond to provisions in the National Defense Authorization Act for Fiscal Year 2008, which required the departments to jointly develop and implement fully interoperable electronic health record systems or capabilities in 2009. Yet, even as the departments undertook numerous interoperability and modernization initiatives, they faced significant challenges and slow progress. For example, VA's and DOD's success in identifying and implementing joint IT solutions has been hindered by an inability to articulate explicit plans, goals, and time frames for meeting their common health IT needs. In March 2011, the secretaries of VA and DOD announced that they would develop a new, joint integrated electronic health record system (referred to as iEHR). This was intended to replace the departments' separate systems with a single common system, thus sidestepping many of the challenges they had previously encountered in trying to achieve interoperability. However, in February 2013, about 2 years after initiating iEHR, the secretaries announced that the departments were abandoning plans to develop a joint system, due to concerns about the program's cost, schedule, and ability to meet deadlines. The Interagency Program Office (IPO), put in place to be accountable for VA's and DOD's efforts to achieve interoperability, reported spending about $564 million on iEHR between October 2011 and June 2013.
In light of VA and DOD not implementing a solution that allowed for the seamless electronic sharing of health care data, the National Defense Authorization Act for Fiscal Year 2014 included requirements pertaining to the implementation, design, and planning for interoperability between the departments’ electronic health record systems. Among other actions, provisions in the act directed each department to (1) ensure that all health care data contained in their systems (VA’s VistA and DOD’s Armed Forces Health Longitudinal Technology Application, referred to as AHLTA) complied with national standards and were computable in real time by October 1, 2014; and (2) deploy modernized electronic health record software to support clinicians while ensuring full standards-based interoperability by December 31, 2016. In August 2015, we reported that VA, in conjunction with DOD, had engaged in several near-term efforts focused on expanding interoperability between their existing electronic health record systems. For example, the departments had analyzed data related to 25 “domains” identified by the Interagency Clinical Informatics Board and mapped health data in their existing systems to standards identified by the IPO. The departments also had expanded the functionality of their Joint Legacy Viewer—a tool that allows clinicians to view certain health care data from both departments in a single interface. More recently, in April 2016, VA and DOD certified that all health care data in their systems complied with national standards and were computable in real time. However, VA acknowledged that it did not expect to complete a number of key activities related to its electronic health record system until sometime after the December 31, 2016, statutory deadline for deploying modernized electronic health record software with interoperability. Specifically, the department stated that deployment of a modernized VistA system at all locations and for all users is not planned until 2018. 
Even as VA has undertaken numerous initiatives with DOD that were intended to advance electronic health record interoperability, a significant concern is that these departments have not identified outcome-oriented goals and metrics to clearly define what they aim to achieve from their interoperability efforts, and the value and benefits these efforts are expected to yield. As we have stressed in our prior work and guidance, assessing the performance of a program should include measuring its outcomes in terms of the results of products or services. In this case, such outcomes could include improvements in the quality of health care or clinician satisfaction. Establishing outcome-oriented goals and metrics is essential to determining whether a program is delivering value. The IPO is responsible for monitoring and reporting on VA’s and DOD’s progress in achieving interoperability and coordinating with the departments to ensure that these efforts enhance health care services. Toward this end, the office issued guidance that identified a variety of process-oriented metrics to be tracked, such as the percentage of health data domains that have been mapped to national standards. The guidance also identified metrics to be reported that relate to tracking the amounts of certain types of data being exchanged between the departments, using existing capabilities. This would include, for example, laboratory reports transferred from DOD to VA via the Federal Health Information Exchange and patient queries submitted by providers through the Bidirectional Health Information Exchange. Nevertheless, in our August 2015 report, we noted that the IPO had not specified outcome-oriented metrics and goals that could be used to gauge the impact of the interoperable health record capabilities on the departments’ health care services. 
At that time, the acting director of the IPO stated that the office was working to identify metrics that would be more meaningful, such as metrics on the quality of a user's experience or on improvements in health outcomes. However, the office had not established a time frame for completing the outcome-oriented metrics and incorporating them into the office's guidance. In the report, we stressed that using an effective outcome-based approach could provide the two departments with a more accurate picture of their progress toward achieving interoperability, and the value and benefits generated. Accordingly, we recommended that the departments, working with the IPO, establish a time frame for identifying outcome-oriented metrics; define related goals as a basis for determining the extent to which the departments' modernized electronic health record systems are achieving interoperability; and update IPO guidance accordingly. Both departments concurred with our recommendations. Further, since that time, VA has established a performance architecture program that has begun to define an approach for identifying outcome-oriented metrics focused on health outcomes in selected clinical areas, and it also has begun to establish baseline measurements. We intend to continue monitoring the department's efforts to determine how these metrics define and report on the results achieved by interoperability between the departments. Following the termination of the iEHR initiative, VA moved forward with an effort to modernize VistA separately from DOD's planned acquisition of a commercially available electronic health record system. The department took this course of action even though it has many health care business needs in common with those of DOD.
For example, in May 2010, VA (and DOD) issued a report on medical IT to Congressional committees that identified 10 areas—inpatient documentation, outpatient documentation, pharmacy, laboratory, order entry and management, scheduling, imaging and radiology, third-party billing, registration, and data sharing—in which the departments have common business needs. Further, the results of a 2008 study pointed out that over 97 percent of inpatient requirements for electronic health record systems are common to both departments. We also issued several prior reports regarding the plans for separate systems, in which we noted that the departments did not substantiate their claims that VA’s VistA modernization, together with DOD’s acquisition of a new system, would be achieved faster and at less cost than developing a single, joint system. Moreover, we noted that the departments’ plans to modernize their two separate systems were duplicative and stressed that their decisions should be justified by comparing the costs and schedules of alternate approaches. We recommended that VA and DOD develop cost and schedule estimates that would include all elements of their approach (i.e., modernizing both departments’ health information systems and establishing interoperability between them) and compare them with estimates of the cost and schedule for developing a single, integrated system. If the planned approach for separate systems was projected to cost more or take longer, we recommended that the departments provide a rationale for pursuing such an approach. VA, as well as DOD, agreed with our recommendations and stated that an initial comparison had indicated that the approach involving separate systems would be more cost effective. However, as of June 2016, the departments had not provided us with a comparison of the estimated costs of their current and previous approaches. 
Further, with respect to their assertions that separate systems could be achieved faster, both departments had developed schedules which indicated that their separate modernization efforts are not expected to be completed until after the 2017 planned completion date for the previous single-system approach. As VA has proceeded with its program to modernize VistA (known as VistA Evolution), the department has developed a number of plans to support its efforts. These include an interoperability plan and a road map describing functional capabilities to be deployed through fiscal year 2018. Specifically, these documents describe the department’s approach for modernizing its existing electronic health record system through the VistA Evolution program, while helping to facilitate interoperability with DOD’s system and the private sector. For example, the VA Interoperability Plan, issued in June 2014, describes activities intended to improve VistA’s technical interoperability, such as standardizing the VistA software across the department to simplify sharing data. In addition, the VistA 4 Roadmap, last revised in February 2015, describes four sets of functional capabilities that are expected to be incrementally deployed during fiscal years 2014 through 2018 to modernize the VistA system and enhance interoperability. According to the road map, the first set of capabilities was delivered by the end of September 2014 and included access to the Joint Legacy Viewer and a foundation for future functionality, such as an enhanced graphical user interface and enterprise messaging infrastructure. Another interoperable capability that is expected to be incrementally delivered over the course of the VistA modernization program is the enterprise health management platform. The department has stated that this platform is expected to provide clinicians with a customizable view of a health record that can integrate data from VA, DOD, and third-party providers. 
Also, when fully deployed, VA expects the enterprise health management platform to replace the Joint Legacy Viewer. However, a recent independent assessment of health IT at VA reported that lengthy delays in modernizing VistA had resulted in the system becoming outdated. Further, this study questioned whether the VistA Evolution program to modernize the electronic health record system can overcome a variety of risks and technical issues that have plagued prior VA initiatives of similar size and complexity. For example, the study raised questions regarding the lack of any clear advances made during the past decade and the increasing amount of time needed for VA to release new health IT capabilities. Given the concerns identified, the study recommended that VA assess the cost versus benefits of various alternatives for delivering the modernized capabilities, such as commercially available off-the-shelf electronic health record systems, open source systems, and the continued development of VistA. In speaking about this matter, VA’s Under Secretary for Health has asserted that the department will follow through on its plans to complete the VistA Evolution program in fiscal year 2018. However, the Chief Information Officer has also indicated that the department is taking a step back in reconsidering how best to meet its electronic health record system needs beyond fiscal year 2018. As such, VA’s approach to addressing its electronic health record system needs remains uncertain. In summary, VA’s approach to pursuing electronic health record interoperability with DOD has resulted in an increasing amount of standardized health data and has made an integrated view of that data available to department clinicians. Nevertheless, a modernized VA electronic health record system that is fully interoperable with DOD’s system is still years away. 
Thus, important questions remain about when VA intends to define the extent of interoperability it needs to provide the highest possible quality of care to its patients, as well as how and when the department intends to achieve this extent of interoperability with DOD. In addition, VA’s unsuccessful efforts over many years to modernize its VistA system raise concern about how the department can continue to justify the development and operation of an electronic health record system that is separate from DOD’s system, even though the departments have common system needs. Finally, VA’s recent reconsideration of its approach to modernizing VistA raises uncertainty about how it intends to accomplish this important endeavor. Chairman Kirk, Ranking Member Tester, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. If you or your staff have any questions about this testimony, please contact Valerie C. Melvin at (202) 512-6304 or melvinv@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony statement. GAO staff who made key contributions to this statement are Mark T. Bird (Assistant Director), Jennifer Stavros-Turner (Analyst in Charge), Rebecca Eyler, Nancy Glover, Jacqueline Mai, and Scott Pettis. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
VA operates one of the nation's largest health care systems, serving millions of veterans each year. For almost two decades, the department has undertaken a patchwork of initiatives with DOD to increase interoperability between their respective electronic health record systems. During much of this time, VA has also been planning to modernize its system. While the department has made progress in these efforts, it has also faced significant information technology challenges that contributed to GAO's designation of VA health care as a high risk area. This statement summarizes GAO's August 2015 report (GAO-15-530) on VA's efforts to achieve interoperability with DOD's electronic health records system. It also summarizes key content from GAO's reports on duplication, overlap, and fragmentation of federal government programs. Lastly, this statement provides updated information on VA's actions in response to GAO's recommendation calling for an interoperability and electronic health record system plan. Even as the Department of Veterans Affairs (VA) has undertaken numerous initiatives with the Department of Defense (DOD) that were intended to advance the ability of the two departments to share electronic health records, the departments have not identified outcome-oriented goals and metrics to clearly define what they aim to achieve from their interoperability efforts. In an August 2015 report, GAO recommended that the two departments establish a time frame for identifying outcome-oriented metrics, define related goals as a basis for determining the extent to which the departments' systems are achieving interoperability, and update their guidance accordingly. Since that time, VA has established a performance architecture program that has begun to define an approach for identifying outcome-oriented metrics focused on health outcomes in selected clinical areas and has begun to establish baseline measurements. 
GAO is continuing to monitor VA's and DOD's efforts to define metrics and report on the interoperability results achieved between the departments. Following an unsuccessful attempt to develop a joint system with DOD, VA switched tactics and moved forward with an effort to modernize its current system separately from DOD's planned acquisition of a commercially available electronic health record system. The department took this course of action even though, in May 2010, it identified 10 areas of health care business needs in common with those of DOD. Further, the results of a 2008 study pointed out that more than 97 percent of inpatient requirements for electronic health record systems are common to both departments. GAO noted that the departments' plans to separately modernize their systems were duplicative and recommended that their decisions should be justified by comparing the costs and schedules of alternate approaches. The departments agreed with GAO's recommendations and stated that their initial comparison indicated that separate systems would be more cost effective. However, the departments have not provided a comparison of the estimated costs of their current and previous approaches. Further, both departments developed schedules that indicated their separate modernization efforts will not be completed until after the 2017 planned completion date for the previous joint system approach. VA has developed a number of plans to support its development of its electronic health record system, called VistA, including a plan for interoperability and a road map describing functional capabilities to be deployed through fiscal year 2018. According to the road map, the first set of capabilities was delivered by the end of September 2014 and included a foundation for future functionality, such as an enhanced graphical user interface and enterprise messaging infrastructure. 
However, a recent independent assessment of health information technology (IT) at VA reported that lengthy delays in modernizing VistA had resulted in the system becoming outdated. Further, this study questioned whether the modernization program can overcome a variety of risks and technical issues that have plagued prior VA initiatives of similar size and complexity. Although VA's Under Secretary for Health has asserted that the department will complete the VistA Evolution program in fiscal year 2018, the Chief Information Officer has indicated that the department is reconsidering how best to meet its future electronic health record system needs. In prior reports, GAO has made numerous recommendations to VA to improve the modernization of its IT systems. Among other things, GAO has recommended that VA address challenges associated with interoperability, develop goals and metrics to determine the extent to which the modernized systems are achieving interoperability, and address shortcomings with planning. VA generally agreed with GAO's recommendations.
Social Security provides retirement, disability, and survivor benefits to insured workers and their dependents. Insured workers are eligible for reduced benefits at age 62 and full retirement benefits between age 65 and 67, depending on their year of birth. Social Security retirement benefits are based on the worker’s age and career earnings, are fully indexed for inflation after retirement, and replace a relatively higher proportion of wages for career low-wage earners. Social Security’s primary source of revenue is the Old-Age, Survivors, and Disability Insurance (OASDI) portion of the payroll tax paid by employers and employees. The OASDI payroll tax is 6.2 percent of earnings each for employers and employees, up to an established maximum. One of Social Security’s most fundamental principles is that benefits reflect the earnings on which workers have paid taxes. Social Security provides benefits that workers have earned to some degree because of their contributions and those of their employers. At the same time, Social Security helps ensure that its beneficiaries have adequate incomes and do not have to depend on welfare. Toward this end, Social Security’s benefit provisions redistribute income in a variety of ways—from those with higher lifetime earnings to those with lower ones, from those without dependents to those with dependents, from single earners and two-earner couples to one-earner couples, and from those who do not live very long to those who do. These effects result from the program’s focus on helping ensure adequate incomes. Such effects depend to a great degree on the universal and compulsory nature of the program. According to the Social Security trustees’ 2003 intermediate, or best-estimate, assumptions, Social Security’s cash flow is expected to turn negative in 2018. In addition, all of the accumulated Treasury obligations held by the trust funds are expected to be exhausted by 2042. 
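The payroll tax split described above is simple arithmetic. The sketch below is a minimal illustration, not SSA's actual computation; the $87,000 cap is the 2003 taxable maximum (the cap is adjusted annually), and the wage figures are hypothetical.

```python
# Minimal sketch of the OASDI payroll tax: employers and employees each
# pay 6.2 percent of earnings, up to an established maximum.
# The $87,000 cap is the 2003 taxable maximum, used here for illustration.
OASDI_RATE = 0.062
TAXABLE_MAXIMUM = 87_000

def oasdi_tax(annual_earnings: float) -> dict:
    """Return the employee and employer OASDI shares for one worker."""
    taxable = min(annual_earnings, TAXABLE_MAXIMUM)
    share = taxable * OASDI_RATE
    return {"employee": share, "employer": share, "combined": 2 * share}

print(oasdi_tax(50_000))   # earnings below the cap are taxed in full
print(oasdi_tax(120_000))  # earnings above the cap are not taxed
```

Because the tax stops at the taxable maximum, a worker earning $120,000 pays the same OASDI tax as one earning exactly $87,000.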
Social Security’s long-term financing shortfall stems primarily from the fact that people are living longer. As a result, the number of workers paying into the system for each beneficiary has been falling and is projected to decline from 3.3 today to about 2 by 2030. Reductions in promised benefits and/or increases in program revenues will be needed to restore the long-term solvency and sustainability of the program. About one-fourth of public employees do not pay Social Security taxes on the earnings from their government jobs. Historically, Social Security did not require coverage of government employees because they had their own retirement systems, and there was concern over the question of the federal government’s right to impose a tax on state governments. However, virtually all other workers are now covered, including the remaining three-fourths of public employees. The 1935 Social Security Act mandated coverage for most workers in commerce and industry, which at that time comprised about 60 percent of the workforce. Subsequently, the Congress extended mandatory Social Security coverage to most of the excluded groups, including state and local employees not covered by a public pension plan. The Congress also extended voluntary coverage to state and local employees covered by public pension plans. Since 1983, however, public employers have not been permitted to withdraw from the program once they are covered. Also, in 1983, the Congress extended mandatory coverage to newly hired federal workers. The Social Security Administration (SSA) estimates that 5.25 million state and local government employees, excluding students and election workers, are not covered by Social Security. SSA also estimates that annual wages for these noncovered employees totaled about $171 billion in 2002. In addition, 1 million federal employees hired before 1984 are also not covered. 
Seven states—California, Colorado, Illinois, Louisiana, Massachusetts, Ohio, and Texas—account for more than 75 percent of the noncovered payroll. Most full-time public employees participate in defined benefit pension plans. Minimum retirement ages for full benefits vary; however, many state and local employees can retire with full benefits at age 55 with 30 years of service. Retirement benefits also vary, but they are usually based on a specified benefit rate for each year of service and the member’s final average salary over a specified time period, usually 3 years. For example, plans with a 2-percent rate replace 60 percent of a member’s final average salary after 30 years of service. In addition to retirement benefits, a 1994 U.S. Department of Labor survey found that all members have a survivor annuity option, 91 percent have disability benefits, and 62 percent receive some cost-of-living increases after retirement. In addition, in recent years, the number of defined-contribution plans, such as 401(k) plans and the Thrift Savings Plan for federal employees, has been growing, and such plans have become a relatively more common way for employers to offer pension benefits; public employers are no exception to this trend. Even though noncovered employees may have many years of earnings on which they do not pay Social Security taxes, they can still be eligible for Social Security benefits based on their spouses’ or their own earnings in covered employment. SSA estimates that 95 percent of noncovered state and local employees become entitled to Social Security as workers, spouses, or dependents. Their noncovered status complicates the program’s ability to target benefits in the ways it is intended to do. To address the fairness issues that arise with noncovered public employees, Social Security has two provisions—GPO, which addresses spouse and survivor benefits, and WEP, which addresses retired worker benefits. 
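The typical defined benefit formula described above (a benefit rate for each year of service applied to final average salary) can be sketched as follows; the $60,000 final average salary is hypothetical.

```python
def db_pension_benefit(benefit_rate: float, years_of_service: int,
                       final_average_salary: float) -> float:
    """Annual pension under the typical plan formula: a benefit rate
    per year of service times final average salary."""
    return benefit_rate * years_of_service * final_average_salary

# The example from the text: a 2-percent rate after 30 years of service
# replaces 60 percent of final average salary.
benefit = db_pension_benefit(0.02, 30, 60_000)
print(benefit)            # annual pension amount
print(benefit / 60_000)   # replacement rate: 2% x 30 years = 60%
```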
Both provisions depend on having complete and accurate information that has proven difficult to get. Also, both provisions are a source of confusion and frustration for public employees and retirees. As a result, proposals have been offered to revise or eliminate both provisions. Under the GPO provision, enacted in 1977, SSA must reduce Social Security benefits for those receiving noncovered government pensions when their entitlement to Social Security is based on another person’s (usually their spouse’s) Social Security coverage. Their Social Security benefits are to be reduced by two-thirds of the amount of their government pension. Under the WEP, enacted in 1983, SSA must use a modified formula to calculate the Social Security benefits people earn when they have had a limited career in covered employment. This formula reduces the amount of payable benefits. Regarding GPO, spouse and survivor benefits were intended to provide some Social Security protection to spouses with limited working careers. The GPO provision reduces spouse and survivor benefits to persons who do not meet this limited working career criterion because they worked long enough in noncovered employment to earn their own pension. Regarding WEP, the Congress was concerned that the design of the Social Security benefit formula provided unintended windfall benefits to workers who spent most of their careers in noncovered employment. The formula replaces a higher portion of preretirement Social Security-covered earnings when people have low average lifetime earnings than it does when people have higher average lifetime earnings. People who work exclusively, or have lengthy careers, in noncovered employment appear on SSA’s earnings records as having no covered earnings or a low average of covered lifetime earnings. 
As a result, people with this type of earnings history benefit from the advantage given to people with low average lifetime earnings when in fact their total (covered plus noncovered) lifetime earnings were higher than they appear to be for purposes of calculating Social Security benefits. Both GPO and WEP apply only to those beneficiaries who receive pensions from noncovered employment. To administer these provisions, SSA needs to know whether beneficiaries receive such noncovered pensions. However, our prior work found that SSA lacks payment controls and is often unable to determine whether applicants should be subject to GPO or WEP because it has not developed any independent source of noncovered pension information. In that report, we estimated that failure to reduce benefits for federal, state, and local employees caused $160 million to $355 million in overpayments between 1978 and 1995. In response to our recommendation, SSA performed additional computer matches with the Office of Personnel Management to get noncovered pension data for federal retirees in order to ensure that these provisions are applied. These computer matches detected payment errors; correcting these errors will generate hundreds of millions of dollars in savings, according to our estimates. Also, in that report, we recommended that SSA work with the Internal Revenue Service (IRS) to revise the reporting of pension information on IRS Form 1099R, so that SSA would be able to identify people receiving a pension from noncovered employment, especially in state and local governments. However, IRS does not believe it can make the recommended change without new legislative authority. Given that one of our recommendations was implemented but not the other, SSA now has better access to information for federal employees but not for state and local employees. As a result, SSA cannot apply GPO and WEP for state and local government employees to the same degree that it does for federal employees. 
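The two reductions discussed above can be sketched roughly as follows. The two-thirds GPO offset is as stated in the text; the benefit-formula details (bend points near their 2003 levels, and a WEP cut of the 90 percent factor to as low as 40 percent) are illustrative assumptions rather than figures from this testimony, and the real computation has additional rules not shown here.

```python
def gpo_reduced_spousal(spousal_benefit: float, government_pension: float) -> float:
    """GPO: cut the Social Security spouse/survivor benefit by two-thirds
    of the noncovered government pension, but not below zero."""
    return max(0.0, spousal_benefit - (2.0 * government_pension) / 3.0)

def monthly_pia(aime: float, first_factor: float = 0.90) -> float:
    """Weighted benefit formula applied to average indexed monthly
    earnings (AIME). Bend points are illustrative 2003-era values;
    under WEP the 90 percent factor can fall to as little as 40 percent."""
    b1, b2 = 606.0, 3653.0  # illustrative bend points (assumed)
    pia = first_factor * min(aime, b1)
    if aime > b1:
        pia += 0.32 * (min(aime, b2) - b1)
    if aime > b2:
        pia += 0.15 * (aime - b2)
    return pia

print(gpo_reduced_spousal(900, 1200))        # $900 benefit less 2/3 of a $1,200 pension
print(monthly_pia(900))                      # regular formula, low covered AIME
print(monthly_pia(900, first_factor=0.40))   # WEP-modified formula, smaller benefit
```

The comparison illustrates the windfall concern: a low covered AIME gets the generous 90 percent first-tier factor under the regular formula, so WEP's lower factor shrinks the benefit for workers whose low covered earnings reflect a noncovered career rather than low lifetime earnings.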
To address issues such as these, the President’s budget proposes “to increase Social Security payment accuracy by giving SSA the ability to independently verify whether beneficiaries have pension income from employment not covered by Social Security.” In addition to facing administrative challenges, GPO and WEP have also faced criticism regarding their design in the law. For example, GPO does not apply if an individual’s last day of state/local employment is in a position that is covered by Social Security. This GPO “loophole” raises fairness and equity concerns. In the states we visited for a previous report, individuals with a relatively minimal investment of work time and Social Security contributions gained access to potentially many years of full Social Security spousal benefits. To address this issue, the House recently passed legislation that provides for a longer minimum time period in covered employment. At the same time, GPO and WEP have been a source of confusion and frustration for the roughly 6 million workers and nearly 1 million beneficiaries they affect. Critics of the measures contend that they are basically inaccurate and often unfair. For example, some opponents of WEP argue that the formula adjustment is an arbitrary and inaccurate way to estimate the value of the windfall and causes a relatively larger benefit reduction for lower-paid workers. A variety of proposals have been offered to either revise or eliminate them. While we have not studied these proposals in detail, I would like to offer a few observations to keep in mind as you consider them. First, repealing these provisions would be costly in an environment where the Social Security trust funds already face long-term solvency issues. According to SSA and the Congressional Budget Office (CBO), proposals to reduce the number of beneficiaries subject to GPO would cost $5 billion or more over the next 10 years and increase Social Security’s long-range deficit by up to 1 percent. 
Eliminating GPO entirely would cost $21 billion over 10 years and increase the long-range deficit by about 3 percent. Similarly, a proposal that would reduce the number of beneficiaries subject to WEP would cost $19 billion over 10 years, and eliminating WEP would increase Social Security’s long-range deficit by 3 percent. Second, in thinking about the fairness of the provisions and whether or not to repeal them, it is important to consider both the affected public employees and all other workers and beneficiaries who pay Social Security taxes. For example, SSA has described GPO as a way to treat spouses with noncovered pensions in a fashion similar to how it treats dually entitled spouses, who qualify for Social Security benefits both on their own work records and their spouses’. In such cases, each spouse may not receive both the benefits earned as a worker and the full spousal benefit; rather, the worker receives the higher amount of the two. If GPO were eliminated or reduced for spouses who had paid little or no Social Security taxes on their lifetime earnings, it might be reasonable to ask whether the same should be done for dually entitled spouses who have paid Social Security on all their earnings. Far more spouses are subject to the dual-entitlement offset than to GPO; as a result, the costs of eliminating the dual-entitlement offset would be commensurately greater. Aside from the issues surrounding GPO and WEP, another aspect of the relationship between Social Security and public employees is the question of mandatory coverage. Making coverage mandatory has been proposed in the past to help address the program’s financing problems. According to Social Security actuaries, doing so would reduce the 75-year actuarial deficit by 10 percent. Mandatory coverage could also enhance inflation protection for the affected beneficiaries, improve portability, and add dependent benefits in many cases. 
However, to provide for the same level of retirement income, mandatory coverage could increase costs for the state and local governments that would sponsor the plans. Moreover, if coverage were extended primarily to new state and local employees, GPO and WEP would continue to apply for many years to come for existing employees and beneficiaries even though they would become obsolete in the long run. While Social Security’s solvency problems have triggered an analysis of the impact of mandatory coverage on program revenues and expenditures, the inclusion of such coverage in a comprehensive reform package would need to be grounded in other considerations. In recommending that mandatory coverage be included in the reform proposals, the 1994-1996 Social Security Advisory Council stated that mandatory coverage is basically “an issue of fairness.” The Advisory Council’s report noted that “an effective Social Security program helps to reduce public costs for relief and assistance, which, in turn, means lower general taxes. There is an element of unfairness in a situation where practically all contribute to Social Security, while a few benefit both directly and indirectly but are excused from contributing to the program.” The impact on public employers, employees, and pension plans would depend on how states and localities with noncovered employees would react to mandatory coverage. Many public pension plans currently offer a lower retirement age and higher retirement income benefit than Social Security. For example, many public employees, especially police and firefighters, retire before they are eligible for full Social Security benefits; new plans that include Social Security coverage might provide special supplemental benefits for those who retire before they could receive Social Security benefits. 
Social Security, on the other hand, offers automatic inflation protection, full benefit portability, and dependent benefits, which are not available in many public pension plans. Costs could increase by as much as 11 percent of payroll for those states and localities, depending on the benefit package of the new plans that would include Social Security coverage. Alternatively, states and localities that wanted to maintain level spending for retirement would likely need to reduce some pension benefits. Additionally, states and localities could require several years to design, legislate, and implement changes to current pension plans. Finally, mandating Social Security coverage for state and local employees could elicit a constitutional challenge. There are no easy answers to the difficulties of equalizing Social Security’s treatment of covered and noncovered workers. Any reductions in GPO or WEP would ultimately come at the expense of other Social Security beneficiaries and taxpayers. Mandating universal coverage would promise the eventual elimination of GPO and WEP but at potentially significant cost to affected state and local governments, and even so GPO and WEP would continue to apply for some years to come, unless they were repealed. Whatever the decision, it will be important to administer all elements of the Social Security program effectively and equitably. GPO and WEP have proven difficult to administer because they depend on complete and accurate reporting of government pension income, which is not currently achieved. The resulting disparities in the application of these two provisions are yet another source of unfairness in the final outcome. We have made recommendations to the Internal Revenue Service to provide for complete and accurate reporting, but it has responded that it lacks the necessary authority from the Congress. We therefore take this opportunity to bring the matter to the Subcommittee’s attention for consideration. 
To facilitate complete and accurate reporting of government pension income, the Congress should consider giving IRS the authority to collect this information, which could perhaps be accomplished through a simple modification to a single form. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions you or other members of the Subcommittee may have. For information regarding this testimony, please contact Barbara D. Bovbjerg, Director, Education, Workforce, and Income Security Issues, on (202) 512-7215. Individuals who made key contributions to this testimony include Daniel Bertoni and Ken Stockbridge.
Social Security covers about 96 percent of all U.S. workers; the vast majority of the rest are state, local, and federal government employees. While these noncovered workers do not pay Social Security taxes on their government earnings, they may still be eligible for Social Security benefits. This poses difficult issues of fairness, and Social Security has provisions that attempt to address those issues, but critics contend these provisions are themselves often unfair. Congress asked GAO to discuss these provisions as well as the implications of mandatory coverage for public employees. Social Security's provisions regarding public employees are rooted in the fact that about one-fourth of them do not pay Social Security taxes on the earnings from their government jobs, for various historical reasons. Even though noncovered employees may have many years of earnings on which they do not pay Social Security taxes, they can still be eligible for Social Security benefits based on their spouses' or their own earnings in covered employment. To address the issues that arise with noncovered public employees, Social Security has two provisions--the Government Pension Offset (GPO), which affects spouse and survivor benefits, and the Windfall Elimination Provision (WEP), which affects retired worker benefits. Both provisions reduce Social Security benefits for those who receive noncovered pension benefits. Both provisions also depend on having complete and accurate information on receipt of such noncovered pension benefits. However, such information is not available for many state and local pension plans, even though it is for federal pension benefits. As a result, GPO and WEP are not applied consistently for all noncovered pension recipients. In addition to the administrative challenges, these provisions are viewed by some as confusing and unfair, and a number of proposals have been offered to either revise or eliminate GPO and WEP. 
Such actions, while they may reduce confusion among affected workers, would increase the long-range Social Security trust fund deficit and could create fairness issues for workers who have contributed to Social Security throughout their working lifetimes. Making coverage mandatory has been proposed to help address the program's financing problems, and doing so could ultimately eliminate the need for the GPO and the WEP. According to Social Security actuaries, mandatory coverage would reduce the 75-year actuarial deficit by 10 percent. However, to provide for the same level of retirement income, mandating coverage would increase costs for the state and local governments that would sponsor the plans. Moreover, GPO and WEP would still be needed for many years to come even though they would become obsolete in the long run.
The Congress passed the Communications Satellite Act of 1962 to promote the creation of a global satellite communications system. As a result of this legislation, the United States joined with 84 other nations in establishing the International Telecommunications Satellite Organization—more commonly known as INTELSAT—roughly 10 years later. Each member nation designated a single telecommunications company to represent its country in the management and financing of INTELSAT. These companies were called “signatories” to INTELSAT and were typically government-owned telecommunications companies, such as France Telecom, that provided satellite communications services as well as other domestic communications services. Unlike any of the other nations that originally formed INTELSAT, the United States designated a private company, Comsat Corporation, to serve as its signatory to INTELSAT. The ORBIT Act, enacted by the Congress in March 2000, was designed to promote a competitive global satellite communication services market. The act did so primarily by calling for the privatization of INTELSAT after about three decades of operation as an intergovernmental entity. The ORBIT Act required, for example, that INTELSAT be transformed into a privately held, for-profit corporation with a board of directors that would be largely independent of former INTELSAT signatories. Moreover, the act required that the newly privatized Intelsat retain no privileges or other benefits from governments that had previously owned or controlled it. To ensure that this transformation occurred, the Congress imposed certain restrictions on the granting of licenses that allow Intelsat to provide services within the United States. The Congress coupled the issuance of licenses granted by FCC to INTELSAT’s successful privatization under the ORBIT Act. 
That is, FCC was told to consider compliance with provisions of the ORBIT Act as it made decisions about licensing Intelsat’s domestic operations in the United States. Moreover, FCC was empowered to restrict any satellite operator’s provision of certain new services from the United States to any country that limited market access exclusively to that satellite operator. When satellite technology first emerged as a vehicle for commercial international communications, deploying a global satellite system was both risky and expensive. Worldwide organizations were considered the best means for providing satellite-based services throughout the world. When INTELSAT was established, the member governments put in place a number of protections to encourage its development. In essence, INTELSAT was created as an international monopoly—with little competition to its international services allowed by other satellite systems, although domestic and other satellite systems were allowed under certain conditions. As such, during the 1970s and early 1980s, INTELSAT was the only wholesale provider of certain types of global satellite communications services such as international telephone calls and relay of television signals internationally. As satellite technology advanced, it became economically more feasible for private companies to develop global satellite systems. This occurred in part because of growing demand for communications services as well as falling costs for satellite system equipment. In particular, some domestic systems that were already in operation expressed interest in expanding into global markets. By the mid-1980s, the United States began encouraging the development of commercial satellite communications systems that would compete with INTELSAT. To do so under the INTELSAT treaty agreements, President Reagan determined that competing international satellite systems were required in the national interest of the United States. 
After that determination, domestic purchasers of international satellite communications services were allowed to use systems other than INTELSAT. In 1988, PanAmSat was the first commercial company to begin launching satellites in an effort to develop a global satellite system. Within a decade after PanAmSat first entered the market, INTELSAT faced other global satellite competitors. Moreover, intermodal competition emerged during the 1980s and 1990s as fiber optic networks were widely deployed on the ground and underwater to provide international communications services. As competition to INTELSAT grew throughout the 1990s, commercial satellite companies became concerned that INTELSAT enjoyed certain advantages stemming from its intergovernmental status. In particular, the new satellite companies noted that INTELSAT enjoyed immunity from legal liability and was often not taxed in the various countries it served. Additionally, new competitors noted that the signatories to INTELSAT in many countries were typically government-owned telecommunications companies, and many were the regulatory authorities that made decisions on satellite access to their respective domestic markets. As such, new satellite companies were concerned that those entities, because of their ownership stake in INTELSAT as signatories, might favor INTELSAT and thus render entry for other satellite companies more difficult. Because of these concerns, competitors began to argue that the satellite marketplace would not become fully competitive unless INTELSAT became a private company that operated like any other company and no longer enjoyed any advantages. During the same time frame, some of the signatories to INTELSAT came to believe that certain of INTELSAT’s obligations as an intergovernmental entity impeded its own market competitiveness. 
For example, decision-makers within INTELSAT became concerned that the cumbersome nature of the intergovernmental decision-making process left the company unable to rapidly respond to changing market conditions—a disadvantage in comparison with competing private satellite providers. In 1999, INTELSAT announced its decision to become a private corporation, but to leave in place a residual intergovernmental organization that would monitor the privatized Intelsat’s remaining public service obligations. On July 18, 2001, INTELSAT transferred virtually all of its financial assets and liabilities to a private company called Intelsat, Ltd., a holding company incorporated in Bermuda. Intelsat, Ltd. has several subsidiaries, including a U.S.-incorporated indirect subsidiary called Intelsat LLC. Upon execution of the privatization, INTELSAT signatories received shares of Intelsat, Ltd. in proportion to their investment in the intergovernmental INTELSAT. Two months before the privatization, FCC determined that INTELSAT’s privatization plan was consistent with the requirements of the ORBIT Act for a variety of reasons, including the following:

- Intelsat, Ltd.’s Shareholders’ Agreement provided sufficient evidence that the company would conduct an initial public offering (IPO).
- Intelsat, Ltd. no longer enjoyed the legal privileges or immunities of the intergovernmental INTELSAT.
- Both Intelsat, Ltd. and Intelsat LLC are incorporated in countries that are signatories to the World Trade Organization (WTO) and have laws that secure competition in telecommunications services.
- Intelsat, Ltd. converted into a stock corporation with a fiduciary board of directors.
- Measures were taken to ensure that a majority of the members of Intelsat, Ltd.’s Board of Directors were not directors, employees, officers, managers, or representatives of any signatory or former signatory of the intergovernmental INTELSAT.
- Intelsat, Ltd. and its subsidiaries had only arm’s-length business relationships with certain other entities that obtained INTELSAT’s assets.

In light of these findings, FCC conditionally authorized Intelsat LLC to use its U.S. satellite licenses to provide services within the United States. However, FCC conditioned this authorization on Intelsat, Ltd. conducting an IPO of securities as mandated by the ORBIT Act. In the past year, however, several changes have occurred that alter the circumstances and requirements associated with Intelsat’s IPO. On August 16, 2004, Intelsat, Ltd. announced that its Board of Directors approved the sale of the company to a consortium of four private investors. According to an Intelsat official, this transaction, which was completed on January 28, 2005, eliminates former signatories’ ownership in Intelsat. Additionally, on October 25, 2004, the President signed legislation modifying the requirements for privatization in the ORBIT Act. Specifically, Intelsat, Ltd. may forgo an IPO under certain conditions, including, among other things, certifying to FCC that it has achieved substantial dilution of the aggregate amount of signatory or former signatory financial interest in the company. FCC is still reviewing this transaction to determine whether Intelsat has met the requirements of the ORBIT Act as amended and thus is no longer required to hold an IPO. According to most stakeholders and experts we spoke with, access to non-U.S. satellite markets has generally improved during the past decade, which they generally attribute to global trade agreements and privatization trends. In particular, global satellite companies appear less likely now than they were in the past to encounter government restraints or business practices that limit their ability to provide service in non-U.S. markets. Satellite companies and experts we spoke with generally indicated that access to non-U.S. satellite markets has improved. 
Additionally, most stakeholders attributed this improved access to global trade agreements that helped to open telecommunications markets around the world, as well as to the trend toward privatization in the global telecommunications industry. At the same time, many stakeholders noted that the ORBIT Act had little to no impact on improving market access. According to several stakeholders, market access was already improving when the ORBIT Act was passed. Despite the general view that market access has improved, some satellite companies and experts expressed concerns that market access issues still exist. These remaining market access problems were attributed to foreign government policies that limit or slow satellite competitors’ access to certain markets. For example:

- Some companies and experts we spoke with said that some countries have policies that favor domestic satellite providers over other satellite systems and that this can make it difficult for nondomestic companies to provide services in these countries.
- Some companies and one expert we spoke with said that because some countries carefully control and monitor the content that is provided within their borders, the country’s policies may limit certain satellite companies’ access to their market.
- Several companies and an expert we interviewed said that many countries have time-consuming or costly approval processes for satellite companies.

In addition to these government policies, some stakeholders believe that Intelsat may benefit from legacy business relationships. Since INTELSAT was the dominant provider of global satellite services for approximately 30 years, several stakeholders noted that Intelsat may benefit from the long-term business relationships that were forged over time, as telecommunications companies in many countries may feel comfortable continuing to do business with Intelsat as they have for years. 
Additionally, two stakeholders noted that because companies have plant and equipment as well as proprietary satellite technology in place to receive satellite services from Intelsat, it might cost a significant amount of money for companies to replace equipment in order to use satellite services from a different provider. Alternatively, representatives of Intelsat, Ltd. told us that Intelsat seeks market access on a transparent and nondiscriminatory basis and that Intelsat has participated with other satellite operators, through various trade organizations, to lobby governments to open their markets. Further, some companies and many of the experts we interviewed told us that, in their view, Intelsat does not have preferential access to non-U.S. satellite markets and that they have no knowledge that Intelsat in any way seeks or accepts exclusive market access arrangements or attempts to block competitors’ access to non-U.S. satellite markets. Finally, some of the companies we spoke with believe that FCC should take a more proactive role in improving access for satellite companies in non-U.S. markets. For example, one satellite company said that section 648 of the ORBIT Act, which prohibits any satellite operator from acquiring or enjoying an exclusive arrangement for service to or from the United States, provides a vehicle for FCC to investigate the status of access for satellite companies to other countries’ markets. Conversely, FCC officials told us they do not believe that FCC should undertake investigations of market access concerns without specific evidence of violations of section 648 of the ORBIT Act. While some comments filed with FCC in proceedings on Intelsat’s licensing and for FCC’s annual report on the ORBIT Act raise concerns about market access, FCC has stated that these filings amount only to general allegations and fall short of alleging any specific statutory violation that would form a basis sufficient to trigger an FCC enforcement action. Mr. 
Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For questions regarding this testimony and the report on which it is based, please contact JayEtta Z. Hecker at (202) 512-2834 or heckerj@gao.gov, or Mark L. Goldstein at (202) 512-2834 or goldsteinm@gao.gov. Individuals making key contributions to this testimony included Amy Abramowitz, Michael Clements, Emil Friberg, Bert Japikse, Logan Kleier, Richard Seldin, and Juan Tapia-Videla. Tax Policy: Historical Tax Treatment of INTELSAT and Current Tax Rules for Satellite Corporations. GAO-04-994. Washington, D.C.: September 13, 2004. Telecommunications: Intelsat Privatization and the Implementation of the ORBIT Act. GAO-04-891. Washington, D.C.: September 13, 2004. Telecommunications: Competition Issues in International Satellite Communications. GAO/RCED-97-1. Washington, D.C.: October 11, 1996. Telecommunications: Competitive Impact of Restructuring the International Satellite Organizations. GAO/RCED-96-204. Washington, D.C.: July 8, 1996. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2000, the Congress passed the Open-market Reorganization for the Betterment of International Telecommunications Act (ORBIT Act) to help promote a more competitive global satellite services market. The ORBIT Act called for the full privatization of INTELSAT, a former intergovernmental organization that provided international satellite services. In this testimony, GAO discusses (1) the impetus for the privatization of Intelsat as competition developed in the 1990s, (2) the extent to which the privatization steps required by the ORBIT Act have been implemented, and (3) whether access by global satellite companies to non-U.S. markets has improved since the enactment of the ORBIT Act. When commercial satellite technology was first deployed, a worldwide system was seen as the most efficient means to facilitate the advancement of a fully global provider. INTELSAT was thus established as an intergovernmental entity, originally established by 85 nations, that was protected from competition in its provision of global satellite communications services. By the 1980s, however, technology developments enabled private companies to efficiently compete for global communications services, and in 1984, President Reagan determined that it would be in the national interest of the United States for there to be greater competition in this market. New commercial satellite systems emerged, but soon found that INTELSAT enjoyed advantages stemming from its intergovernmental status and ownership by telecommunications companies in other countries that impeded new satellite companies from effectively competing. The new satellite companies began to call for INTELSAT to be privatized. Decision makers within INTELSAT also determined that privatization would enable more rapid business decisions. Just prior to INTELSAT's privatization in July 2001, FCC determined that INTELSAT's privatization plan was consistent with requirements of the ORBIT Act. 
The Federal Communications Commission (FCC) thus authorized the privatized Intelsat--the official name of the company after privatization--to use its U.S. satellite licenses to provide services within the United States pending an initial public offering (IPO) of securities that was mandated by the ORBIT Act to occur at a later time. New legislation was passed in 2004 that allows Intelsat to forgo an IPO if it has achieved substantial dilution of its "signatory" ownership--that is, dilution of ownership by those entities (mostly government-controlled telecommunications companies) that had been the investors in INTELSAT when it was an intergovernmental entity. Since Intelsat has recently been sold to a consortium of four private investors, it no longer has, according to an Intelsat official, any former signatory ownership. FCC is still reviewing this transaction to determine whether Intelsat has met the requirements of the ORBIT Act as amended and thus is no longer required to hold an IPO. Most of the stakeholders we spoke with said that access to non-U.S. satellite markets has generally improved during the past decade. This improvement in market access is generally attributed to global trade agreements and privatization trends. Despite this general view, some satellite companies expressed concerns that some market access issues still exist. For example, some companies noted that some countries may favor domestic satellite providers or may choose to continue obtaining service from Intelsat because of long-term business relationships that were forged over time. Nevertheless, Intelsat officials noted that the company seeks market access on a transparent and nondiscriminatory basis and that Intelsat has participated with other satellite operators, through various trade organizations, to lobby governments to open their markets.
Between 2006 and 2011, Congress initiated two significant efforts to increase public awareness of, and access to, federal spending data: the Federal Funding Accountability and Transparency Act of 2006 (FFATA), and the American Recovery and Reinvestment Act of 2009 (Recovery Act). Both acts mandated the creation of public-access websites, which involved a broad range of data-collection and data-reporting activities, and required OMB and federal agencies, among others, to address multiple levels of accountability and transparency. The passage of FFATA was part of a series of legislative and executive branch efforts to make comprehensive data on federal awards available to the public. Congress passed FFATA in 2006 to increase the transparency of and accountability for the more than $1 trillion in contracts and financial assistance awarded annually by federal agencies. Among other things, the act required OMB to establish a free, publicly accessible website containing data on federal awards (e.g., contracts, loans, and grants) no later than January 1, 2008. In addition, the act required OMB to include data on subawards by January 1, 2009, authorized OMB to provide guidance and instruction to agencies to ensure the existence and operation of the website, and required agencies to comply with that guidance. OMB launched the website—www.USAspending.gov—in December 2007. However, in 2010, we reported that the award data in USAspending.gov were not always complete or reliable. FPDS-NG, a contract database, is one of the main sources of USAspending.gov data. In crafting the Recovery Act, Congress and the administration envisioned an unprecedented level of transparency into federal spending data. 
The act required recipients of Recovery Act funds to submit quarterly reports with information on each project or activity, including the amount and use of funds and an estimate of the number of jobs created and the number of jobs retained. Similar to FFATA, the Recovery Act called for the establishment of a website that would give the public access to information on the many projects and activities funded under the act. The Recovery Accountability and Transparency Board launched the Recovery.gov site in 2009 to fulfill these requirements. In addition, a second site—www.FederalReporting.gov—was established for recipients to report their data. Recipients first reported in early October 2009 on the period from February through September 2009. Reporting has continued for each quarter since then. More recently, in June 2011, the administration issued Executive Order 13576, “Delivering an Efficient, Effective, and Accountable Government.” The order, among other things, established the GAT Board to provide strategic direction for enhancing transparency of federal spending data. The GAT Board was also charged with advancing efforts to detect fraud, waste, and abuse of federal programs. Its 11 members include agency inspectors general, agency chief financial officers, a senior OMB official, and other such members as the President shall designate. The GAT Board is mandated to work with the Recovery Board to build on lessons learned from the Recovery Act’s implementation. USAspending.gov and Recovery.gov rely on different sources of information and make different types of data available to the public. USAspending.gov provides information on federal award obligations, including the recipient’s name, funding agency, amount of award, and descriptive title. It relies primarily on data submitted by federal agencies and, to some extent, by recipients. In addition, agencies use different reporting platforms to submit information about contract and grant awards. 
In contrast, Recovery.gov, which relies primarily on information submitted by recipients, provides information on federal award expenditures, including information on each project or activity funded, the amount and use of funds, and an estimate of the jobs funded. The USAspending.gov website draws data from different data sources, as shown in figure 2.

- The Federal Procurement Data System-Next Generation: Procurement data are imported from this system, which collects information on contract actions, procurement trends, and achievement of socioeconomic goals, such as small business participation. OMB was responsible for establishing the system, and GSA administers it. Since 1980, FPDS-NG and its predecessor FPDS have been the primary government-wide databases for contracting information. Federal agencies are responsible for ensuring the information reported in this database is complete and accurate.

- The Data Submission and Validation Tool: Data on financial assistance awards (grants, loans, loan guarantees, cooperative agreements, and other assistance) are provided by federal agencies and are transmitted directly to GSA via this tool. As with FPDS-NG, federal agencies are responsible for ensuring the information reported in this database is complete and accurate.

- The FFATA Subaward Reporting System: Contractors and grant recipients use this tool to capture and report subaward and executive compensation data regarding their first-tier subawards to meet the FFATA reporting requirements. Contractors and grant recipients are required to file a report within a specified timeframe after making a subaward greater than $25,000, and they are responsible for ensuring information reported to this database is complete and accurate.

In contrast to USAspending.gov, Recovery.gov’s data are collected from federal fund recipients. Section 1512 of the Recovery Act requires recovery fund recipients to report quarterly on Recovery Act-related spending. 
Recipients provide their information to the agency through FederalReporting.gov. Agencies then review the data provided. The validated data are then published on Recovery.gov, as illustrated by figure 3. The GAT Board’s role is to provide strategic direction for enhancing federal spending transparency. Along with OMB and the Recovery Board, it oversees several ongoing government-wide initiatives designed to expand the transparency of federal spending data. As part of its role to provide strategic direction, the GAT Board established four work groups in 2012 and 2013 (see figure 4). These groups are charged with developing approaches for improving transparency across three functional areas—procurement, grants, and financial management—and expanding data availability to improve spending oversight. Work group members represent the federal procurement, grants, financial management, and oversight communities, and the groups are set up to leverage the collective expertise of several interagency forums.

- Procurement Data Standardization and Integrity Working Group: This work group was established to identify approaches for standardizing contract data elements and electronic transactions to ensure data are accurate and contract transactions can be tracked from purchase order through vendor payment. As the federal government’s largest contracting agency, DOD is a lead agency on this work group, along with members of OMB’s Office of Federal Procurement Policy (OFPP). The work group’s initiative grew out of OMB’s and DOD’s long-standing efforts to improve the accuracy of contract data that agencies submit to FPDS-NG. Members also include representatives from the Chief Acquisition Officers Council, an interagency forum of agency acquisition officers.

- Grants Data Standardization and Integrity Working Group: This work group has been tasked with developing approaches to standardize grants data elements to achieve greater consistency across the federal government. HHS, along with OMB’s Office of Federal Financial Management (OFFM), provides leadership to this work group. Members also include representatives from the newly established Council on Financial Assistance Reform (COFAR). According to HHS officials, the group’s efforts build on the agency’s prior work with the Grants Policy Council and the Grants Executive Board to standardize and streamline grant procedures.

- Financial Management Integration and Data Integrity Working Group: Through this work group, the GAT Board, in conjunction with Treasury, is examining approaches for linking the financial management data maintained in agency financial systems with agency awards data in order to improve the quality of data displayed to the public. The GAT Board established this work group to align with Treasury’s ongoing efforts to define its data vision and approach, including a proposal to move the administrative responsibility for USAspending.gov from GSA to Treasury. While the GAT Board is responsible for setting direction and developing strategy, it is leveraging Treasury’s ongoing modernization efforts with assistance from OFFM.

- Data Analytics Working Group: This work group was formed in response to the executive order establishing the GAT Board, which required the board to advance efforts to detect and remediate fraud, waste, and abuse in federal programs. The group is under the direction of the Inspector General of the United States Postal Service and has representatives from the Recovery Board. The group also shares information about its activities with the Council of the Inspectors General on Integrity and Efficiency. The group’s goal is to expand on the Recovery Board’s Recovery Operation Center (ROC) for improving fraud detection in federal spending.

The GAT Board, through its working groups, is in the process of determining approaches for carrying out its mission. However, its mandate is only to develop strategy, not to implement it. 
The GAT Board relies on the working groups’ lead agencies to develop recommendations and implement approaches that it has approved. Moreover, with no dedicated funding, the GAT Board’s strategic plan is short-term and calls for an incremental approach that builds upon ongoing agency initiatives. These initiatives include efforts to modernize systems or improve agency management, designed to improve existing business processes as well as improve data transparency. The GAT Board’s initial plans largely focus on efforts at the federal level, and some progress has been made to bring greater consistency to award identifiers. Data standardization and a uniform convention for identifying contract and grant awards throughout their life cycle are the first steps in ensuring data quality and tracking spending data. The GAT Board’s December 2011 Report to the President notes that introducing greater consistency into the award process will help better reconcile spending information from multiple sources and allow for more effective analysis and oversight. Currently, efforts under way are aimed at introducing more consistency into the way federal spending data are reported, collected, and publicly displayed. Initial efforts are focused on identifying approaches to standardize contract and grant data elements. These efforts are intended to improve the accuracy of spending data, link award data to payment data to help track awards throughout the life cycle, and advance efforts to detect and remediate fraud, waste, and abuse. While these efforts are largely in the early stages of development, progress has been made to establish more uniform award identifiers, and to test the feasibility of using data standards and a centralized data-collection portal to minimize the burden on federal fund recipients. The members of the FAR Council jointly issue and maintain a single government-wide procurement regulation, known as the FAR. 
The FAR Council’s membership consists of the Administrator of OFPP, the Secretary of Defense, the Administrator of the National Aeronautics and Space Administration, and the GSA Administrator. The Council manages, coordinates, controls, and monitors the maintenance of, issuance of, and changes in, the FAR (41 U.S.C. §§ 1301–1304). The FAR Council periodically publishes rules implementing changes to the FAR; a final rule is typically preceded by a proposed rule, published in the Federal Register and seeking public comments. A standard contract identifier for all contracting offices would enable a contract to be tracked across various systems and across its life cycle. OMB, in consultation with the GAT Board, has issued new guidance that requires all federal agencies to establish unique identification numbers for financial assistance awards. It also requires agencies to check the accuracy of spending information against an official record of agency accounts. For grant award data, however, the guidance only requires agencies to assign award numbers that are unique within their agency to grant transactions. Thus, the guidance does not provide the same level of uniformity as is required for contracts, nor does it provide uniformity across all contract and financial assistance spending. Agencies and even subunits of agencies use inconsistent award-numbering systems. These systems are created to conform to agencies’ own internal management systems to identify contracts, grants, and loans. In many agencies, there is no direct link or continuous use of one standard award identifier between systems and offices. The disparate award identification systems and naming conventions used by agencies today make the task of reporting and tracking spending data inefficient and burdensome. 
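The tracking problem created by agency-unique (rather than government-wide) award numbers can be sketched in a few lines of code. This is a hypothetical illustration only: the agency codes, award numbers, and qualified-key format below are invented and do not reflect any actual OMB or agency numbering scheme.

```python
# Illustrative sketch: an award number that is unique only *within* an agency
# becomes ambiguous the moment data from multiple agencies are merged, so it
# must be qualified with an agency identifier before government-wide tracking.

def global_award_key(agency_code, award_id):
    """Qualify an agency-unique award number with its agency code."""
    return f"{agency_code}:{award_id}"

def find_collisions(awards):
    """Return award numbers that repeat across agencies when the agency
    qualifier is dropped -- the ambiguity agency-unique numbering leaves."""
    seen = {}
    collisions = set()
    for agency, award_id in awards:
        if award_id in seen and seen[award_id] != agency:
            collisions.add(award_id)
        seen.setdefault(award_id, agency)
    return collisions

# Hypothetical records: two agencies independently issue "GR-2013-0001".
records = [("HHS", "GR-2013-0001"), ("DOT", "GR-2013-0001"), ("HHS", "GR-2013-0002")]
print(find_collisions(records))                 # {'GR-2013-0001'}
print(global_award_key("HHS", "GR-2013-0001"))  # HHS:GR-2013-0001
```

A uniform government-wide identifier format would make the qualification step unnecessary, which is the level of consistency the contract guidance requires but the grant guidance does not.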
Recovery Board officials raised some concerns, noting that a lack of uniform standards for identifying grants would make it difficult to pre-populate recipient reports with information from the awarding agency and to reconcile obligation data with award data. While this lack of uniformity may not optimize the use of the data, OMB has noted that, in combination with other information provided, the identifier will uniquely identify a given grant. Further, in oral comments on our draft report, an OMB staff member told us that standardizing an identifier format could cause problems for agency systems because some agencies structure their award identifiers to track particular characteristics of grants for their internal use. OMB has therefore issued this guidance and will evaluate the resulting improvements in light of the additional resources needed to implement them. The June 12, 2013, memorandum, “Improving Data Quality for USAspending.gov,” requires all federal agencies to (1) assign financial assistance award identification numbers unique within the federal agency; and (2) identify and implement a process to compare and validate USAspending.gov funding information with data in the agency’s financial system. As part of its work with the Grants Data Standardization and Integrity Working Group, HHS recently completed a preliminary analysis to determine the degree to which grants data elements are standardized across the federal government. According to the chair of the GAT Board, it is currently more challenging to standardize grants data elements because, unlike FAR’s uniform procurement regulations, there is no single set of grant regulations in use across the federal government. The Recovery Accountability and Transparency Board recently concluded a proof-of-concept project that tests the feasibility of using FederalReporting.gov to collect data on non-Recovery Act grant expenditures. 
In a pilot involving nine grant recipients and two federal agencies, the Grant Reporting Information Project (GRIP) captured data elements from OMB’s standardized grant expenditure reporting form, Standard Form 425, as well as subrecipient and vendor expense data. GRIP also tested whether such a system could lessen reporting burden and improve the accuracy of the data submitted by fund recipients. In addition, the pilot tested whether a universal award identification number could be used to track grant expenditures throughout the grant life cycle. The Recovery Board’s analysis of the GRIP project found that feedback from the pilot participants supported using FederalReporting.gov for grant reporting. The analysis also validated the effectiveness of using a universal award identifier. In addition, the board’s analysis found that, while such features as machine-readable formats and pre-populated data fields helped the reporting experience, due to the pilot’s short duration, GRIP did not fully demonstrate that it could reduce the burden on recipients. Similarly, the Federal Demonstration Partnership, whose member universities participated in the GRIP pilot, issued a report finding that, while using a standard schema increases reporting efficiency, and pre-populating data can enhance reporting and verify accuracy, at least initially, the pilot did not reduce the burden on recipients. Information about federal government spending is collected in a complex web of systems and processes that are both overlapping and fragmented. While having standardized data and award identifiers is an important first step to effectively track spending, federal entities also have begun to examine ways to consolidate and streamline data systems that are overlapping or duplicative. The Financial Management Integration and Data Display Working Group is developing recommendations for a work plan that will leverage Treasury’s ongoing transparency and system modernization efforts. 
First, building on Treasury’s initiative to standardize payment transaction processes, the Payment Application Modernization project will consolidate more than 30 agency payment systems into a single application. This application will process agency payment requests using Treasury’s Standard Payment Request format. All federal agencies that use Treasury disbursing services (Treasury disbursing organizations) will be directed to submit payment data into the newly developed standard format by October 1, 2014. A Treasury official said that federal agencies representing about 142 of 437 agency location codes had either converted to the new format, were testing the new format, or had set a schedule for when they would implement the new payment request format. Despite this progress, the official expressed doubt about the ability of some agencies that do not use Treasury for disbursing payments to comply with the data standards by the deadline. The official did note that Treasury continues to provide assistance to these agencies. Second, the Financial Management Integration and Data Display Working Group is also building on Treasury’s initiative to develop a centralized repository containing detailed and summarized records of payment transactions from all federal agencies. The Payment Information Repository will contain data on all payments disbursed by Treasury plus those reported by the federal agencies that disburse their own payments. This repository will contain descriptive data on those payments for which matching with other data sources (e.g., accounting data, grant data, commercial vendor data, and geographic data) will provide additional information regarding the purpose, program, location, and commercial recipient of the payment. A number of government oversight and law enforcement agencies are using data analytics—which involve a variety of techniques to analyze and interpret data to facilitate decision making—to help identify and reduce fraud, waste, and abuse. 
Data mining applications are emerging as essential tools to inform management decisions, develop government-wide best practices and common solutions, and effectively detect and combat fraud in large programs and investments. For example, predictive analytic technologies can identify fraud and errors before payments are made, while other techniques, such as data mining and data matching, can identify fraud or improper payments that have already been made, helping agencies recover these dollars. According to GAT Board officials, making more data available, and doing so in real time, will also help agencies make better informed decisions about how they manage federal funds. The Recovery Board’s ROC, established in 2009, uses data analytics to monitor Recovery Act spending to detect and prevent the fraudulent use of funds made available under the act. As part of this effort, ROC analysts use a set of tools that can search large amounts of data from multiple sources, looking for patterns and anomalies that could indicate the existence of fraud. The Board has provided several inspectors general with access to these tools through www.FederalAccountability.gov, which allows them to review and evaluate entities, such as individuals, companies, and universities, that have received Recovery Act funds. In some cases, ROC staff were able to notify agencies that they had awarded Recovery funds to companies that were debarred and therefore should not have received federal funds. In other cases, ROC analysts found hidden assets that resulted in a court ordering the payment of a fine, and identified several individuals employed by other entities while receiving worker’s compensation benefits. 
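At its simplest, the data matching described above is a join between award records and an exclusion list. The sketch below is purely illustrative: the field names, company names, and exact-match logic are invented assumptions, and real analyses such as the ROC’s draw on many more data sources and use far more sophisticated (including fuzzy) matching.

```python
# Illustrative sketch of matching award recipients against a debarment
# (exclusion) list. All names and fields here are hypothetical.

def normalize(name):
    """Crude normalization so that 'ACME Corp.' and 'acme corp' compare equal."""
    return " ".join(name.lower().replace(".", "").replace(",", "").split())

def flag_debarred(awards, debarred_names):
    """Return the awards whose recipient appears on the exclusion list."""
    debarred = {normalize(n) for n in debarred_names}
    return [a for a in awards if normalize(a["recipient"]) in debarred]

# Hypothetical award records and exclusion list.
awards = [
    {"award_id": "C-001", "recipient": "ACME Corp."},
    {"award_id": "C-002", "recipient": "Beta Builders"},
]
debarred_list = ["acme corp"]

for award in flag_debarred(awards, debarred_list):
    # Flags C-001: an award that went to a debarred recipient.
    print(award["award_id"], "went to a debarred recipient")
```

The normalization step stands in for the data-standards problem discussed throughout this section: without consistent naming and identifiers, even this simple match becomes unreliable.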
The GAT Board’s Data Analytics Working Group has set a goal of expanding on the ROC’s work to develop a shared platform for improving fraud detection in federal spending programs. This approach relies on the development of data standards and would provide a set of analytic tools for fraud detection to be shared across the federal government. Although this work is just starting, working group members have identified several challenges to developing and implementing a shared platform and analytic tools for fraud detection. These challenges include reaching consensus among federal agencies on a set of common data attributes to be used, and changes needed to existing privacy laws to allow access to certain types of protected data and systems. In January 2013, we convened a forum on data analytics in conjunction with the Council of the Inspectors General on Integrity and Efficiency and the Recovery Board. Its purpose was to explore ways in which oversight and law enforcement agencies use data analytics to detect and prevent fraud, waste, and abuse, and to identify the most significant challenges to realizing the potential of data analytics. Forum participants identified a range of challenges, including technical and legal challenges currently experienced by oversight and law enforcement agencies. In particular, participants highlighted challenges to expanding data sharing within the federal government, including requirements of the Computer Matching and Privacy Protection Act of 1988, as amended, that hindered fraud detection efforts, and a lack of data standards and a universal award identifier that limit data sharing across the federal government and across federal, state, and local agencies. Participants also identified opportunities to enhance data-analytics efforts. These opportunities included consolidating data and analytics operations in one location to increase efficiencies by enabling the pooling of resources as well as the accessing and sharing of data. 
The GAT Board’s mandated responsibilities include working with the Recovery Board to build on lessons learned and applying approaches developed by the Recovery Board to new efforts to enhance the transparency of federal spending. As discussed above, the GAT Board, the Recovery Board, OMB, and other federal agencies have initiatives under way to improve federal spending transparency. These initiatives include efforts to standardize data and consolidate data systems to improve the accuracy of federal spending and expand oversight of these funds. In many cases these initiatives build on lessons learned from the operation of existing transparency systems, including Recovery.gov and USAspending.gov. However, as new transparency initiatives get under way, opportunities exist to give additional consideration to these lessons to help ensure new transparency programs and policies are implemented successfully. One of the key lessons learned from the implementation of the Recovery Act’s transparency provisions was the value of standardized data, including a uniform award identification number for contracts, grants, loans, and other forms of financial assistance. The transparency envisioned under the Recovery Act for tracking spending was unprecedented for the federal government, requiring the development of a system that could track billions of dollars disbursed to thousands of recipients. The system also needed to be operational quickly for a variety of programs, across which even the basic question of what constituted a program or project differed. While agencies had systems that captured such information as award amounts, funds disbursed, and, to varying degrees, progress being made by recipients, the lack of uniform federal data and reporting standards made it difficult to obtain these data from federal agencies. Instead, data were collected directly from recipients, which placed an additional burden on them to provide these data. 
As it developed procedures for reporting on the use of federal funds, OMB directed recipients of covered funds to use a series of standardized data elements. Further, rather than report to multiple government entities, each with its own disparate reporting requirements, all recipients of Recovery funds were required to report centrally into the Recovery Board’s inbound reporting website, FederalReporting.gov. According to the GAT Board’s 2011 report to the President, the Recovery Board’s method for collecting consistent recipient data on spending and posting it rapidly was effective and significantly increased the speed and quality of the spending data reported. The availability of standardized data also allowed the Recovery Board to use data analytics and predictive analysis to detect, prevent, and remediate the fraudulent use of Recovery Act funds. The Recovery Board reported that as a result, the board’s analysts were able to find multiple tax liens, regulatory violations, and suspicious financial activity for several companies under investigation by an inspector general. They also were able to notify a number of agencies that they had awarded Recovery funds to companies that were debarred and therefore should not have received federal funds. Initially, the ROC was deployed to detect and prevent fraud under the Recovery Act. In 2012, Congress provided the board the authority to test processes and technologies for monitoring federal spending. As a result, the board, while continuing to maintain its Recovery Act fraud-prevention efforts, has expanded its joint efforts with inspectors general and law enforcement agencies. As discussed above, the GAT Board had previously identified data standardization, including moving agencies toward a universal, standardized identification system for all federal awards, as a critical step for increased transparency.
However, the degree to which data will be standardized across the federal government is still the subject of some debate among Board members. The recent OMB guidance requiring a unique, but not uniform, grant identifier will result in a less standardized approach for grants than contracts. Further, citing agency budgetary constraints and the potential of emerging technologies for extracting non-standard data elements from disparate systems, GAT Board members are taking incremental steps toward increasing data standardization. For example, OMB asked HHS, the lead agency for the GAT Board’s Grants Data Standardization and Integrity Working Group, to analyze the existing level of standardization among grant-making agencies, and assess the feasibility and cost of increasing data standards. This analysis examined more than 1,110 individual data elements from more than 17 different sources. It found that there was widespread variation in terminology and associated definitions that impacted how spending was captured, tracked, and reported. In addition, through its work with the GAT Board, Treasury is assessing the potential for implementing new technologies that would allow non-standardized data to be accessed by tagging and linking it to source systems, rather than collecting and warehousing data in a separate system. A lack of uniform standards could also increase the burden on federal fund recipients. Federal fund recipients with whom we spoke told us that the lack of consistent data standards and commonality in how data elements are defined places undue burden on fund recipients because it can require them to report the same information multiple times via disparate reporting platforms. Fund recipients also told us that lack of consistent data standards can impact the accuracy of data reported.
For example, one higher education official noted that increasing data standardization and reporting consistency across the federal government would eliminate the need for “human intervention” or manual data entry, which can impact the accuracy and the timeliness of the data reported. Moreover, collecting data that already exists in agency award systems is also inefficient and burdensome to recipients. Federal fund recipients we spoke to expressed concern about the number of disparate agency and program-specific requirements that obligate them to report the same data multiple times or to report data that should have come from federal sources. These recipients offered a number of suggestions for minimizing reporting redundancy, including limiting data collected from recipients to a small number of essential elements that can only be obtained from recipients, pre-populating electronic reporting forms with data available from agency sources, and reusing data already reported once rather than requiring that recipients report the same data multiple times. Another key lesson learned from the implementation of Recovery Act reporting requirements was the importance of obtaining and considering the input of stakeholders—federal agencies, recipients, and subrecipients—early in the development of both the reporting system and its procedures. Given the daunting task of rapidly establishing a system to track billions of dollars in Recovery Act funding, OMB and the Recovery Board implemented an iterative process. This process allowed many stakeholders to provide insight into the challenges that could impede their ability to report Recovery Act expenditures. Throughout the development of guidance and in the early months of implementing recipient reporting provisions, OMB and the Recovery Board provided several opportunities for two-way communication with recipients.
For example, OMB and the Recovery Board held weekly conference calls with state and local representatives to hear their comments and suggestions, and address their concerns. As a result of these efforts, federal officials changed their plans and related guidance. For example, initial guidance in February 2009 began to lay out information that would be reported on Recovery.gov, and the steps needed to meet reporting requirements, such as including recipient reporting requirements in grant awards and contracts. In response to requests for more clarity, OMB, with input from an array of stakeholders, issued more guidance in June 2009. The guidance clarified requirements on reporting jobs, such as which recipients were required to report, and how to calculate jobs created and retained. During this current phase of developing transparency efforts, OMB and the GAT Board have implemented a structure to obtain input from a variety of federal stakeholders representing the procurement, grants, and financial management communities. However, mechanisms for obtaining input from non-federal stakeholders are limited to the public rule-making process. The GAT Board’s work groups consist of representatives from select federal agencies, OMB, and interagency forums. The work groups are designed to leverage the expertise of federal officials with in-depth knowledge of federal procurement, grant-making, and financial management operations. The Board does not have any formal mechanisms, other than the federal rule-making process, to obtain input from federal fund recipients. An OMB official told us that OMB is leveraging pre-existing personal contacts made during the Recovery Act to obtain feedback from state officials. Further, this official said that OMB had conducted extensive outreach with non-federal stakeholders in seeking their input on OMB’s grants reform proposal. 
These outreach efforts included discussions on standardizing financial information collected during the pre-award and post-award phases of the grant process. However, state officials we spoke with expressed interest in providing additional input into expanding reporting requirements through more formal mechanisms, such as focus or advisory groups. Without a systematic approach for receiving and processing recipients’ input, such as the conference calls held for the Recovery Act, issues that could affect recipients’ ability to meet new reporting requirements could go unaddressed, compromising the ability of recipients to provide accurate data. Non-federal stakeholders have been involved in the limited GRIP pilot project discussed above. Recovery Board officials sought feedback from the participating states, a locality, and institutions of higher education throughout the duration of the project through a series of webinars and teleconferences. An online help desk was also established to assist the recipients through the process. At the conclusion of the study, participants were surveyed. They expressed approval for several of the project’s aspects, including data standardization, the inclusion of an error-checking feature, and the use of a single central portal for reporting expenditures. Although a Recovery Board official told us that they gained valuable insight from stakeholders through the pilot, they reported that the more inclusive networked community of state and local officials that they established during the Recovery Act implementation had not been sustained. Federal fund recipients we spoke with underscored the importance of maintaining the connections they established with federal officials during the implementation of the Recovery Act. They also stressed the importance of having a formal mechanism to provide feedback to the federal government as guidance is crafted and before new transparency reporting requirements are established.
Federal fund recipients said that they need clear and understandable guidance to ensure that the data they report are accurate, on time, and minimally burdensome. Officials from organizations representing fund recipients as well as the fund recipients themselves told us that the interactions between OMB and fund recipients during Recovery Act implementation were extremely effective. They noted that the frequent communication with OMB staff members, who listened to their concerns, addressed questions, and made adjustments to guidance, made it easier for them to report accurate spending data.

Need for Clear Guidance: “Guidelines need to be clear and uncomplicated so that people can follow them without having to refer to multiple different sources. You will not get the same level of compliance and enthusiasm and you will produce some degree of frustration if people do not understand the requirements and have limited resources to work with.” — An official from an association representing federal fund recipients.

Under the Recovery Act, specific requirements and responsibilities for transparency were clearly laid out in the law, which helped to ensure that transparency requirements were implemented within tight time frames, and thereby provided unprecedented transparency. The Recovery Act specified the timing of reporting, including its frequency and deadlines, and the items that needed to be included in the reporting. The Recovery Board reported that the concrete deadlines imposed by the Recovery Act motivated OMB and the Recovery Board to take action. The Recovery Act required the Recovery Board to conduct and coordinate oversight of the funds provided under the Recovery Act to prevent waste, fraud, and abuse, which the Recovery Board accomplished by acting together with OMB at the federal level to implement the transparency requirements.
To implement the recipient reporting requirements, OMB worked with the Recovery Board to deploy a data-collection system at FederalReporting.gov and a public-facing website at Recovery.gov. Further, OMB provided centralized guidance that defined the reporting requirements and the agencies’ role in ensuring the quality of data recipients provided. An official from one association representing recipients commented that having information come from one centralized agency, such as OMB, helped assure recipients that their questions were addressed correctly. The Recovery Act also provided funding for the Recovery Board, which was used to provide staff and resources for developing and operating its data collection system, website, and data analytic activities. In contrast, authority for implementing the current transparency initiatives is not as clearly defined. Authority for expanding transparency is centered in an executive order rather than legislation. As we have previously reported, given the importance of leadership to any collaborative effort, transitions and inconsistent leadership, which can occur as administrations change, can weaken the effectiveness of any collaborative efforts, and result in a lack of continuity. According to the chairman of the GAT Board, the Board’s vision for comprehensive transparency reform will take several years to implement. Therefore, continuity of leadership becomes particularly important. Going forward, changes in the administration and GAT Board membership could hamper the success of future reform efforts if requirements and authorities for implementing reforms are not clearly defined in statute. Moreover, the executive order that establishes the GAT Board provides it with a role of setting strategic direction, but not for implementation.
As we have previously reported, interagency collaboration on a project, such as expanding transparency, is facilitated when one agency is designated to be accountable, and there are clear roles and responsibilities. This centralizes accountability and can speed decision making in an organization. While there are many officials working together—the GAT Board, work groups led by agency officials, interagency forums such as the Chief Acquisition Officers Council, Council on Financial Assistance Reform, and Council of the Inspectors General on Integrity and Efficiency, and OMB—it is not clear where responsibility for implementing the initiatives lies. In oral comments provided on our draft report, OMB staff said that the administration, through its fiscal year 2014 budget proposal, has taken steps to delineate authority by seeking $5.5 million for Treasury to operate and improve the USAspending.gov website. They believe that this proposal will establish Treasury as the single implementing entity for operationalizing transparency reforms. The lack of clearly delineated authority for implementing initiatives could result in multiple projects working at cross-purposes, overlapping, or missing opportunities to improve transparency consistently. The following are examples that we gleaned from our interviews and focus groups: The GAT Board’s Data Analytics Working Group has been examining approaches to expanding the availability of data to help the oversight community detect and prevent fraud, waste, and abuse in federal programs. Similarly, the Recovery Board’s ROC has developed a set of assessment tools that can search large amounts of data from multiple sources to detect fraudulent spending in federal programs.
Although the GAT Board’s mandate includes a requirement for it to work with the Recovery Board to apply the approaches developed by the latter across the federal government, the extent to which the work of the ROC is being incorporated into the GAT Board’s effort to develop approaches for using data analytics to improve oversight is unclear. The GAT Board and the Recovery Board have similar projects under way to standardize grants data elements and procedures. The GAT Board’s Grants Data Standardization and Integrity Working Group is identifying approaches to standardize key data elements to improve the accuracy of grants award data. The Recovery Board’s GRIP pilot examined whether a uniform award identification number could be used to track grants expenditures throughout the grant life cycle. However, it is unclear whether the study’s results will be incorporated into the work of the Grants Data Standardization and Integrity Working Group, which could lead to inefficiencies caused by duplicated efforts. Moreover, without operational authority, the GAT Board must leverage the authorities and networks of its individual members. Thus, the successful implementation of transparency initiatives depends on the willingness and capacity of individual members’ agencies to drive this change, and may not be sustainable. For example, the GAT Board chairman used his position on the FAR Council to have it consider and vote on a proposed rule that will require all federal agencies to use a uniform procurement identification number for all of their solicitations and awards. The GAT Board chairman drove this initiative in his capacity as Director of the Defense Procurement and Acquisition Policy Office at DOD. However, the extent to which this governance structure will be effective or sustainable over time is limited to and dependent upon those existing connections.
Unlike under the Recovery Act, these transparency initiatives are being funded through existing agency resources using agency personnel, as separate funding is unavailable. Because the GAT Board lacks funding of its own, it relies on agencies to develop approaches to improve data transparency. Agency officials we spoke with said they expect that automation and standardization mechanisms embedded in transparency initiatives now under way could help federal agencies to more efficiently and effectively manage their activities and programs. Efficiencies and economies generated by these initiatives might have the potential to save money, and thereby lessen the need for appropriations or other forms of dedicated funding. In the short term, the GAT Board believes it can continue to make incremental changes by leveraging ongoing agency initiatives. Ensuring the transparency of more than $3.7 trillion in federal spending annually, including more than $1 trillion awarded through contracts, grants, and loans, and an additional $1 trillion in forgone revenue from tax expenditures is an important national goal. Efforts to improve transparency of spending data continue to involve multiple federal entities, under the strategic leadership of the GAT Board and OMB. Meanwhile, the Recovery Board continues to play a role in evaluating new approaches to collecting data and maintaining systems to use the data for ensuring accountability. However, roles and responsibilities for the effort are not clearly delineated. For example, under the Recovery Act, authority to mandate requirements was centered in OMB and the Recovery Board, and was clearly delineated and funded, whereas under the current transparency initiatives, the leadership for implementing actions is spread across several agencies, knitted together loosely under the GAT Board’s strategic direction.
Given the importance of clear requirements and consistent leadership for ensuring recommended approaches are institutionalized across the federal government and progress is sustained over the long term, the present governance structure for transparency efforts could hamper planned advancements. Having clear requirements and implementation authority, particularly through legislation, will help ensure effective and sustained implementation of transparency efforts across the federal government. While the transparency initiatives under way represent a promising start, insights could be gleaned from lessons learned from the operation of Recovery.gov. Such insights would ensure that new approaches are implemented consistently across the federal government, and progress toward strategic goals can be sustained long term. As OMB and the GAT Board take incremental steps to improve data transparency and expand oversight of federal spending, it will be important to develop a long-term vision and concrete plan for improving transparency, and ensure its implementation. For example, a key lesson learned from the implementation of the Recovery Act was the importance of data standards, including a universal award identifier that enabled the tracking of Recovery Act spending; a uniform numbering system for identifying federal awards would similarly improve the tracking of all federal spending. Moreover, by listening to stakeholders during Recovery Act implementation, OMB and the Recovery Board heard concerns and made changes to plans and guidance accordingly. As new transparency initiatives are developed, the input of all stakeholders, including nonfederal entities such as state and local governments, would help OMB and the GAT Board to identify approaches that minimize the burden on those doing the reporting, and address reporting challenges.
Although non-federal stakeholders are a broad and diverse group, as future changes are considered, it will be important to identify mechanisms to involve stakeholders as early as possible. We recommend that the Director of OMB, in collaboration with the members of the GAT Board, take the following two actions: Develop a plan to implement comprehensive transparency reform, including a long-term timeline and requirements for data standards, such as establishing a uniform award identification system across the federal government. Increase efforts for obtaining input from stakeholders, including entities receiving federal funds, to address reporting challenges, and strike an appropriate balance that ensures the accuracy of the data without unduly increasing the burden on those doing the reporting. To ensure effective decision making and the efficient use of resources dedicated to enhancing the transparency of federal spending data, Congress should consider legislating transparency requirements and establishing clear lines of authority to ensure that recommended approaches for improving spending data transparency are implemented across the federal government. We provided a draft of this report to the Director of OMB, the GAT Board and the Recovery Board Chairs, the Administrator of GSA, and the Secretaries of HHS, DOD, and Treasury for review and comment. OMB provided oral comments, which are summarized below. HHS provided general comments that are also discussed below. In addition, the GAT Board and DOD concurred with our recommendations and provided technical comments, as well as additional clarifying information related to the recommendations. Treasury generally agreed with our findings and provided technical comments, while the Recovery Board and GSA provided technical comments only. In its oral comments, OMB staff indicated that they generally concurred with our findings and recommendations.
Regarding our recommendation for developing a long-term plan for implementing data standards, OMB staff agreed that the GAT Board’s plan provides an initial strategy and added that multiple initiatives are under way. One of these initiatives is the administration’s fiscal year 2014 budget proposal that would operationalize comprehensive transparency through the transfer of USAspending.gov from GSA to Treasury. We have provided additional information on this in the report. For our recommendation on increasing efforts to obtain stakeholder input as transparency initiatives are developed, OMB staff agreed that increasing efforts to obtain stakeholder input was important and pointed to their outreach to date, particularly in seeking stakeholder comments on the grants reform process, which they said included discussions on standardizing financial information collected from recipients during the pre-award and post-award phases of the grant process. We have also provided additional information on this in the report. We continue to believe, however, that as specific proposals for transparency initiatives are being developed, additional mechanisms need to be in place to provide two-way communication to ensure that reporting challenges are addressed without unduly increasing reporting burden. OMB generally agreed with our matter for congressional consideration on legislating transparency requirements, but noted that the Congress has provided a robust statutory framework through legislation, such as FFATA and the GPRA Modernization Act of 2010, and therefore additional legislation is unnecessary. However, as we have previously concluded, given the importance of clear requirements and consistent leadership for ensuring approaches are institutionalized and sustained over the long term, legislation will help ensure effective implementation of comprehensive transparency reform.
The comments submitted by the HHS Assistant Secretary for Legislation stressed the need to recognize the impact of data standardization on agency operations and resources, and noted that the overarching federal vision for transparency must articulate both the long-term goals and the more operational and pragmatic steps to be taken to achieve them. HHS also underscored the importance of coming to an agreement on the range of federal spending information that is needed to achieve transparency and on which data elements are mandatory for reporting and for information collection requirements. We are sending copies of this report to the appropriate congressional committees and the Recovery Board and GAT Board Chairs, the Director of OMB, the Secretaries of HHS, DOD, and Treasury, and the Administrator of GSA. If you or your staff have any questions concerning this report, please contact me at (202) 512-6806 or czerwinskis@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix IV. We were asked to examine efforts to increase the transparency of federal spending data, and identify lessons from the experience of operating existing data systems that could help increase federal spending data transparency. To accomplish this, we (1) reviewed federal initiatives to improve the accuracy and availability of federal spending data; and (2) assessed the extent to which lessons identified by GAO and federal fund recipients from the operation of Recovery.gov and USAspending.gov are being addressed by these new transparency initiatives.
To address these objectives, we examined data collection and reporting requirements under FFATA and the Recovery Act; the June 2011 Executive Order that established the GAT Board, Executive Order 13576, “Delivering an Efficient, Effective, and Accountable Government”; relevant OMB guidance and memoranda, such as OMB M-10-06, Open Government Directive, and Improving Data Quality for USAspending.gov; and reports outlining action plans and recommendations created by the GAT Board, the Recovery Board, and other federal entities charged with developing approaches to improve federal data transparency. We interviewed officials at OMB, the GAT Board, and the Recovery Board who are examining new data transparency approaches. We also interviewed officials at three agencies who represent the perspectives of the federal procurement, grant, and financial management communities, and who are working with the GAT Board to build on transparency initiatives under way within their agencies that could be applied across the federal government: DOD, HHS, and Treasury, respectively. We also interviewed officials at GSA, the agency that manages USAspending.gov, to gain their perspectives on the challenges associated with ensuring the quality of the data submitted to this site. To capture the perspective of federal fund recipients, we conducted a series of interviews with officials from organizations representing federal fund recipients and government reform organizations. We wanted to get their perspectives on lessons learned from the operation of existing transparency systems, and federal efforts under way to improve data transparency. We selected these associations because our preliminary research indicated that they had either been involved in Recovery Act implementation, had published reports related to the Recovery Act, had expressed official positions on existing transparency systems, or had submitted official statements on pending legislation designed to improve transparency.
To capture a wide range of recipient perspectives, we also selected associations that represented a variety of recipient types, from state and local governments to nonprofit organizations and contractors. For this review, we interviewed and collected comments from officials at the following organizations:

National Association of State Auditors, Comptrollers, and Treasurers
National Association of State Budget Officers
National Association of Counties
Federal Demonstration Partnership
National Council of Nonprofits
National Association of State Chief Information Officers
Council on Government Relations
Professional Services Council
Center for Effective Government
Project on Government Oversight
Sunlight Foundation

We also conducted seven focus groups representing a range of federal fund recipients. Focus groups included: (1) state comptrollers and budget officials; (2) state education and transportation officials; (3) local government officials from both large and small municipalities; and (4) nonprofit organizations, research universities, and representatives from businesses who contract with the federal government. Each focus group had between four and eight participants who were recruited from randomized member lists provided by the recipient associations we interviewed. We audio-recorded the focus groups, transcribed the recordings, and analyzed the findings with qualitative analysis software for common themes and patterns. Finally, we reviewed our previous work on the reporting successes and challenges experienced by both agencies and federal fund recipients. This process was designed to identify lessons learned from those experiences that should be considered as new approaches to data transparency are developed. We conducted this performance audit from November 2012 to September 2013 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Government Accountability and Transparency Board (GAT Board) is composed of the following 11 members designated by the President from among agency Inspectors General, agency Chief Financial Officers or Deputy Secretaries, and senior officials from OMB. The President designates a Chairman from among the members.

Director, Defense Procurement and Acquisition Policy, U.S. Department of Defense
Assistant Secretary, Department of the Treasury
Deputy Secretary, U.S. Department of Veterans Affairs
Assistant Secretary for Financial Resources and Chief Financial Officer, U.S. Department of Health and Human Services
Inspector General, U.S. Postal Service
Inspector General, U.S. Department of Energy
Inspector General, National Science Foundation
Inspector General, U.S. Department of Health and Human Services
Deputy Controller, Office of Management and Budget
Inspector General, U.S. Department of Transportation
Inspector General, U.S. Department of Education

The GAT Board established four working groups, as shown in table 1. In addition to the contact named above, Carol L. Patey, Assistant Director, and Kathleen M. Drennan, Ph.D., Analyst-in-Charge, supervised the development of this report. Gerard S. Burke and Patricia Norris made significant contributions to all aspects of this report. Cynthia M. Saunders, Ph.D., assisted with the design and methodology, Andrew J. Stephens provided legal counsel, Robert Robinson developed the report’s graphics, and Jessica Nierenberg, Judith Kordahl, and Keith O’Brien verified the information in this report. Other important contributors included James R. Sweetman, Jr., Michael S. LaForge, William T. Woods, and Tatiana Winger.
The federal government spends more than $3.7 trillion annually, with more than $1 trillion awarded through contracts, grants, and loans. Improving transparency of spending is essential to improve accountability. Recent federal laws have required increased public information on federal awards and spending. GAO was asked to review current efforts to improve transparency. This report examines (1) transparency efforts under way and (2) the extent to which new initiatives address lessons learned from the Recovery Act. GAO reviewed relevant legislation, executive orders, OMB circulars and guidance, and previous GAO work, including work on Recovery Act reporting. GAO also interviewed officials from OMB, the GAT Board, and other federal entities; government reform advocates; associations representing fund recipients; and a variety of contract and grant recipients. Several federal entities, including the Government Accountability and Transparency Board (GAT Board), the Recovery Accountability and Transparency Board (Recovery Board), and the Office of Management and Budget (OMB), have initiatives under way to improve the accuracy and availability of federal spending data. The GAT Board, through its working groups, developed approaches to standardize key data elements to improve data integrity; link financial management systems with award systems to reconcile spending data with obligations; and leverage existing data to help improve oversight. With no dedicated funding, GAT Board plans are incremental and leverage ongoing agency initiatives designed to improve existing business processes as well as improve data transparency. These initiatives are in an early stage, and some progress has been made to bring greater consistency to award identifiers. The GAT Board's mandate is to provide strategic direction, not to implement changes. 
Further, while these early plans are being developed with input from a range of federal stakeholders, the GAT Board and OMB have not developed mechanisms for obtaining input from non-federal fund recipients. Lessons from implementing the transparency objectives of the Recovery Act could help inform these new initiatives:

Standardize data to integrate systems and enhance accountability. Similar to the GAT Board's current focus on standardization, the Recovery Board recognized that standardized data would be more usable by the public and the Recovery Board for identifying potential misuse of federal funds. However, reporting requirements under the Recovery Act had to be met quickly. Because agencies did not collect spending data in a consistent manner, the most expedient approach was to collect data from fund recipients, even though similar data already existed in agency systems. Given the longer timeframes to develop current transparency initiatives, OMB and the GAT Board are working toward greater data consistency by focusing on data standards. Their plans, however, do not include long-term steps, such as working toward uniform award identifiers that would improve award tracking with less burden on recipients.

Obtain stakeholder involvement as reporting requirements are developed. During the Recovery Act, federal officials listened to the concerns of recipients and made changes to guidance in response, which helped ensure recipients could meet those requirements. Without similar outreach under current initiatives, reporting challenges may not be addressed, potentially impairing the data's accuracy and completeness and increasing the burden on those reporting.

Delineate clear requirements and lines of authority for implementing transparency initiatives. Unlike the present efforts to expand spending transparency, the Recovery Act provided OMB and the Recovery Board with clear authority and mandated reporting requirements.
Given this clarity, transparency provisions were carried out successfully and on time. Going forward, without clear, legislated authority and requirements, the ability to sustain progress and institutionalize transparency initiatives may be jeopardized as priorities shift over time. GAO recommends that the Director of OMB, with the GAT Board, develop a long-term plan to implement comprehensive transparency reform, and increase efforts for obtaining stakeholder input to ensure reporting challenges are addressed. Further, Congress should consider legislating transparency requirements and establishing clear authority to implement these requirements to ensure that recommended approaches for improving transparency are carried out across the federal government. The GAT Board, OMB, and other cognizant agencies generally concurred with GAO's recommendations and provided further information, which was incorporated into the report as appropriate.
To be eligible for DI or SSI benefits, an individual generally must have a medically determined physical or mental impairment that (1) has lasted or is expected to last at least 1 year or result in death and (2) prevents the individual from engaging in substantial gainful activity (SGA). Once an individual is receiving benefits, continuing disability reviews (CDR) are periodically conducted by SSA to evaluate if the individual has medically improved to the point of being able to work and is no longer eligible for benefits. Although the DI and SSI programs use the same definition of disability for eligibility purposes, they were designed to serve different populations. DI provides benefits to workers with disabilities who have a qualifying work history; in contrast, SSI provides cash support for people with low income, few resources, and little or no workforce attachment. The DI and SSI programs also differ in how work earnings affect benefits. DI beneficiaries are allowed a 9-month trial work period during which their benefits continue regardless of how much they earn. Upon completion of the 9-month trial work period, DI beneficiaries move into a 36-month re-entitlement period (extended period of eligibility) in which their monthly cash benefit ceases except in months in which earnings are less than SGA. Recipients whose earnings are above SGA after they complete the 36-month period should, under program rules, stop receiving benefits and be removed from the disability rolls. In contrast, SSI benefits are reduced by $1 for every $2 of earned income exceeding $65 per month until benefits reach zero. If SSI beneficiaries receive no benefits for 12 consecutive months due to earned income, they are removed from the disability rolls. 
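The SSI earned-income offset described above is simple arithmetic, and a short sketch can make it concrete. The $1-for-$2 reduction above a $65 monthly exclusion comes from the report; the function name and the base benefit amount are illustrative assumptions, and other SSI income rules are not modeled:

```python
def ssi_monthly_benefit(base_benefit, earned_income, exclusion=65):
    """Apply the SSI earned-income offset described in the report:
    benefits are reduced by $1 for every $2 of earned income above the
    $65 monthly exclusion, until the benefit reaches zero. base_benefit
    is a hypothetical full monthly payment; other SSI income exclusions
    and rules are not modeled here."""
    reduction = max(0, earned_income - exclusion) / 2
    return max(0, base_benefit - reduction)

# A recipient with a hypothetical $674 base benefit who earns $865 in a
# month: reduction = (865 - 65) / 2 = 400, leaving a $274 benefit.
print(ssi_monthly_benefit(674, 865))  # 274.0
```

Under this rule, a recipient is removed from the rolls only after earnings drive the benefit to zero for 12 consecutive months, as the report notes.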
Congress established the Ticket to Work program in 1999 to assist DI and SSI beneficiaries in obtaining and retaining employment, and potentially bring about significant savings to the Disability Insurance Trust Fund by reducing or eliminating their benefits. This voluntary program was also designed to provide beneficiaries with greater choice in public and private providers of employment services, such as job preparation and placement and vocational rehabilitation services. Prior to the establishment of the Ticket program, DI and SSI beneficiaries who needed help returning to work generally had to seek services from state vocational rehabilitation agencies (VRs). When an individual becomes eligible for DI or SSI benefits, SSA mails a ticket designating the beneficiary as a ticket holder (see app. II for a picture of a ticket). Generally, DI and SSI beneficiaries from 18 to 64 years old are eligible ticket holders. They may choose whether or not to use their tickets, and with which service providers. Likewise, SSA-approved employment networks (ENs), which are contracted by SSA for 5 years with the option to extend, can decide whether or not to serve an individual ticket holder. ENs can advertise their services in the program’s online directory used by ticket holders to find ENs in their area. Ticket holders who assign their tickets and demonstrate “timely progress” toward self-supporting employment, such as by fulfilling minimum earnings or education requirements, are exempted from medical CDRs. This provision provides an incentive for individuals to assign their tickets who otherwise might not attempt to work out of fear that a medical CDR would cause them to lose benefit eligibility. The ticket holder’s ticket becomes “assigned” once the ticket holder and EN decide to work together and submit an individual work plan describing the services the EN will provide. A ticket holder can unassign the ticket from the EN at any time, sometimes switching to a different EN.
When the ticket holder has sufficient earnings, the EN becomes eligible for payments from SSA (see fig. 1). The EN can choose from two payment options: (1) milestone-outcome payments that begin when the ticket holder has a specified level of earnings and continue for a specified time after the ticket holder no longer receives benefits due to earnings, or (2) outcome-only payments that do not begin until the ticket holder is entirely off benefits. The Ticket law gives SSA authority to help ensure the quality of participating ENs, and requires ENs to meet and maintain compliance with general selection criteria (such as professional and educational qualifications) and specific selection criteria (such as substantial expertise and experience in providing employment services and supports). The law also requires SSA to perform periodic quality assurance reviews of EN service provision, and to develop performance measures for evaluating ENs. ENs are required to annually report on outcomes achieved in providing specific services. The law also requires SSA to terminate EN contracts for inadequate performance. Additionally, the law requires SSA to provide for independent evaluations to assess the Ticket program’s effectiveness, including cost-effectiveness, types of services provided to ticket holders who return to work and those who do not, and employment outcomes for ticket holders. SSA’s Office of Employment Support Programs is responsible for management and oversight of the Ticket program. The office contracts with a private company (Ticket program manager) for day-to-day operations, including front-line communication with ENs, such as technical assistance and training, and processing ticket assignments and EN payment requests. In addition, the program manager recruits ENs; however, SSA’s Office of Employment Support Programs retains responsibility for reviewing and approving applicants. 
The program manager is also responsible for performing timely progress reviews of ticket holders. SSA also contracts with another private company to facilitate beneficiary participation in the program. Finally, SSA contracts with a private research firm for ongoing evaluations of the program. Due to low participation rates by both ticket holders and ENs (in 2005, we reported that less than 1 percent of 9.5 million ticket holders had assigned their tickets to an EN or VR and that 386 of 1,164 contracted ENs were accepting tickets), SSA revised the Ticket program regulations in 2008 (see table 1). The changes lowered the ticket holder earnings threshold that triggers payments to ENs. Previously, ENs were not eligible for SSA payment until a ticket holder had earnings at the SGA level or above. Among other key changes, the revised regulations added a first phase of four $1,275 payments over a ticket holder’s first 9 months working at the trial work level, which, in many cases, equates to part-time work. The EN is also eligible for a second phase of smaller monthly payments when a ticket holder has earnings above the SGA level, and a third and final phase of payments (the outcome phase) once a ticket holder is earning above SGA and no longer receives disability benefits (see app. III for details of the payment structure under the revised regulations). Finally, an EN can now serve a ticket holder formerly served by a VR. The cost and viability of the Ticket program have been scrutinized by researchers and policymakers since the program’s inception. At that time, it was estimated that if an additional one-half of 1 percent of disability beneficiaries went back to work and ceased receiving benefits, the savings to the Social Security Trust Funds and Treasury would total $3.5 billion over their working lives. The Congressional Budget Office (CBO) also projected the Ticket program would lead to savings.
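The three payment phases described above can be sketched as a simple classifier. This is a hedged simplification: the $1,275 phase 1 milestone amount comes from the report, but the phase 2 and outcome amounts below are placeholders (the actual figures are in the regulations, not the report), and duration limits such as the four-payment cap on phase 1 milestones are not modeled:

```python
PHASE1_MILESTONE = 1275   # per the report's description of the 2008 rules
PHASE2_MONTHLY = 500      # placeholder, not the actual regulatory amount
OUTCOME_MONTHLY = 400     # placeholder, not the actual regulatory amount

def en_payment_phase(months_at_trial_level, earnings_above_sga, off_benefits):
    """Classify one month of a ticket holder's earnings into the payment
    phase it would generate for the EN under the milestone-outcome option,
    as described in the report. Caps on the number of payments in each
    phase are simplified away. Returns a (phase, amount) tuple."""
    if earnings_above_sga and off_benefits:
        return ("outcome", OUTCOME_MONTHLY)   # phase 3: off benefits entirely
    if earnings_above_sga:
        return ("phase 2", PHASE2_MONTHLY)    # above SGA, still on benefits
    if 1 <= months_at_trial_level <= 9:
        return ("phase 1 milestone", PHASE1_MILESTONE)  # trial work-level earnings
    return ("none", 0)
```

Under the outcome-only option, by contrast, only the final branch reached when the ticket holder is entirely off benefits would generate any payment.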
However, in 2008, SSA’s Office of the Inspector General (IG) found the percentage of beneficiaries who cease benefits as a result of employment had remained unchanged from before implementation and projected cost savings had not materialized. The IG also found the percentage of beneficiaries who had earnings after receiving services steadily decreased over time, and recommended that SSA evaluate the program’s continued viability. As part of its contract with SSA for program evaluations, Mathematica Policy Research, Inc. has preliminary findings indicating the Ticket program was not self-financing as of January 2010 and its impact on participants’ employment, earnings, or benefits was not large enough to offset the program’s operating costs. In 2008, SSA’s Office of the Chief Actuary estimated short-term effects of the regulatory changes, projecting substantial up-front costs due to increases in the frequency and amount of payments to ENs and benefit payments to beneficiaries exempted from CDRs. The estimates noted that while these higher costs could be partially offset by later increases in successful work attempts, resulting in reduced or eliminated benefit payments, there would still be a net increase in costs. The number of eligible ticket holders assigning their tickets to ENs increased from about 22,000 in fiscal year 2007, prior to the 2008 changes in regulations, to more than 49,000 as of July 2010. Despite the increase in numbers, those assigning their tickets to ENs still only represented two-fifths of 1 percent of the approximately 12.1 million eligible ticket holders as of July 2010, compared to one-fifth of 1 percent in fiscal year 2007 before the regulatory changes. SSA’s outreach contractor told us that while they are beginning to place more emphasis on increasing ticket holder participation, their earlier recruitment efforts prioritized increasing the supply of ENs.
According to EN representatives, ticket holder participation remains low due, in part, to a lack of understanding and awareness of the program. Some disability rights advocates and EN representatives said a fear of losing benefits may also deter eligible ticket holders from participating in the program, especially DI beneficiaries who, after the 9-month trial work period, face an immediate cessation of benefits in a given month when earnings exceed SGA. Some disability rights advocates and EN representatives also said many ticket holders may not know how going back to work affects their benefits, making it difficult for them to agree to participate. Sixteen of the 25 EN representatives we interviewed also told us their ENs screen ticket holders, and 12 said at least half of the ticket holders they screen do not meet their criteria. For example, one EN representative told us that certain ticket holders are often screened out because they lack the education, work experience, and transportation needed to obtain employment. In addition, according to some disability rights advocates and EN representatives, some ticket holders may be discouraged from participating by previous negative experiences with ENs. For example, one EN representative said ticket holders who assigned their tickets to ENs that provide inadequate support may become frustrated and leave the program altogether. Although the number of ticket holders assigning their tickets has increased since the 2008 changes, whether the changes have affected the number of those returning to work and exiting the benefit rolls is unknown. The law requires SSA to conduct ongoing independent evaluations of ticket holders’ employment outcomes. Although SSA has tentative plans to study exits from the benefit rolls since the program regulations took effect in 2008, the decision to undertake this study depends upon the results of other planned research.
According to researchers, some additional time may be needed before a full assessment can be made. Preliminary research conducted for SSA by Mathematica estimated that approximately 10 percent of beneficiaries who assigned their tickets in 2006 will leave the rolls for at least 1 month; however, as researchers have noted, this does not equate to long-term exits from the rolls. Researchers have reported many beneficiaries return to work but do not earn enough to leave the rolls, due in part to functional limitations and subsequent declines in health. Whether or not ticket holders are able to leave the rolls has implications for the program’s cost-effectiveness and ultimately, its long-term viability. In preliminary research examining the program prior to the 2008 regulatory changes, Mathematica found more exits from the rolls would be needed to offset existing operational costs. Yet without data on the number of ticket holders actually exiting the rolls due to long-term employment, an accurate assessment of the program cannot be made. Although an increasing number of ENs are participating in the Ticket program since the 2008 changes in regulations, many ENs are not actively participating and ticket payments have remained concentrated with 20 ENs. The number of ENs contracted by SSA increased from 1,514 in fiscal year 2007 to 1,603 as of July 2010. During this time, ENs accepting at least one ticket also increased from 752 to 1,086. The majority of EN representatives we interviewed said the regulatory changes provided greater incentive for participation because ENs can now receive payments earlier and be paid for ticket holders with part-time earnings. Twenty-three of the 25 ENs we interviewed opted to receive payments under the milestone-outcome option, which does not require that ticket holders have sufficient earnings to leave the benefit rolls before ENs are eligible for payments. 
One EN representative said that because SSA payments for serving DI and SSI beneficiaries are now roughly equal, an EN has greater incentive to serve SSI beneficiaries, who were previously associated with lower payment amounts. Additionally, ENs receiving ticket payments from SSA more than doubled, from 206 in fiscal year 2007 to 460 as of July 2010, and total payments grew substantially, from $3.8 million in fiscal year 2007 to $13 million as of July 2010. According to SSA officials and program manager representatives, the program’s goal is not more ENs, but more ENs accepting tickets, serving ticket holders, and generating payments. To this end, SSA officials reported that SSA has sought to identify ENs who are still not accepting tickets to encourage them to participate or terminate their contracts. For example, the program manager identifies ENs not receiving payments within a certain amount of time and encourages them to participate. While the number of ENs accepting tickets has increased, a relative few receive the bulk of all ticket payments: 20 ENs accounted for the majority of all ticket payments in every fiscal year since the program was fully implemented in 2004, but represented a small percentage of ENs with tickets assigned (see fig. 2). In fiscal year 2009, 20 ENs representing 1.2 percent of all SSA-contracted ENs and 1.9 percent of those ENs accepting tickets received 71 percent of total ticket payments. Limited EN participation may be attributable to costs and other factors. Several EN representatives told us that financing the upfront costs of providing services can be challenging, even though SSA officials said the 2008 regulatory changes were intended to address the costs associated with providing initial services. Some EN representatives said that when ENs begin to receive outcome payments for clients they no longer intensively serve, it can help to cover the upfront costs of providing services to new clients.
SSA officials noted that a number of ENs have received outcome payments; however, a ticket holder must sustain employment at the SGA level to generate an ongoing stream of outcome payments for the EN. Some EN representatives also said providing resource-intensive services, including driving clients to work or providing career and personal counseling, could limit profitability. They also reported that ticket payments are insufficient to support such costly services, if they are an EN’s sole source of funding. Several EN representatives also told us an EN’s ability to generate ticket payments depends on effectively screening potential clients for motivation and employability. ENs receiving among the largest payment amounts from SSA provide a range of services, including assistance with job search and retention. But since the 2008 changes in regulations, an increasing number used service approaches targeting ticket holders who are already working or ready to work, and they accounted for a greater share of payments from SSA. However, SSA does not compile data on service provision trends and, therefore, cannot use data on evolving service approaches to inform its management and oversight of the program, or to tailor guidance to ENs. The ENs receiving among the largest payment amounts from SSA in fiscal years 2007 and 2009 (the time period just before and after the 2008 changes in regulations) provide a range of services to ticket holders, including job search and retention assistance and financial support (see app. IV). Disability rights advocates, EN representatives, and SSA officials we spoke with stressed the importance of a variety of available services because needs of ticket holders vary. For example, a ticket holder returning to the workforce after a short absence may need minimal job search assistance; another with a severe disability may need ongoing support at the workplace to perform job tasks. 
EN representatives said the most commonly provided services are developing ticket holders’ job-seeking skills, such as resume writing and interview preparation, and providing job-retention services, such as additional training and guidance on difficult work situations. Compared to the ENs we interviewed, the VRs included in our review reported providing a greater number of services and more costly on-the-job and medical-related supports, such as supported employment, and medical and therapeutic treatment. The VRs, which receive federal and state operating funds, more frequently reported providing funding for ticket holders’ education or vocational training, assistive technology, or personal attendant services. Some disability rights advocates and EN representatives told us the VR service approach may be a good fit for those needing intensive services or training, but not for ticket holders looking for quick job placement assistance or who need long-term job retention services. Under requirements specific to VRs, they may close cases after ticket holders are employed for 90 days. The 25 ENs we interviewed also varied in areas served and methods of delivery. Seven served local ticket holders, 12 served ticket holders in one or multiple states, and the other 6 served ticket holders nationwide. In general, ENs serving ticket holders locally or statewide primarily offered services in person, while those serving ticket holders in multiple states or nationwide primarily used the phone or Internet for services (see app. V). Some ENs offering services in person told us some ticket holders prefer face-to-face interaction, and the ENs also are better able to assess ticket holders’ needs and commitment to working. For example, one EN representative conducts 90-minute intake interviews with all potential clients, asking about their disabilities, interests, and needs, and describing how working will affect their benefits, and may meet with the ticket holder’s relatives.
On the other hand, some ENs offering services by phone or online said they can expand their geographical reach, serve more ticket holders, and expend fewer resources. Some disability advocates and EN representatives said some ticket holders, for example, those located in rural areas or whose mobility is limited by their disability, prefer to interact by phone or online. Although ENs continue to provide a range of services, we found an increasing number of ENs used service approaches targeting ticket holders who are already working or ready to work, and ENs using these approaches have accounted for a greater share of payments. The 2008 regulatory changes more explicitly allow ENs to pay ticket holders and we found increasing numbers of ENs sharing SSA ticket payments with ticket holders who have sufficient earnings to qualify the EN for payment. This “shared payment” approach allows the EN to readily claim ticket payments while providing no direct services because the ticket holder is already working or able to find a job without assistance. These service approaches accounted for an increasing proportion of total ticket payments made by SSA. For example, in fiscal year 2007, 1 of the 20 ENs among those with the largest payment amounts used this approach and received about $787,000 in SSA payments, or one-fifth of all payments to ENs. In fiscal year 2009, 3 of the 20 ENs among those with the largest payment amounts used this approach and received over $4 million, or nearly one-third of payments to all ENs. Two of the 3 ENs pass back 75 percent of SSA’s ticket payment to ticket holders, equating to about $950 per payment for some ticket holders, and retain 25 percent themselves; and the third EN offers ticket holders $500 every 3 months. SSA officials told us the decision to allow ENs to share payments with ticket holders was made in 2001, prior to the program’s implementation and by officials who have since left the agency. 
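The shared-payment arithmetic described above is easy to verify. The 75 percent pass-back rate comes from the report; the function name is illustrative. Applying the rate to a $1,275 phase 1 milestone payment yields $956.25, consistent with the report's "about $950 per payment" figure:

```python
def shared_payment_split(ssa_payment, pass_back_rate=0.75):
    """Split one SSA ticket payment between the ticket holder and the EN
    under the shared-payment approach the report describes (two ENs
    passed back 75 percent and kept 25 percent). Returns the amounts
    (to_ticket_holder, to_en) rounded to cents."""
    to_ticket_holder = round(ssa_payment * pass_back_rate, 2)
    to_en = round(ssa_payment - to_ticket_holder, 2)
    return to_ticket_holder, to_en

# Split of a $1,275 phase 1 milestone payment:
print(shared_payment_split(1275))  # (956.25, 318.75)
```

The third EN's flat offer of $500 every 3 months would not fit this proportional model; it is a fixed stipend rather than a percentage split.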
Yet in its 2008 changes, SSA for the first time provided regulatory language that clearly permits the use of shared payments. Some disability rights advocates and EN representatives said that since program rules do not allow ticket holders to serve as their own ENs, this approach allows them to receive a Ticket program payment for their efforts to find a job on their own. Some EN representatives also said the payment may help a ticket holder meet needed work-related expenses such as transportation, clothing, and child care, increasing the likelihood he or she will keep a job. However, the ENs sharing payments with ticket holders told us they do not restrict or verify how the money is used. Two of the ENs require ticket holders to sign a form affirming their intent to use the payments for work-related expenses and the third simply provides payments. While the data indicate a large number of ticket holders assigned to shared-payment ENs have earnings sufficient to generate SSA payments, this is expected given these ENs target ticket holders who are already working. Long-term outcomes of ticket holders receiving shared payments compared to those receiving support services are unknown, because SSA does not assess the relative outcomes of ticket holders based on services received. A senior SSA official acknowledged that the program must balance the demands it places on ENs to provide services with incentives for them to participate, and Congress’ intent was to provide ticket holders with a choice of services. However, the official also told us SSA officials have some concerns about the shared payment approach because the program was not intended to provide a wage subsidy, nor assist those who can find employment on their own, but to provide tangible employment-related services to those who can benefit from them most.
Along these lines, near the end of our review, the official said SSA is considering requiring ENs to provide a minimum level of services and to periodically assess ticket holders’ need for additional services. Some disability rights advocates and EN representatives raised concerns about sharing payments while providing only limited or no other services. This approach, they said, only works for ticket holders who can find employment on their own, and raises questions about the value these ENs add to the program. For example, one disability rights advocate said that it would be preferable for SSA to give the ticket holder the entire payment directly, rather than paying an EN a portion of the ticket payment to serve as a middleman. Additionally, the representatives told us ticket holders may need support after finding employment, such as counseling or help with a disability-related relapse, but choose an EN using the shared-payment approach because they are enticed by the financial incentive and do not anticipate future difficulties. In fact, at the time of our review, one EN’s Web site explicitly encouraged ticket holders who need help finding a job to contact their VR first, then return to the EN for shared payments only when employed (see fig. 3). Because ENs using this approach reported they tend to interact with ticket holders by phone or online, ticket holders may find it difficult to get answers to questions. During our review, we made phone calls to 6 ENs that offer shared payments and frequently reached a recorded message. We were able to speak directly with a representative for only 2 of the 6 ENs, and in one case, all extensions for the EN’s toll-free number were out of service.
Further, some disability rights advocates expressed concerns that ticket holders who decide they need additional support will have difficulty switching to another EN: Some of the ticket’s value has been used, and fewer potential payments may make the ticket holder a less desirable client for a prospective EN. Further, according to one EN representative, because these ENs do not provide a vocational assessment of strengths, weaknesses, and aptitude, ticket holders may end up in a job that is a poor fit, affecting their ability to retain it and, ultimately, to reduce their dependence on benefits. In addition to the shared payment approach, which targets ticket holders already working, two “employer-driven” service approaches, which target ticket holders who are ready to work, have also accounted for a greater share of SSA payments to ENs among those with the largest payment amounts: the direct employment approach, in which the EN itself employs ticket holders, and the staffing approach, in which the EN primarily works with employers to develop and identify jobs for ticket holders, similar to a staffing agency. While 4 of the 20 ENs among those with the largest payments in fiscal year 2007 used employer-driven approaches, 6 did so in fiscal year 2009. Payments to these ENs in fiscal year 2007 were about $226,000, or 6 percent of all SSA payments; in fiscal year 2009, payments increased to about $1.7 million, or 13.4 percent (see fig. 4). A representative of one EN using an employer-driven approach told us the EN plans to pay financial incentives to employers to hire ticket holders. One key program official told us SSA does not restrict how ticket payments are spent, and its handbook for ENs includes an example of an EN providing employers with financial incentives. Both approaches generally target ticket holders who are ready to work, facilitating earlier SSA payments to the EN.
For example, one EN looks for ticket holders with a high school education, computer skills, and relevant work experience, and screens out ticket holders with psychiatric or cognitive impairments. SSA officials told us they expect ENs to accept ticket assignments of ticket holders who are job ready, as well as individuals they believe they can assist in becoming job ready. They said those who are job ready may have the best chance of becoming financially independent and leaving the benefit rolls. Some disability rights advocates and EN representatives said the direct employment approach can provide on-the-job supports for ticket holders, and the staffing approach could increase the likelihood of a quick job match by responding to the needs of employers. However, they also raised some concerns about these approaches. For example, some disability rights advocates and EN representatives said there is the potential for disclosure of a ticket holder’s disability to an employer, and some may be uncomfortable having this private information shared for fear of being treated differently by supervisors or coworkers. Some EN representatives raised a concern that once payments from SSA to the EN, or from the EN to the employer, cease, ticket holders could lose their jobs because the financial incentive is gone. Some disability rights advocates and EN representatives also raised concerns that under the staffing approach, ENs may focus primarily on an employer’s needs and steer ticket holders into jobs that are not a good match, decreasing the likelihood of job retention.

[Figure legend: ENs we interviewed using the shared payment approach; ENs we interviewed using employer-driven approaches; ENs we interviewed that use other approaches; ENs we did not interview (approaches unknown)]

SSA officials said they do not compile data on trends in service provision, nor view it as SSA’s role. As a result, information on service provision is limited.
For example, although SSA compiles information on certain types of service providers, such as mental health providers, as part of its efforts to recruit specific providers, it does not obtain comprehensive information on services provided by all ENs. Moreover, while service providers applying to become ENs must indicate which services they intend to provide using a checklist in the request for proposal, and approved ENs must update this information on the annual periodic outcome report to SSA, the checklist does not reflect all services, such as providing financial assistance or incentives to ticket holders via shared payments. One key program official acknowledged that some ENs note they offer all services listed in the request for proposal (RFP) and annual periodic outcome report while rarely or never providing some of them. Further, it is unclear whether SSA uses the information it collects on service provision. For example, while SSA officials told us the agency first approved an EN with a shared payment approach because the EN pledged to offer job search assistance, personal attendant support, and other services, we found this EN does not provide such services and had not reported providing them in its last three annual periodic outcome reports to SSA. During the course of our review, SSA officials said they plan to begin restricting the services an EN can advertise in the online service directory to those the EN has agreed to provide ticket holders in individual work plans. This is intended to ensure the directory of ENs and services more accurately reflects actual services delivered. Without sufficient data on trends in service provision, SSA lacks information to inform its management and oversight of the program, or to tailor guidance to ENs. Internal control standards state that program managers need operational data to determine whether they are meeting their goals for accountability.
SSA has identified problems with certain service approaches on an ad hoc basis, and responded with changes in program requirements and procedures. In 2009, SSA provided further clarifications regarding its 2008 regulatory requirements for phase 1 milestone 1 payments (payments made by SSA to an EN when a ticket holder has 1 month of trial work-level earnings) and established a review process after an investigation following a beneficiary complaint found that some ENs employed ticket holders themselves just long enough to qualify for this payment, according to SSA officials. Although SSA was responsive and has since implemented additional oversight mechanisms, the problem was identified by a third-party complaint and not through systematic oversight on the agency's part. In addition, because SSA does not collect sufficient data on the extent to which ENs use shared payment, employer-driven, or other service approaches, we could not determine the approaches used by the ENs we did not review, even though these ENs received nearly $3.7 million, or nearly 30 percent of all payments from SSA, in fiscal year 2009. Finally, without sufficient information on service provision trends, SSA is unable to provide guidance or best practices to ENs. For instance, although some disability rights advocates and EN representatives raised concerns that employer-driven approaches may pose conflicts of interest if safeguards are not implemented, the EN contract does not include guidance to ENs on how to avoid such issues. Since 2005, SSA has not consistently monitored or enforced the timely progress of ticket holders who assign their tickets to ENs and VRs in order to assess whether they should continue to be exempt from medical continuing disability reviews (CDR)—a key tool for assessing continuing eligibility for benefits.
While timely progress by ticket holders is a regulatory requirement, SSA instituted a moratorium on enforcing progress review results—a responsibility of the Ticket program manager—because of concerns expressed by service providers that the work requirements for ticket holders were too stringent. SSA also considered changes that would have eliminated timely progress reviews. However, the final 2008 regulatory changes established more stringent timely progress standards, such as minimum requirements for ticket holders to meet within the first 2 years of ticket assignment, but added provisions allowing for education or job training in lieu of employment (see app. VI). SSA has acknowledged in the preamble to its program regulations and in a 2005 internal memo the importance of timely progress reviews for ensuring that ticket holders who have medically improved and no longer meet SSA's disability requirements do not receive benefits and its disability programs do not incur unwarranted costs. Further, without timely progress reviews, representatives of some of the ENs we interviewed said some ticket holders "park" their tickets to get the CDR exemption, for example, by assigning their ticket with no interest in obtaining EN services or reducing their dependence on benefits. Resuming timely progress reviews, they said, would be a positive motivator for ticket holders to engage in EN services essential to obtaining and retaining employment and, ultimately, reducing dependence on benefits. During the course of our review, in November 2010, representatives of the Ticket program manager reported they began limited resumption of the timely progress reviews. Representatives of the program manager reported that, between November 19 and December 15, 2010, they mailed out requests for information on timely progress (the first step in the review process) to roughly 4,900—or 26 percent—of the 19,000 ticket holders initially reported as due for review in November of that year.
After reviewing a draft of our report, SSA officials told us that by February 8, 2011, initial requests for information had been mailed to those ticket holders, almost 3 months after the mailings began. Given that SSA estimates between 13,000 and 22,000 ticket holders will be due for timely progress reviews each month of the first year of resumption, there is potential for a significant backlog in reviews to determine which ticket holders should continue to qualify for CDR exemption. To reduce the workload, SSA and the program manager reported taking steps to develop an automated earnings check to better identify ticket holders who met timely progress based on their earnings, and eliminate the need to contact them for a review. The agency also delayed resumption of timely progress reviews to ensure this automated earnings check was operational, according to one SSA official. However, as of December 15, 2010, program manager representatives reported that the check still was not operational and that timely progress reviews had resumed without it in place. Once in place, one SSA official anticipated these automated earnings checks would reduce the volume of mailings and follow-up action needed to complete timely progress reviews. However, representatives of the program manager said such checks would have little impact on the number of pending reviews: When operational, they estimated, the checks would likely identify only a few hundred ticket holders as meeting timely progress out of the 13,000 to 22,000 due for reviews each month. SSA officials said that significant experience with the earnings check will be needed to determine its ultimate impact on the workload. After reviewing a draft of our report, they said the primary reason for conducting the earnings check is to avoid placing unnecessary burden on ticket holders and ENs, and any reduction in workload would be an additional benefit.
In addition to delays in monitoring timely progress, there are questions about whether the program manager will have reliable information to make timely progress determinations. At the time of our review, SSA and program manager representatives told us they would rely on ticket holder and EN self-reported information. For example, the progress review form the program manager sends to ticket holders asks them to reply with a yes or no answer as to whether they met the earnings requirement or the education or training requirement, and asks for the name of the school and number of credits completed. SSA and program manager representatives told us they do not independently verify this self-reported information with employment records or educational documentation. In our past work, we have found that reliance on self-reported information alone can lead to program integrity issues, such as overpayments of SSA benefits. Absent some level of independent verification of the information ticket holders provide, it is unclear to what extent the results of the timely progress reviews are based on accurate information. SSA has not developed performance measures for contracted ENs to assess their success in helping assigned ticket holders obtain and retain employment and reduce dependence on disability benefits. The Ticket law directs SSA to develop performance measures for quality assurance in the provision of services by ENs, and gives SSA the authority to terminate EN contracts for inadequate performance. In addition, internal control standards for the federal government also stress the use of performance measures for proper stewardship of and accountability for government resources, and for achieving effective and efficient program results. SSA officials told us the historically low number of contracted ENs, and even fewer that accept tickets, made it difficult to hold ENs to performance standards.
Given the increase in the number of ENs since the 2008 changes, officials said they may consider factoring performance into EN contract extension reviews in the future. Near the conclusion of our audit work, they told us they are considering future updates to the program regulations that in their view will address EN performance expectations. However, without performance measures, SSA is currently unable to systematically evaluate EN performance, and ultimately determine whether ENs should be allowed to remain in the program. Lack of performance measures may mean ENs are unclear about program goals and send mixed messages to ticket holders about expected outcomes. Of the 25 ENs we interviewed, representatives of 15 said SSA had not adequately articulated performance expectations for serving ticket holders. SSA officials told us EN quality assurance is built into the Ticket program's payment system because ENs cannot get paid until a ticket holder meets minimum earnings thresholds. However, the 2008 regulatory changes lowered the earnings thresholds required for ENs to be eligible for ticket payments, making it possible for ENs to be paid without a ticket holder first achieving earnings at or above the SGA level. An EN with the fourth-largest payment amount from SSA in fiscal year 2009 stated in its last three annual periodic outcome reports that 100 percent of its ticket holders placed in jobs had earnings of less than $10,000 per year, equating to less than the SGA level if earnings were accrued regularly over the course of 12 months. In fact, the EN's phone message states that DI ticket holders can work part time indefinitely without reducing SSA benefits, and the Web site says most of its positions are designed so ticket holders stay below income thresholds for benefit cutoff.
Despite the fact that SSA’s EN handbook states the ultimate goal of the program is to reduce dependence and, whenever possible, eliminate reliance on benefits, we found multiple ENs among those with the largest payment amounts communicating through their Web sites, recorded phone messages, or in our discussions with representatives that as long as DI ticket holders’ earnings stay below the SGA level, they can keep full disability benefits (see fig. 5 for excerpts of calls and a link to audio excerpts. App. VII provides full transcripts of the calls). While full-time employment may be unattainable for certain ticket holders and one key program official told us that part-time employment is acceptable under the 2008 regulations, the official said it should be a starting point, not an end goal. Nonetheless, our review indicates some ticket holders are being coached by ENs, including some of those with the largest payment amounts, to work part time so as not to jeopardize their benefits. While SSA lacks performance measures to evaluate ENs, it does collect some self-reported EN performance information. To comply with the Ticket law, SSA requires ENs to submit annual periodic outcome reports, including information on ticket holder job placements, job retention, and disability benefits suspension and termination. SSA officials told us the original purpose of these reports was to evaluate EN performance and, as required by law, to make the reports available to beneficiaries. However, officials said because the information is self reported it is not used to evaluate ENs or shared with beneficiaries. Instead, officials said the outcome reports are primarily used by the Ticket program manager to update EN contact information, such as addresses and phone numbers. At the time of our review, SSA was developing a report card with performance information on each EN with 10 or more assigned tickets. 
The report card is based on selected information from the annual periodic outcome reports, as well as from a newly developed ticket holder customer satisfaction survey, and is currently being piloted in California. SSA officials said the primary purpose of the report cards will be to share performance information with ticket holders, as required by law, to help them make informed decisions when selecting an EN. SSA officials also said they were beginning to solicit feedback from ENs on how the report card might be used by the agency to evaluate EN performance, but were unable to provide us with documentation on plans to use the report card as a performance management tool. Further, because the report cards are designed to be used by ticket holders, it is not clear they will include the full extent of outcome-oriented performance information needed to evaluate ENs against the program purpose, particularly in deciding whether to extend an EN contract. For example, the report card does not have any indicators for an EN’s success in moving ticket holders off benefits. While it includes an indicator for ticket holders who retain a job for at least 6 months, it does not include earnings information, which is key to reducing and eventually ending SSA disability benefits. SSA’s process for approving ENs to serve ticket holders lacks systematic tools to ensure quality, such as requiring all applicants to submit a comprehensive business plan for how their services will help ticket holders obtain and retain employment and reduce dependency on benefits, and providing clear and specific written criteria to SSA staff who review qualifications of applicants. SSA’s RFP states an EN applicant must provide applicable certificates, licenses, or other credentials for delivering employment services, VR services, or other support services. 
An EN is only required to submit a qualifications statement and business plan that demonstrates expertise and/or experience at providing employment services if it does not submit specified documents (see table 2). SSA officials told us when the program was implemented almost all applicants were approved because the agency wanted to increase participation. As of June 2010, only 11 ENs had ever been denied an EN contract, 6 of those in fiscal year 2010. However, SSA officials told us that, in recent years, they have become more stringent in reviewing qualifications; and, in May 2009, modified the RFP to require more detailed information from applicants who submit a business plan. Near the conclusion of our review, the officials told us they were considering changes to the RFP requiring all ENs to submit a business plan that describes how the applicant’s services will help the ticket holder achieve sustained employment. The officials also said they were considering requiring ENs to demonstrate more specific experience serving individuals with disabilities. However, these changes were still pending at the time of our review. SSA has not consistently required ENs directly hiring ticket holders to submit a comprehensive business plan—a safeguard that could screen out ENs with insufficient qualifications or questionable business practices. In May 2009, as a result of questionable activities by some ENs which temporarily hired ticket holders primarily to obtain early ticket payments, SSA revised its RFP to require applicants intending to hire ticket holders directly to provide additional information on the nature of this employment in their business plans. 
Our case file review showed that SSA subsequently denied one EN applicant in April 2010 because it had not provided "a clearly elucidative business plan for assisting beneficiaries in finding and retaining employment with a goal toward self-sufficiency." Yet of 9 RFP submissions by ENs approved by SSA in March and April 2010 that indicated they would directly employ ticket holders, 7 were not required to provide a business plan because they provided one of the other allowable proofs of qualifications: documentation of certificates, licenses, or other credentials. As a result, SSA lacked information to assess whether the nature and extent of the proposed direct employment were consistent with the program's purpose. In addition, SSA does not have clear and specific criteria to clarify the RFP requirements and help staff responsible for reviewing EN applications assess whether an applicant's documentation of qualifications is adequate. While the RFP requires an applicant, if submitting a business plan, to clearly demonstrate expertise and/or experience in providing employment services and/or supports relevant to the requirements of the RFP, there is no explicit requirement for all EN applicants to demonstrate experience working with people with disabilities or in providing the specific services listed in their applications. SSA staff told us they use the criteria from the RFP, their judgment, and their knowledge of the Ticket law to assess qualifications. One SSA official said that because a team of only three people is responsible for reviewing EN applications, staff learn on the job. If they have questions, they ask other staff or their supervisor. However, without clear and specific criteria, we found staff did not always hold applicants to the same standards.
For example, while one employee reported reviewing EN qualifications against the EN’s proposed services in the submitted RFP, the 38 applicant case files we reviewed for EN applicants approved and denied in fiscal years 2009 and 2010 indicated staff do not consistently link EN qualifications to promised services. We found 5 applicants who were denied explicitly because they could not demonstrate experience or expertise working with people with disabilities or in providing specific services, such as work incentives counseling, self-employment assistance, and supported employment. In contrast, 14 others who also did not demonstrate such experience or expertise were approved, according to the files. In one instance, an applicant approved by SSA in August 2009 indicated in its RFP submission it planned to provide career consulting, job placement, supported employment, as well as various other services, but submitted a beauty institute license as its only proof of qualifications to provide such services. SSA has achieved modest improvements in Ticket program participation for ticket holders and ENs under the revised regulations finalized in 2008, and we are encouraged that in recognition of program weaknesses, the agency is considering various improvements. However, at this time, the agency still lacks critical management and oversight mechanisms to assess whether the program is achieving its original purpose, and ultimately, whether the program is viable. SSA is considering studying ticket holders’ exits from the rolls following the implementation of the 2008 regulations; however, it is unclear whether the agency will follow through with this effort. It also has not collected adequate information on service provision that could help the agency and policymakers analyze program trends, including the increasing prevalence of sharing SSA ticket payments with ticket holders. 
In this regard, SSA is not well positioned to assess the long-term success of the program or whether service approaches, such as sharing payments with ticket holders, are consistent with program goals. Moreover, without regular reviews of ticket holders' timely progress toward reducing dependence on benefits, they may remain exempt from CDRs, regardless of whether they are in fact moving toward self-supporting employment. Even with resumption of these reviews, SSA may be unable to keep pace with the volume of reviews and their reliance on self-reported information raises questions about accuracy. Inadequate monitoring of ticket holders' progress raises program integrity concerns and could result in benefit payments to beneficiaries who may no longer be eligible. Further, absent assurance of EN quality and sustained oversight of EN performance, ticket holders could encounter ENs providing services or information that are inconsistent with the program's purpose of reducing or eliminating dependence on benefits. Ultimately, SSA must balance its efforts to increase participation in the program with a commitment to outcome-oriented results that emphasize reducing beneficiaries' dependence on benefits. Without improvements to existing management tools and oversight procedures in the Ticket program, SSA will not be able to provide reasonable assurance that, in a time of increasing fiscal challenges, limited tax dollars are being effectively used to achieve these important program objectives.
To inform assessments of the program's cost and effectiveness and enhance SSA's oversight and monitoring of ENs and ticket holders, we recommend that the Commissioner of Social Security take the following four actions: (1) prioritize and carry through with a study of participating ticket holders' exits from the rolls since revisions to the program's regulations took effect in 2008; (2) adopt a strategy for compiling and using data on trends in employment network service provision to determine whether service approaches, such as sharing SSA ticket payments with ticket holders, are consistent with program goals of helping ticket holders find and retain employment and reduce dependency on benefits (for example, SSA could revise existing tools to compile information on service approaches used by all ENs); (3) develop a strategy to ensure on-time completion of timely progress reviews of ticket holders and take steps to ensure the accuracy of information used to make timely progress determinations; and (4) move forward to develop EN performance measures consistent with the requirements of the Ticket law. We provided a draft of this report to the Social Security Administration. In its written response, reproduced in appendix IX, SSA agreed with three of the five recommendations in our draft report, including a recommendation that the agency develop systematic mechanisms for reviewing the qualifications of prospective ENs. SSA also offered alternative language for the wording of two other recommendations. With regard to our recommendation to prioritize and carry through with a study of participating ticket holders' exits from the rolls since revisions to the regulations took effect in 2008, SSA stated that the agency already has plans to study the effects of the revisions on the Ticket program. However, as we discuss in the report, SSA's tentative plans to study exits from the rolls, in particular, have not yet been undertaken and depend upon the results of other planned research.
We are encouraged that SSA intends to conduct this research. However, we continue to believe that prioritizing and carrying through with a study of ticket holders' exits from the rolls is important and that, without such information, an accurate and complete assessment of the program's effectiveness cannot be made. With regard to our recommendation that SSA develop a strategy to ensure on-time completion of timely progress reviews of ticket holders and take steps to ensure the accuracy of information used to make timely progress determinations, SSA stated that it has a strategy in place, noting that it restarted the timely progress reviews in November 2010. As we discuss in the report, SSA began resumption of timely progress reviews for ticket holders due for review in November 2010. However, according to SSA, it did not carry out the initial step in the review process for these ticket holders until February 2011. Moreover, SSA estimates between 13,000 and 22,000 ticket holders will be due for timely progress reviews each month of the first year of resumption. Given SSA's current rate of processing the reviews and the volume of additional reviews which are imminent, we continue to believe there is potential for a significant backlog in completing these reviews. SSA also stated that the agency will review a random sample of beneficiaries' cases to ensure the accuracy and reliability of information they compile when making timely progress review decisions. We welcome SSA's review of beneficiaries' cases, but continue to be concerned that SSA may not have reliable information on the front end to make timely progress determinations. Given that timely progress reviews are intended to be used as a key program integrity tool—to ensure appropriate exemptions from continuing disability reviews—we continue to believe that SSA needs a strategic approach to ensure the promptness and accuracy of timely progress determinations.
SSA agreed with the recommendation we made in our draft report that the agency develop systematic mechanisms for reviewing the qualifications of prospective ENs. After reviewing and providing comments on our draft report, the agency posted a new Request for Quotation on April 27, 2011. This new Request for Quotation, which replaces all previous RFPs, requires each EN to submit a comprehensive business plan and includes more specific criteria for assessing EN qualifications. We believe that this satisfies the intent of the recommendation we made to the agency and should, if properly implemented, improve EN oversight; thus, we have removed the recommendation from our final report. SSA also provided technical comments, which we incorporated into the report where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Commissioner of Social Security, appropriate congressional committees, and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix X. Our review focused on (1) ticket holder and employment network (EN) participation over time, (2) service approaches used by ENs, and (3) the Social Security Administration’s (SSA) policies and processes for evaluating ticket holders and ENs. To answer all of our research objectives, we reviewed relevant federal laws and regulations, and SSA’s Program Operations Manual System for the Ticket program, as well as other written program policies and procedures. 
We conducted interviews with SSA officials from the Office of Employment Support Programs (OESP), SSA's contracted Ticket program manager, and SSA's contracted Ticket program recruitment and outreach manager to learn about their various roles and responsibilities and key management and oversight functions, including approving ENs; reviewing individual work plans, ticket assignments, and EN annual periodic outcome reports; reviewing and processing EN requests for payment; and reviewing the timely progress of ticket holders. We also learned about the processes and management of the Ticket program manager's call center for beneficiaries. We conducted interviews and case file reviews for selected ENs and state vocational rehabilitation agencies (VR) that opted for the EN payment system. Overall, the scope of our review was generally limited to ENs, including VRs that opt for the EN payment system, although we examined changes in the number of ticket holders using tickets with VRs paid through the traditional SSA Vocational Rehabilitation Reimbursement Program over time. During our review, we also consulted with outside researchers, disability advocacy organizations, and other stakeholders. Specifically, we interviewed representatives of Mathematica Policy Research; the American Association of People with Disabilities; Consortium for Citizens with Disabilities; Easter Seals Inc.; Goodwill Industries International, Inc.; National Alliance on Mental Illness; National Council on Independent Living; and the World Institute on Disability. For background purposes and to better understand the various roles and functions of entities related to the program, during our design phase, we interviewed representatives of two state Protection and Advocacy programs, a state Work Incentives Planning and Assistance project, and an SSA regional Ticket coordinator.
During this phase we also contacted SSA's Office of the Inspector General, the Congressional Research Service, the Congressional Budget Office, the National Council on Disability, and the Social Security Advisory Board to identify any related work under way in this area. To learn how ticket holder and EN participation in the Ticket to Work program has changed over time, we obtained and analyzed data on eligible ticket holders and ENs approved by SSA from fiscal year 2004, the year in which the Ticket program was fully implemented, through July 2010. Specifically, to learn about ticket holder participation, we obtained data from SSA's Disability Control File and Comprehensive Work Opportunity Support System, for each of these years, on the universe of ticket holders and on those who had assigned their tickets to ENs. Similarly, to learn about EN participation, we obtained data from the same systems, for each of the years mentioned above, on ENs with SSA-approved contracts, assigned tickets, and payments from SSA. For the purposes of analyzing EN participation, we did not examine VRs with which ticket holders use their tickets. To assess the reliability of the data we obtained from SSA, we (1) reviewed existing documentation related to the data, (2) interviewed knowledgeable SSA staff about the data, and (3) tested the data for completeness and accuracy. Our data analyst followed up with SSA staff on an ongoing basis to clarify and resolve potential discrepancies she encountered with the data. Based on these steps, we have found these data to be sufficiently reliable for the purposes of our analysis. We also interviewed SSA officials, disability advocacy organization representatives, and employment network representatives, and reviewed studies on ticket holder participation, to learn about factors influencing changes in participation.
To learn about service approaches used by ENs, between July and September 2010, we interviewed representatives of 25 ENs, including 20 ENs among those with the largest payments in fiscal year 2007, the year prior to implementation of the new program regulations, and fiscal year 2009, the most recent year for which we had full data. Based on preliminary data from SSA, we selected the 20 ENs with the largest payments from SSA for our review of services provided by ENs, because we wanted to be able to report on services provided by ENs actually receiving payments from SSA, in effect, to provide a better sense of how government (taxpayer) dollars are being spent. In making this selection, we also determined that the amount of SSA payments received by these ENs made up an extensive share of the total payments SSA provided to all ENs. We subsequently received updated data from SSA, which we confirmed with our own data analysis, and found these ENs accounted for the 20 ENs with the largest payments in fiscal year 2007, the 19 ENs with the largest payments in fiscal year 2009, and the EN receiving the 22nd largest payment in fiscal year 2009. See appendix VIII for the ENs interviewed as part of this review. We conducted site visits to Arizona, California, Connecticut, Maryland, and Massachusetts to meet with 10 of these ENs, and we also interviewed representatives of 2 ENs that have no physical locations for delivering services. For our site visits, we selected ENs with a range of service approaches. For these interviews, we asked ENs about the services they provided to ticket holders, including the frequency of providing these services, services they most commonly provide, the geographic area they serve, and how their services had changed over time. We also asked them about strengths and weaknesses of different service approaches, and costs and incentives for participating in the Ticket program.
In addition to these interviews, we obtained and reviewed documents from SSA for each of the 25 ENs we interviewed for information on services provided by the ENs, as indicated in their request for proposal submissions and their annual periodic outcome reports. We also interviewed representatives of disability advocacy organizations, in addition to the ENs we interviewed, to gain their perspectives on the advantages and disadvantages of various service approaches used by ENs. To determine the distribution of ticket payments to ENs using certain service approaches in fiscal years 2007 and 2009, we categorized ENs based on the primary service approach they used. We also interviewed SSA officials to learn about SSA's efforts to compile and use information on trends in service provision. We did not assess the effectiveness of the different service approaches we identified being used by ENs in the Ticket program. To analyze the policies and processes SSA has to evaluate employment networks and ticket holders, we compared SSA's and the SSA-contracted Ticket program manager's written policies and procedures over key EN and ticket holder evaluation efforts to the Ticket program laws and regulations, and government internal control standards. We conducted in-depth interviews with OESP and Ticket program manager staff responsible for these key evaluation efforts, including the approval of ENs, ongoing evaluation of EN performance, and assessment of the timely progress of ticket holders who assign their tickets. To supplement our review of SSA's efforts to evaluate ENs for approval and ongoing performance, we obtained a nongeneralizable sample of case files of approved, denied, and terminated ENs to review proofs of qualifications submitted to SSA and EN performance information.
Specifically, we sampled files for: (1) 20 of the most recently approved ENs as of April 30, 2010; (2) 11 denied EN applicants, which constitute all applicants denied as of June 2010; (3) 17 ENs that had been put on notice by SSA of potential termination, some of which were subsequently terminated; and (4) the 25 ENs we interviewed, including 20 ENs among those with the largest payment amounts made by SSA in fiscal years 2007 and 2009. Within this sample, in order to assess SSA’s controls over approval determinations, we focused our review on the 38 case files for applicants approved and denied in fiscal years 2009 and 2010. We also interviewed ENs for their perspectives on SSA’s performance expectations and their responsibilities regarding the timely progress of ticket holders. Finally, an investigator from our Forensic Audits and Investigative Service team contacted selected ENs, posing as a fictitious employer or relative of a ticket holder, to test for potential vulnerabilities in program management and oversight. The investigator phoned 16 ENs, including 9 from among the 25 we interviewed and 7 ENs we identified using the online EN service directory, interviews, and e-mail alerts. We judgmentally selected ENs that advertised paying a portion of the ticket payment to ticket holders or providing financial incentives to employers, or whose services were unclear. The investigator called 8 of the 16 to clarify the services provided by the ENs. In five of the recordings or calls, the EN representatives discussed how work could affect benefits. In three of these, the EN representatives explicitly told the caller how to remain on benefits indefinitely while working. Although these results are not generalizable to all ENs, they illustrate potential vulnerabilities in program management and oversight.
Because of the program’s goal of helping ticket holders obtain and retain employment and reduce dependence on disability benefits, for inclusion in our report we focused on those portions of these three phone calls in which an EN representative discussed how to remain on benefits. The full transcripts of the three calls are provided in appendix VII. We conducted this performance audit from January 2010 to May 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: Ticket to Work Payment Structure for Employment Networks

EN payment amounts for 2010 (in dollars):
Under the milestone-outcome payment system:
SSI milestone payments: up to $3,960 total (up to 18 payments of $220/month)
DI milestone payments: up to $4,202 total (up to 11 payments of $382/month)
SSI outcome payments: up to $13,200 total (up to 60 payments of $220/month)
DI outcome payments: up to $13,752 total (up to 36 payments of $382/month)
Under the outcome-only payment system:
SSI outcome payments: up to 60 payments of $409/month
DI outcome payments: up to 36 payments of $711/month

Appendix IV: Range of Services Provided by Interviewed Employment Networks in 2009 and 2010

■ Provide financial incentive to employer to hire ticket holder
■ Directly employ ticket holder
■ Provide information on financial incentive to employer or help them apply for financial incentive
■ Provide benefit and/or work incentive counseling
■ Provide independent living services
■ Provide financial assistance or incentive to ticket holder
■ Provide medical and therapeutic treatment and services
■ Directly provide vocational assessment and evaluation
■ Develop ticket holders’ job seeking skills (e.g., resume writing, interview skills)
■ Assist ticket holder in starting a business or with self-employment
■ Provide services after the ticket holder is employed
■ Match ticket holder with specific jobs
■ Provide link to job search engines or database of job information
■ Assist ticket holder in identifying and accessing a variety of local support services (e.g., child care or transportation services)
■ Assess for and provide assistive technology (e.g., custom computer interfaces for persons with physical or sensory disabilities)
■ Provide supported employment services (i.e., ticket holders with severe disabilities are placed in competitive jobs with job coaches or trainers who provide individualized, ongoing support services to aid with job retention)

Nine of the 25 ENs interviewed, including the one pictured above, primarily provide services over the telephone. Five of the 25 ENs we interviewed primarily interact with ticket holders online, including this one, which advertises work-from-home job openings on its Web site. One EN relies equally on phone and online interaction to deliver services. SSA’s timely progress guidelines also include educational milestones, such as completing an additional academic year of full-time study or completing a 2-year program and earning a degree or certificate.

Of the 16 employment networks (ENs) called by an investigator from our Forensic Audits and Investigative Service team, 8 were contacted to clarify the services they provide (see app. I for more information on our scope and methodology). In five of the recordings or calls, the EN representatives discussed how work could affect benefits. In three of these, the EN representatives explicitly told the caller how to remain on benefits indefinitely while working. Because the program’s goal is to help ticket holders obtain and retain employment and reduce dependence on disability benefits, for inclusion in our report we focused on those portions of these three phone calls in which an EN representative discussed how to remain on benefits. The full transcripts of these three calls are provided below. Call 1: Caller is a GAO investigator phoning EN on behalf of his fictitious brother who is a ticket holder, to learn about the Ticket program and services provided by the EN.
The EN representative describes how the EN assists ticket holders in finding part-time employment and tells the caller a Social Security Disability Insurance (DI) ticket holder may collect full monthly benefits indefinitely as long as he remains under the substantial gainful activity (SGA) earnings level. (Whereupon, an outgoing call was placed by the GAO Investigator to an EN representative.) (Phone rings.) EN REPRESENTATIVE: Good afternoon, (inaudible), (name) speaking. How may I help you? GAO INVESTIGATOR: Yeah, hi, um uh, is this—what’s—is this a company that helps disabled people? EN REPRESENTATIVE: Yes. GAO INVESTIGATOR: Okay. Uh, I want to talk to somebody, if I could, um, about, um—my brother is disabled, and I’m trying to help him. He’s trying to find a job, and I want to see what kind of services your uh company provides. EN REPRESENTATIVE: Okay. One second. What’s your name? GAO INVESTIGATOR: . EN REPRESENTATIVE: ? Hold on. (20 second pause.) EN REPRESENTATIVE 2: Hi, this is from . How can I help you? GAO INVESTIGATOR: Yeah, hi. I’m trying to get some information, if I could. My, my brother’s disabled, and he—he wants to try to go back to work part time. EN REPRESENTATIVE 2: Okay. GAO INVESTIGATOR: And I’m trying to— EN REPRESENTATIVE 2: Is he receiving SSI and SSD—or SSD? GAO INVESTIGATOR: Yeah, he’s receiving um SSD. EN REPRESENTATIVE 2: Okay. So then he would qualify then. Because the program is called Ticket to Work program, and the program is for people that’s getting SSI and SSD. So he would qualify and what would happen is they would look for part-time work for that individual, and he would keep half of the benefits. The benefits would not get cut off. So he would work part time, and then it would supplement the benefits. Um. GAO INVESTIGATOR: Okay. Now, do you all help him find a job? EN REPRESENTATIVE 2: Yes, we do. GAO INVESTIGATOR: Okay. All right, all right. Well, that’s good. EN REPRESENTATIVE 2: Yes. 
GAO INVESTIGATOR: And what other type of services do you guys provide? EN REPRESENTATIVE 2: Um. They have um, direct—(name), are they still doing counseling? EN REPRESENTATIVE 3: (off phone) EN REPRESENTATIVE 2: Residential? EN REPRESENTATIVE 3: (off phone) EN REPRESENTATIVE 2: Okay. EN REPRESENTATIVE 3: (off phone) EN REPRESENTATIVE 2: Okay. Because the caller wants to know, like, if they have any additional services that they have. So, it’s um residential counseling. GAO INVESTIGATOR: Residential—what’s that, residential counseling? What’s that? EN REPRESENTATIVE: Okay. Um. Residential counseling is um for people, they train you, you get certifications and everything, to work in—yeah, you work at a group home, residential areas, um. EN REPRESENTATIVE 3: (off phone) EN REPRESENTATIVE: Yes. EN REPRESENTATIVE 3: (off phone) EN REPRESENTATIVE: Yes. EN REPRESENTATIVE 3: (off phone) EN REPRESENTATIVE: Okay. EN REPRESENTATIVE 3: (off phone) GAO INVESTIGATOR: You’re, you’re talking to somebody else, I’m not hearing what they’re saying. EN REPRESENTATIVE: Oh, okay. Yes, the program entails where you could work in the group homes, residential areas. And um for like adolescents and stuff like that. That’s for the residential training that they have. And then if he had— GAO INVESTIGATOR: So they teach you— EN REPRESENTATIVE: If he, if he had any prior work experience, what they’d do is they’d look for the jobs that they either have on their resume, if they have one, or um they’ll like set up whatever, set that interview up for him to get the job. Because with most of the people that come in, they never had jobs before. You know, they’ve just, you know, been on disability. So, you know, we’ll add additional things, and we have resume specialists here, we have the job developers. This is pretty much a company that’s dealing with people with disability. So in order for you to qualify for the Ticket to Work program, you have to be getting SSI and SSD. So. GAO INVESTIGATOR: Mmm. 
Okay. And how much do you all charge for these services? EN REPRESENTATIVE: This is free. This is free. This is funded by the government, so everything is free. So what he would—oh, okay. Hello? GAO INVESTIGATOR: Yeah. You’re saying it’s free? I mean, you’re not, I gotta think, you’re not doing it for free. Do the payments go to you or something, and then—— EN REPRESENTATIVE: Well, this is a government-funded program, so I don’t—you, when you come in, you don’t have to pay no fee. This is not a temp agency where you have to pay a fee. GAO INVESTIGATOR: Okay. EN REPRESENTATIVE: So this is a service where it’s funded by the government, and it’s services of—you know—to the community where they help people with disability find part-time work. Because if they—if you get any full-time work, then you know, they’re gonna cut you off. So we’re not offering you full-time work. We’re helping you find part-time work. GAO INVESTIGATOR: Mmm, okay. All right, yeah. Because that way he avoids getting his payment cut off? EN REPRESENTATIVE: No, that’s not going to happen. No. GAO INVESTIGATOR: Okay, okay. Well, that’s good. And, and what kinds of jobs are you talking about here? EN REPRESENTATIVE: Well, they have maintenance, um, janitorial. They have, um, a list of jobs. Um, and like I was, right, like I was saying you to before, if he worked before, then they can help him on the jobs that he has on his resume. GAO INVESTIGATOR: Okay. EN REPRESENTATIVE: So if he did any kind of security or maintenance, whatever he would have on his resume, that’s the type of job that he would—that they would find for him. And also, he would have to let them know what he’s looking for, too. Because they’re here to help him— GAO INVESTIGATOR: Oh, okay. EN REPRESENTATIVE: So they—he has to give him an idea, or whoever comes with him would have to give the job developers an idea of what kind of work he’s looking for. GAO INVESTIGATOR: Okay. All right. And how does it—how do we get this started? 
Does he have to come in there, or can he just—you know he’s—I’m trying to help him here a little bit, but— EN REPRESENTATIVE: Yes. Yes. GAO INVESTIGATOR: What’s next? EN REPRESENTATIVE: Okay. Yes. He can come in. The days for that is . GAO INVESTIGATOR: Okay. And, and what happens at that time? EN REPRESENTATIVE: When he comes in, he has to bring a resume if he has one. If he doesn’t, it’s not a problem. His Social and birth certificate, and that’s it. GAO INVESTIGATOR: Okay, okay. All right. And um—All right. So if he gets a job and he’s working and all that, I assume that eventually his benefits will be cut off? EN REPRESENTATIVE: No. No. They will not be — because this is the Ticket to Work program, so this is um not like uh real employment. This is like we said, we deal with people with disability, so we get them part time work only, that—it would supplement. His benefits would be supplemented, but it would not get cut off. GAO INVESTIGATOR: Okay. EN REPRESENTATIVE: Now, if he’s making enough money, or if he’s working a full-time job where they’re gonna you know—of course, they’re gonna say “Okay, well you might not need assistance any more.” But if it’s, you know, part-time, and it’s not too much money, and th-this is not full time, then yes, he would qualify. GAO INVESTIGATOR: Okay. All right. So as long as he doesn’t make too much money, he won’t get cut off. EN REPRESENTATIVE: Exactly. EN REPRESENTATIVE: Do you have the address here? GAO INVESTIGATOR: Um, no, why don’t you give that to me? EN REPRESENTATIVE: Let me know when you’re ready. GAO INVESTIGATOR: Yeah, go ahead. EN REPRESENTATIVE: Okay. The address is GAO INVESTIGATOR: ? GAO INVESTIGATOR: Okay. All right. Good. Well, thank you very much. You’ve been real helpful. EN REPRESENTATIVE: You’re very welcome. GAO INVESTIGATOR: All right. Bye-bye. (Whereupon, the call was concluded.) 
Call 2: Caller is a GAO investigator phoning EN on behalf of his fictitious brother who is a ticket holder to learn about the Ticket program and services provided by the EN. The EN representative describes how the EN assists ticket holders in finding employment and tells the caller that a DI ticket holder may collect full monthly benefits indefinitely as long as he remains under the SGA earnings level. (Whereupon, an outgoing call was placed by the GAO Investigator to an EN representative.) (Phone rings.) EN REPRESENTATIVE: Ticket to Work, speaking. GAO INVESTIGATOR: Yeah, hi. This is ? EN REPRESENTATIVE: Yeah, absolutely. GAO INVESTIGATOR: Um, okay. Listen, I’m calling—I got your number off the EN directory. EN REPRESENTATIVE: Yeah, okay. GAO INVESTIGATOR: I’m calling on behalf of my brother. EN REPRESENTATIVE: Okay. GAO INVESTIGATOR: He’s disabled, and it looks like he’s going to try to get back to work. EN REPRESENTATIVE: Okay. GAO INVESTIGATOR: So, um, I’m trying to figure out what you guys do. EN REPRESENTATIVE: Um, well, let me ask you. Does he have any, uh, prior work history? GAO INVESTIGATOR: Yeah, yeah. He’s got experience working in, you know, office-type work. EN REPRESENTATIVE: Oh, really? GAO INVESTIGATOR: Administrative type stuff. Uh-huh. EN REPRESENTATIVE: When’s the last time that he worked? GAO INVESTIGATOR: It’s been like a year and a half, or so. EN REPRESENTATIVE: Yeah, that’s not a problem. Um. GAO INVESTIGATOR: What kind of jobs—do you have those kind of jobs? EN REPRESENTATIVE: Well, we don’t have a magic hat, you know? What we’re going to do—our position here is to, you know, work with our clients in—on a partner arrangement, to where we assist them, uh, in giving them job leads and helping them through the application process, and uh, help them through—you know—with interviewing, uh, tools and skills if they require that. Um, but we don’t—we’re not in a position where we simply go out and just get jobs for people. 
We don’t find that, uh, that it has a very high success rate, uh, simply because, um, because the individual that’s getting the job, they’re the one that has to perform. And they have to follow through. GAO INVESTIGATOR: Okay. EN REPRESENTATIVE: Well, what I will do is, we sign people on, and what I do is I go through and I create resumes for them, or update their current—or older resumes, help try to fill those gaps that are missing, so that they’re—they look proper when their employer looks at it. I help my clients do cover sheets to send out along with their resumes for, you know, job applications, and, uh, basically try to—and then I send them job leads all the time on an ongoing basis. So that’s one that’s really important, but it’s also important that the client does it as well. GAO INVESTIGATOR: Okay, right. EN REPRESENTATIVE: So it’s a partnership. I mean I need to see that the person is working with me, so that I know that, you know, my time that I’m investing in them, it’s gonna pay off, not so much for me, but for them in the end. Because it takes that individual to stay employed. I can’t, you know, call them every morning and tell them to get up and go to work. And so they have to have initiative on their own. And that’s how I really determine, really how much effort that I’m putting into each client, is whether they’re participating on their end as well. GAO INVESTIGATOR: Okay, well, he’s not lazy. He just was not physically able to—you know, he’s got a heart condition. That’s what the problem was. EN REPRESENTATIVE: Oh, I see. He was not physically able to do what? GAO INVESTIGATOR: Well, it was kind of just stressful for him, you know? I mean, you know, he gets—he just can’t take a lot of stress, basically. EN REPRESENTATIVE: Uh-huh. So as far as looking for jobs, or as far as maintaining jobs? GAO INVESTIGATOR: Yeah, probably maintaining jobs. EN REPRESENTATIVE: Uh-huh. GAO INVESTIGATOR: But now what do you all charge for your services? 
EN REPRESENTATIVE: Nothing. It’s free. The services are free, so long as the individual is eligible for the Ticket to Work program. GAO INVESTIGATOR: Yeah, he’s got the ticket. EN REPRESENTATIVE: Yeah, see, so. And if he’s already got, you know, previous job skills, it’s probably something that we’d be able to help him with. But he needs to really, you know, determine, you know, to what degree he’s able to work, or even wants to work. Because with any given situation, I mean, an employer’s gonna want to see performance, plain and simple. GAO INVESTIGATOR: Right. EN REPRESENTATIVE: And if—and if the individual is not performing, then it’s likely that they’re going to lose their—that position. GAO INVESTIGATOR: Right. He doesn’t have to work full time, though? EN REPRESENTATIVE: No, not at all. He can work part time. Um, but those—those jobs are—what—they’re probably more difficult to find, just because most employers are looking to fill a position, as opposed to finding two people to fill a position. GAO INVESTIGATOR: Mmhm. EN REPRESENTATIVE: But there are part-time jobs out there. I have a lot of clients that come to me and say “You know what, I don’t think I can work full time.” And so we just—we hit the dusty trail, and we just start hammering away, and looking until we find something that actually suits them. And the big thing is, really is, you know, what type of work that they’re looking to do. The clerical work, um, uh, I can find part-time clerical work, but in most cases it’s going to be in an office environment, a medical environment, or, uh, like an intake environment, like bringing in new memberships, like at clubs and stuff. And so all of those are going to have a certain degree of stress. I mean, no matter what. Because they’re multitasking. They’re having to greet people as they’re coming in, they’re having to answer the phone calls, they’re having to file and input intake information. So there’s a certain degree of stress with any of them. 
The ones that you want to stay away from most certainly are the law firms. The law firms are just—they’re chaotic. And I’ve had—I’ve placed people in those jobs before, and uh, and they don’t normally pan out, especially with people that have, uh, any type of mental disability. Um, it just gets way overwhelming for them. And it’s not like they don’t know how to do the work; it just becomes something that’s so overwhelming that it just becomes a stressful situation. GAO INVESTIGATOR: Yeah. I mean, it’s not the mental part for him. It’s more that the stress affects him. You know what I mean? EN REPRESENTATIVE: Right, yes. So, and it does. Stress affects us both mentally and physically. And, uh, so what it would be is just a means of being able to take the time, you know, look around, and interview jobs as well as they interview you, and find something that, you know, your brother feels like he would be comfortable dealing. And then all you can do is try it. And if it feels—if it works, then it does. And if not, then it doesn’t. And the Ticket to Work program is kind of designed—what benefit is your brother collecting? SSDI or SSI? GAO INVESTIGATOR: Disability. EN REPRESENTATIVE: Oh, Disability. So, so you’ve got all the perks that go along with the Ticket to Work program. There’s a—you can—you can earn up to $1,000 a month, and it doesn’t affect your benefit at all. GAO INVESTIGATOR: Oh, wow. Okay. EN REPRESENTATIVE: Yeah, so you can work basically any part-time job that’s being offered, for you know, from $7.50, which is minimum wage, up to around $10 or $11 an hour working part-time, and you’re not gonna exceed that. GAO INVESTIGATOR: And how long—I mean, if he gets a job and continues to work, I mean will—eventually will he be off of the Ticket to Work program? EN REPRESENTATIVE: No, no. It’s an ongoing thing. I mean, he’ll stay with us until he unassigns his ticket. GAO INVESTIGATOR: Oh, wow.
EN REPRESENTATIVE: And what it is, basically, is—the Ticket to Work program is designed—I don’t know if you’re aware of continuing medical reviews? GAO INVESTIGATOR: Yeah, right. I mean, periodically— EN REPRESENTATIVE: Yeah, exactly. And those are one of the safeguards that—when you’re under the Ticket to Work program, those are basically put on hold. So they’re not subject to that anymore. And the service continues. So say your brother goes to work, and then that particular job doesn’t work out. Well, then he just calls (name) back up, and says “You know what, (name)? That one didn’t work out,” for, you know, whatever reasons. “I decided it just wasn’t a good fit,” or “It became too stressful,” or whatever. Then we just start again. GAO INVESTIGATOR: Mmhm. Yeah, but if he gets into a job that seems to work for him, and it’s not too stressful and—I mean, he can just continue to do that indefinitely, huh? And still receive the benefits of both Ticket to Work and—and—disability? EN REPRESENTATIVE: Exactly. Yeah. It’s a win/win situation. What it basically is, is the ticket—it’s like, if your brother had no prior work experience at all, they allow you like a trial work period, where it’s 9 months and you can make as much as you want and it doesn’t affect your benefit at all. And then after that, then it starts to affect your SSDI. And if you— and they consider anything over $1,000 a month substantial gainful activity. And if you were to go over that $1,000 a month, they would take the cash benefit away from your brother. And so my job is to—to look at what portions of the program are still available to your brother. He may have used those trial work months without ever knowing it. It goes from the date that he’s eligible to receive the benefit, or the date that he’s receiving the cash benefit. Any month that he worked over $1,000—or over $720 a month in gross income counts as one of those trial work months. 
And those really aren’t important so much like in your case, because your brother doesn’t want to go to work full time. So it’s not gonna be something that’s applied. What’s important for your brother to know is that right now, as of 2010, he can go out, work any job that he wants so long as he stays under the $1,000 a month, he gets his cake and eat it too. He gets the—he gets his wages, and he gets his full SSDI benefit, and the medical, and everything that goes along with it. And that can—that can go from today until your brother retires, or whatever. You know what I mean? GAO INVESTIGATOR: Mmhm. EN REPRESENTATIVE: Nothing’s gonna be affected. GAO INVESTIGATOR: Okay. EN REPRESENTATIVE: And that’s what most people come to me for, they—they come to me and say “You know what? I don’t want to lose my cash benefit. How do I do that?” And—and that’s exactly how you do that. SSI’s a lot different than what the SSDI is, but the SSDI has all the benefits of, you know, being able to work up to that 1,000 a month and not affect anything. GAO INVESTIGATOR: All right. So the problem really is, I mean, if he ended up working full time and making too much money, that’s where the problem comes in, huh? EN REPRESENTATIVE: Yeah, exactly. What ends up happening is that, you know, once he goes over the SGA, then Social Security looks at it and they go “Oh, hey, look. This guy’s working at—now he’s making $1,500 a month, or $2,000 a month.” They then look at that, and consider that self-sufficient in the eyes of the government. And—and then they will eliminate the cash benefit. But all of his medical and everything stays in place. That—that will continue, uh, I think it’s like 93 months. It’s like 8 years, I think it is, it continues. And then at some point that would be affected, but that’s only if he’s working above the substantial activity, which is over $1,000 a month. 
But the—I think most—you know that, to get into this—either they’re going to go full at it, and they’re fully capable, physically and mentally, to go back into the work system full time and not worry about the SSDI, because they can make much more working full time. Or they have the other disposition, whereas “I don’t think I’m ever gonna wanna work full time. I just want something to supplement my benefit.” GAO INVESTIGATOR: I see. Okay. EN REPRESENTATIVE: That’s the two sides of the coin. That’s basically the only two sides that are there. One is you’re either satisfied supplementing, or you apply yourself to the point where you just simply get off of it and you’re happy because you’re making—how much is your brother’s SSDI amount per month? GAO INVESTIGATOR: Oh, man. I’ve got to check with him. I’m not even sure. EN REPRESENTATIVE: Yeah. GAO INVESTIGATOR: I mean, I’m helping him out, but you know, I don’t know all his affairs, you know what I mean? EN REPRESENTATIVE: Right—yeah, yeah. And the thing of that is, is people that are only getting—if you have—SSDI is based on work history. So if you’ve got a lot of work history, then it means you paid in a lot to Social Security, and that’s what dictates what that cash benefit is from SSDI. So, you know, I’ve got some people that come to me and they say “I’m collecting $2,500 a month on SSDI.” And I’m like “Why the Hell would you want to go to work?” GAO INVESTIGATOR: Yeah. EN REPRESENTATIVE: You know what I mean? Those are hard cases, because I have to go out and find that person a job that makes—that wants to go to work full time, that makes more than that $2,500 a month. Otherwise it doesn’t make any sense to get off the benefits. Just like being on unemployment. If you’re making, you know, you know, $2,000 a month on unemployment and you can’t find a job—full- time job that pays you more than that, what’s the incentive to get off of it? Financially, it doesn’t make any sense. 
So those that are below, say, $1,000 a month on that SSDI benefit, if they want to go to work full time it makes sense, because they can make $3,000 a month or $2,000 a month, and who cares about the 900? You’re already 1,000 ahead of the game, plus you have your medical. GAO INVESTIGATOR: Mmhm, mmhm. EN REPRESENTATIVE: But those that are making you know, $1,000, maybe $1,500 or so on the cash benefit, and—and they’re not able to work full time, then it benefits them just to work part time and supplement that SSDI, and be happy with that. But then that’s $1,000 a month, or $800 a month, or whatever, in your pocket every month, and not have it affect your benefit, you know? GAO INVESTIGATOR: Mmhm, mmhm. All right. Well, sounds good. So the next thing for him to do for you all would be what? EN REPRESENTATIVE: Um, uh. What I would want to see is really a detailed picture of what his past work history was. And, uh, what positions that he held, and for the lengths of time that he held. GAO INVESTIGATOR: Uh-huh. EN REPRESENTATIVE: And then to find out what his cash benefit is, so that we know what we’re working with. Um, and if he’s interested in putting himself back to work part time, then I can take that—most of that information I can get over the phone, um, and kind of—kind of put together a little picture for myself of—of, you know, where your brother is on his benefit, and what his past work history and stuff is. Then I can call and find out whether his ticket’s available for assignment, which I’m sure it is. Has he gone to any DVRs [state vocational rehabilitation agencies], or any other employment networks at this point? GAO INVESTIGATOR: No—no. EN REPRESENTATIVE: Yeah. And how long has he been on the cash benefit, receiving the SSDI? GAO INVESTIGATOR: Um, it’s been maybe a couple years, maybe. EN REPRESENTATIVE: Yeah, okay. GAO INVESTIGATOR: Somewhere in that neighborhood. EN REPRESENTATIVE: Yeah. 
And then if he’s really interested in, you know, going out and finding himself a job, then, you know, I can schedule an appointment. He can come in, we can fill out the paperwork. There’s only a few forms to fill out. Um, and then we can go ahead and start with preparing, you know, resumes and start the job searching process. And it’s basically just an ongoing thing. Every single day, I have a list of clients that are looking for employment. I go through probably 50 or so job sites that are offering employment, and try to match people up. And at the same time, they’re looking also, you know what I mean? To see what’s out there. And I suggest that they do, just because I have people come to me and go “I want a job in data entry.” And I go, “All right, but do you know how many of those jobs are out there and what they’re looking for to fill those positions?” And if they don’t, then they get restless with me, and they go “Hey, how hard can it be?” Well, it’s not hard. There’s thousands of data-entry jobs out there. But each one of those data-entry jobs are looking for specific skills that they want to fill. And some of my clients that want to do that, they want to be in a situation where they’re not pressured, where they’re not dealing with the public so much. Um, but those types of companies are like coding companies, and—like medical coding and billing companies. That’s data entry. Well, you need to be certified to do that. GAO INVESTIGATOR: Gotcha. Well, listen. Let me—let me have him give you a call. EN REPRESENTATIVE: That would be perfect. GAO INVESTIGATOR: I was just trying to kind of, you know, screen through some of these, because there’s— EN REPRESENTATIVE: Exactly. GAO INVESTIGATOR: —got just a bunch of numbers off the directory. All right. Well, thanks very much. I appreciate it. EN REPRESENTATIVE: You’re very welcome. GAO INVESTIGATOR: All right. Bye. EN REPRESENTATIVE: Bye. (Whereupon, the call was concluded.) 
Call 3: Caller is a GAO investigator phoning an EN with an automated recording, to learn about the Ticket program and services provided by the EN. The recording describes how a DI ticket holder may work part time and collect full monthly benefits indefinitely as long as he remains under the earnings limits. (Whereupon, an outgoing call was placed by the GAO Investigator to an EN representative.) (Phone rings.) EN RECORDING: You have reached . is a non-profit organization authorized to work with Social Security beneficiaries under the Ticket to Work program. Our costs are covered by government funds. No fees are charged to individuals with disabilities. Please listen to all of our menu options, and then press the designated key. For information on the types of home-based jobs available through [EN name], press one. For information on how you can work part-time and continue to collect Social Security disability benefits, press two. For information on the qualifications needed in order to hold a home-based job, press three. For information on the equipment you will need to work from home, press four. For information on how you can obtain training and equipment from your state vocational rehabilitation agency if you do not have the required skills or equipment, press five. For information on how to apply to for a home-based position, press six. (Call redirected after pressing 2.) EN RECORDING: About 70 percent of the home agents working through receive Social Security benefits. Most receive SSDI, which means they are allowed to earn up to $900 per month if they have a general disability and $1,500 per month if they are blind. As long as SSDI recipients remain under those earning limits and their disability does not improve, they can work part-time and continue to collect their full monthly SSDI check indefinitely. For those receiving SSI, the rules are different. Those on SSI will lose approximately 50 cents of their SSI check for every dollar earned from a job. More details are available on our Web site, . I’ll spell that. To return to the main menu, press zero. (Call redirected.) EN RECORDING: You have reached (Call redirected.) EN RECORDING: To apply for ’s home-based jobs, you must go to our Web site, which is . I’ll spell that. And complete an online application.
If you do not currently have a computer or Internet access, go to your local library or use a friend’s system to apply. If you are given a job offer, chances are very good that your state VR agency will provide you with the tools you need to perform the work. Again, the Web site for is . To return to the main menu, press zero. (Whereupon, the call was concluded.)

AATakeCharge Milestone, LLC
Adelante Development Center, Inc.
American Rehabilitation Corporation
ARG, LLC
Arizona Bridge to Independent Living
Asian Rehabilitation Service, Inc.
Bureau of Rehabilitation Services, Connecticut Department of Social Services
Bureau of Vocational Rehabilitation, Division of Career Technology and Adult Learning, New Hampshire Department of Education
Cerebral Palsy Research Foundation of Kansas, Inc.
Diagnostic Enterprises, Inc.
disABLEd WORKERS, LLC
Division of Vocational Rehabilitation, Vermont Agency of Human Services
Employment Options
Louisiana Rehabilitation Services, Louisiana Workforce Development, Louisiana Workforce Commission
National Telecommuting Institute, Inc.
Oklahoma Department of Rehabilitation Services
Relational DataSearch
Rewards for Working, Inc.
Service First of Northern California
TakeCharge Vocational Rehabilitation Services, LLC (AAA)
The Workplace CA
Ticket to Work Services, LLC
Tulare County Office of Education
Vocational Rehabilitation Services, Bureau of Rehabilitation Services, State of Indiana
Walgreen Co.

Jeremy Cox, Assistant Director, and Cady S. Panetta, Analyst-in-Charge, managed this report, and Kristen Jones made significant contributions to all aspects of the report. Other staff who made key contributions to the report include Wesley Sholtes and Margeaux Randolph. Luann Moy and Vanessa Taylor assisted with the methodology and data analysis. Craig Winslow provided legal assistance. Paul Desaulniers provided investigative assistance. Susan Aschoff and James Bennett helped prepare the final report and the graphics.
The Social Security Administration (SSA) pays billions of dollars in Disability Insurance and Supplemental Security Income to people with disabilities. The Ticket to Work program, established in 1999, provides eligible beneficiaries (ticket holders) with a ticket they may assign to approved service providers, called employment networks (ENs). ENs are to provide services to help ticket holders obtain and retain employment and reduce dependence on SSA benefits. ENs receive payments from SSA once a ticket holder has earnings exceeding a set threshold. Due to low participation, SSA changed program regulations in 2008 to provide ENs and ticket holders with more incentives to participate. GAO examined (1) changes in ticket holder and EN participation over time, (2) the range of service approaches used by ENs, and (3) SSA's efforts to evaluate ticket holders and ENs to ensure program integrity and effectiveness. GAO analyzed SSA data, policies, and procedures, and interviewed representatives of 25 ENs, disability advocacy organizations, and SSA.

More ticket holders and ENs are participating in the Ticket to Work program since SSA revised regulations in 2008, but the overall participation rate remains low. The number of ticket holders assigning their tickets to ENs increased from about 22,000 in fiscal year 2007 to more than 49,000 as of July 2010. However, less than 1 percent of all ticket holders assigned their tickets to ENs, and SSA has not yet studied whether regulatory changes enabled more ticket holders to obtain employment and exit the benefit rolls. During this time, the number of ENs approved to serve ticket holders increased from 1,514 to 1,603, and SSA's ticket payments to ENs increased from $3.8 million to $13 million. However, 20 ENs, or less than 2 percent of those currently participating, have received the majority of total ticket payments from SSA. GAO found that ENs provide a range of services, including job search and retention assistance. 
Since the 2008 regulatory changes, which explicitly allowed ENs to pay ticket holders, an increasing number of ENs used service approaches such as sharing SSA's government-funded ticket payments with ticket holders. These ENs target ticket holders already working or ready to work, and accounted for a substantial and growing share of payments from SSA. Three ENs among those with the largest payment amounts reported providing limited or no direct services beyond passing back a portion of ticket payments to ticket holders who had sufficient earnings to qualify the ENs for payment. These ENs received a total of over $4 million in SSA payments--nearly one-third of all SSA payments--in fiscal year 2009. Two of these ENs passed back 75 percent of SSA's ticket payments to ticket holders and kept the other 25 percent. The extent of these trends is unknown because SSA does not collect sufficient information on service approaches across all ENs.

SSA lacks adequate management tools to systematically evaluate ticket holders and ENs. Since 2005, SSA has not consistently monitored or enforced ticket holders' progress toward self-supporting employment--a regulatory requirement. Ticket holders who show progress are generally exempt from medical reviews to determine their continued eligibility for benefits. Lack of systematic monitoring of timely progress has both program integrity and cost implications, such as the potential for ineligible beneficiaries to continue receiving benefits. During the course of GAO's review, SSA was beginning to resume the progress reviews, but it is too early to assess the effectiveness of these efforts. Moreover, SSA has not developed performance measures for approved ENs, as required by law, that can be used to assess their success in helping ticket holders obtain and retain employment and reduce dependency on disability benefits. 
Without such measures, multiple ENs communicate to ticket holders how to work and keep full disability benefits, despite the fact that the ultimate goal of the Ticket program is to reduce dependence on benefits (to hear audio excerpts of GAO's calls with selected ENs, see http://www.gao.gov/products/GAO-11-324). Finally, SSA's EN approval process lacks systematic tools to ensure quality, as well as clear and specific criteria for reviewing EN qualifications. GAO is recommending SSA take several steps, such as compiling service trend data and monitoring ticket holders' progress, to enhance program oversight. SSA agreed with two recommendations and offered alternative language for the other two to reflect actions it considers planned or under way.
Education’s changes to SIG requirements in 2010 have led to new responsibilities for the agency, states, and school districts. These entities all play key roles in the SIG award and implementation process, with Education supporting and overseeing state SIG efforts. Before awarding formula grants to states, Education reviews each state’s application and approves the state’s proposed process for competitively awarding SIG grants and monitoring implementation. As part of the state application process, states identify and prioritize eligible schools into three tiers:

Tier I schools. Receive priority for SIG funding and are the state’s lowest-achieving 5 percent of Title I schools (or 5 lowest-achieving schools, whichever number is greater) in improvement status.

Tier II schools. Secondary schools eligible for, but not receiving, Title I funds with equivalently poor performance as Tier I schools.

Tier III schools. Title I schools in improvement status that are not Tier I or Tier II schools.

After states receive SIG funding, school districts submit applications to states describing their SIG reform plans for eligible schools. Education has required that districts base their plans on an analysis of each school’s needs, called a needs assessment. After reviewing district applications, states distribute their SIG dollars using their approved competitive grant award process, giving priority to districts seeking funding for Tier I and Tier II schools. Education’s regulations and guidance require districts receiving SIG awards for Tier I or Tier II schools to implement one of four intervention models in each school. Select aspects of each model are as follows:

Transformation. Transformation schools must replace the principal, implement a transparent and equitable teacher and principal evaluation system that incorporates student academic growth, identify and reward staff who are increasing student outcomes, and provide increased student learning time, among other requirements. 
Turnaround. In addition to implementing many requirements of the transformation model, turnaround schools must use locally adopted competencies to screen existing staff and rehire no more than 50 percent of the existing staff.

Restart. The district must reopen the school under the management of a contractor, such as a charter school operator, charter management organization, or education management organization.

Closure. The district must close the school and enroll its students in a higher achieving school within a reasonable proximity.

Districts may choose to use contractors to implement aspects of their reform plans. Schools enacting a restart model are required to contract with an organization that will assume many of the decision-making and leadership functions in that school. Districts employing other models may also contract with external organizations for services that could include data analysis, teacher professional development, and efforts to create safe school environments. Our work notes the importance of screening potential contractors before awarding contracts, as well as regularly evaluating contractors to ensure they provide timely and quality services with government funds. In addition to reviewing district applications, states are also responsible for monitoring grant implementation. States make decisions about whether to renew funding for each SIG school for an additional year, based on factors such as whether schools meet annual student achievement goals that districts set for the schools. Pursuant to Education’s guidance, if a school meets its annual goals, the state must renew the school’s SIG grant. If a school does not meet one or more annual goals, Education’s guidance gives states the flexibility to consider other factors such as the “fidelity with which the school is implementing” its chosen intervention model. Education provides states with technical assistance and oversight regarding SIG implementation. 
For example, Education funds 21 Comprehensive Centers that help build states’ capacity to assist school districts and schools. Sixteen of these organizations serve states in designated regions, and 5 provide technical assistance on specific issues, such as teacher quality. In addition, Education funds Regional Educational Laboratories, a network of 10 laboratories that serve designated regions by providing access to applied education research and development projects, studies, and other related technical assistance activities. Education also monitors states’ implementation of SIG. This monitoring process consists of visits to selected states and several SIG districts and schools within the monitored states, followed by reports documenting any findings. States have an opportunity to respond to any findings before the release of Education’s monitoring reports. States have awarded funding to two cohorts of schools since the program was modified and expanded in 2010. In the first cohort, 867 schools received SIG funding to implement one of the four intervention models in SY 2010-2011, and in the second cohort 488 schools received funding to implement one of the intervention models in SY 2011-2012. Seven states have received waivers from Education to delay awarding funding to their second cohort of schools until SY 2012-2013 because of various issues, such as turnover of key staff in state educational agencies. The proportion of schools choosing each model was similar in both cohorts and, as shown in figure 1, most schools chose to implement the transformation model. Although most states have increased the amount of staff time devoted to SIG since the program was expanded, some states have struggled to develop the necessary staff capacity to successfully support and oversee SIG implementation because of budget constraints. 
In our survey, 29 states told us that they have increased the staff time devoted to SIG since they first applied for SIG funds in the expanded SIG program. However, officials from four of the eight states we visited—California, Nebraska, Rhode Island, and Texas—told us that because of budgetary constraints, the time staff could devote to administering the SIG program and monitoring district implementation was significantly limited. For example, officials in California said that as a result of the state budget crisis, the state legislature reduced the amount of SIG funds available for state administration from the allowable 5 percent to 0.5 percent, limiting the number of staff available to administer the program and monitor districts. Several state officials we spoke with also reported that their existing workloads made it difficult to focus on SIG. For example, in several states the program was administered by officials who also had responsibilities for other major education programs, such as Race to the Top. In addition, state officials sometimes did not have expertise in supporting school turnaround efforts. Officials from Education and several states and research groups told us that SIG required states to support local reform efforts to a much greater extent than they had in the past, and staff in some states had not yet developed the knowledge base to fulfill these responsibilities. Even when states were able to develop expertise in school reform and hire necessary staff, officials from Education told us that personnel turnover in many states made it difficult to retain such knowledge. For example, Rhode Island officials said they encountered difficulties filling vacated positions because many nearby states were also recruiting from the same small pool of qualified applicants. 
Several states increased their capacity through actions such as contracting with nationally recognized experts to help them run their grant competitions, establishing school turnaround offices, or hiring turnaround specialists that regularly handled an individualized caseload of SIG schools. For example, 18 states created new turnaround offices to help districts implement SIG, according to our survey. In addition to these state-level issues, many districts also struggled to develop the necessary staff capacity to implement successful school reforms. It was particularly difficult for schools to recruit and retain qualified staff members, according to many stakeholders, including officials from several states and districts we visited. They told us that SIG schools were sometimes in rural areas or needed staff to have expertise that was in short supply, such as experience with reform or specialized academic subjects. Among the 12 states that Education monitored during SY 2010-2011, Education found that 4 did not ensure that turnaround schools met requirements to remove at least half of the schools’ staff and hire new staff for those positions based on staff effectiveness. For example, Education found that one monitored district in Minnesota did not base hiring decisions on prospective teachers’ instructional effectiveness. In addition, Education found that three of the monitored states did not ensure appropriate replacement of the principal in turnaround or transformation schools. Moreover, some districts did not have staff with expertise in using performance and evaluation data—such as data on student performance—to inform plans for reforming schools and ongoing instructional improvements. Education officials said that, in many cases, school district staff were able to collect data, but did not have experience linking data to needed interventions. 
In addition, our review of the needs assessments districts were required to develop when planning SIG interventions showed that some were more extensive than others. Also, in one district we visited, the new teacher evaluation process did not include state assessment data on student achievement as one of the evaluation criteria, as required by Education. Several states, districts, and researchers identified promising practices for recruiting and retaining staff or improving data usage, such as developing “grow-your-own” leadership programs, conducting priority hiring for SIG schools, or hiring data coaches to help teachers collect and analyze student data. Districts also varied in their commitment to use SIG funds to enact major reforms. According to our survey, 35 of 51 states awarded grants to all or most Tier I applicants who applied for grants starting in SY 2010-2011, but several officials from states we visited and research organizations reported that some districts receiving SIG grants were not prepared to make significant reforms. For example, officials in one large school district we visited told us they followed turnaround model requirements to rehire no more than 50 percent of teachers at a SIG school. However, the district officials said they relocated the released teachers to other SIG schools in their district because those schools had almost all of the vacancies. Similarly, in two states we visited, district officials moved a school’s previous principal into another leadership position on site so that person could continue to work in the school even after a new principal was assigned. State and district officials also cited instances where districts chose their SIG model for reasons other than its likelihood of improving student success. For example, the superintendent in one district told us they chose the restart model because they considered it less restrictive than other models. 
Although many states responding to our survey told us that all or most of their transformation model schools were operating very differently after the first year of SIG, 33 states said that at least some of these schools were not. Figure 2 shows responses from these 33 states about whether inadequate action by SIG schools or districts was a reason the schools were not operating very differently. Education found problems with implementation of increased learning time requirements in about half of the states that it monitored during SY 2010-2011 and in both states for which it had completed SY 2011-2012 monitoring reports by February 2012. According to district officials, at least half the districts we spoke with will not have fully implemented new teacher evaluation systems by the end of their second year of SIG. In addition, during its SY 2011-2012 monitoring visit to Iowa, Education found that student growth was not always incorporated in new teacher evaluation systems, as required. Our analysis showed that increased learning time and teacher evaluation requirements were challenging because the planning needed to implement them was complex and time-consuming, and stakeholders, such as unions and parents, were sometimes reluctant to embrace the changes. Some districts struggled to develop increased learning time initiatives that would be sustainable after their 3-year SIG grant ended. More specifically:

Interventions required extensive planning. Effectively implementing increased learning time and teacher evaluations required extensive planning. Several stakeholders stressed the importance of carefully designing increased learning time schedules because, for the intervention to be successful, it must provide quality instruction rather than simply increasing the amount of poor instruction. 
Officials from several districts said they were unable to fully implement their plans for increased learning time at the beginning of the first year of increased SIG funding because, for example, they first needed to fully analyze their existing schedules and curricula and adapt them to meet SIG requirements. Officials from states and districts we visited often stressed that developing a teacher evaluation system is time-consuming because it requires districts to accurately and comprehensively identify, collect, and analyze information about teachers’ performance and students’ academic growth. In its final SIG requirements, Education had required schools implementing the transformation model to implement new teacher evaluation systems within the first year of the grant. In response to the challenges involved in planning and implementing teacher evaluation systems, Education allowed states to apply for a waiver to extend the planning period for this requirement, and 27 states applied for and received the waiver as of February 2012. Districts in states receiving these waivers must develop their evaluation systems during SY 2011-2012; pilot or fully implement them by SY 2012-2013; and use them to make decisions about retention, promotion, and compensation by SY 2013-2014. The timeline is the same regardless of whether the SIG schools in the district are from the first or second SIG cohort.

Stakeholders sometimes reluctant to embrace required changes. Implementation was also delayed or otherwise challenged by concerns from various stakeholder groups. Teachers and teachers’ unions were sometimes concerned about increasing student learning time or implementing new teacher evaluations in SIG schools, according to Education, state, and district officials. 
For example, these officials said unions were concerned about whether teacher evaluation systems that incorporated student academic growth could do so in a manner that would not penalize teachers working with the most challenging students. Such concerns sometimes led to delays in finalizing evaluation systems. In a few cases, officials told us that other stakeholder groups such as parents and school board members were also resistant to SIG requirements. For example, an official in Virginia said that some schools trying to increase learning time had met resistance from parents because students often had jobs or responsibilities at home once the traditional school day was over.

Difficulty designing sustainable approaches for increasing learning time. State and district officials also questioned whether increased learning time initiatives would be sustainable after SIG funds were exhausted. For example, survey respondents from 26 states said the costs of increased learning time were unlikely or very unlikely to be sustainable after the SIG grant ends, compared with 10 states that reported the costs were likely or very likely to be sustainable. Rhode Island officials noted that increased learning time benefits students enrolled in SIG schools during the grant cycle, but state and local financial constraints will make it difficult to sustain the increased learning time for future students. Due in part to these concerns, one of the two districts in the state with SIG schools limited the amount of learning time it added in order to avoid significant cuts in this time after grant funding ends. While many officials stressed the complexity of effectively implementing these requirements, some states and districts that we visited found ways to address the challenges they posed. This was particularly true in districts that had started to plan for and implement similar school reforms prior to applying for SIG funds. 
Many officials from Education, states, and districts stressed the importance of stakeholder involvement while designing and implementing SIG reforms in order to enhance buy-in and strengthen reform initiatives. In order to increase students’ learning time without increasing teacher workloads and salaries, Education officials and researchers told us that a few districts were working with community partners to fund or staff additional learning time or were staggering teachers’ schedules so that students would be in class longer but teachers would not. In addition, a few states developed a sample teacher evaluation system that met SIG requirements so that districts could use it as a framework for developing their own systems. States often had limited evidence for making decisions about whether to renew schools’ SIG funding. For example, in our survey, officials from 10 states told us that they did not use schools’ achievement of annual goals to make grant renewal decisions after SY 2010-2011. According to state officials, at least half the states we interviewed did not have the annual student achievement data available at the time they had to make renewal decisions because assessment results only became available at the end of the summer. Officials from two of these states told us that timely access to annual achievement data will continue to be a problem in future years. In addition, even when these data were available, states frequently chose not to base their decisions on schools’ achievement of annual goals. Twenty-three of 44 states responding to our survey question said that, among schools that had their funding renewed, all or most did not meet their annual goals. Several officials from our site visits questioned the usefulness of annual goal data in determining whether progress was made, particularly because districts set their own performance targets. 
For example, California officials said they did not find annual goals data useful because districts often included generic annual goals in their applications for SIG funding instead of proposing goals based on schools’ unique circumstances. Regardless of whether annual goals information was available, states almost always considered “fidelity of implementation”—the extent to which the school is implementing the requirements of its intervention model—when making grant renewal decisions. However, states did not always base decisions about this criterion on extensive information. In our survey, 48 of 51 states identified fidelity of implementation as an important factor in their decision-making process, more than any other factor. Several states we spoke with said that qualitative information about implementation was important for assessing grant progress because the first steps of school reform, such as efforts to change school culture, do not always result in measurable student achievement gains. However, making this assessment can involve a high degree of subjectivity and states’ determinations were not always developed based on extensive interaction with schools or systematic monitoring of their implementation efforts. For example, officials in California told us they used fidelity of implementation as their key criterion for making grant renewal decisions, and that the primary method for evaluating this criterion was one telephone conversation with each district at the end of the year. Prior to those conversations, the state had limited interaction with most districts for the purpose of assessing their implementation and was unable to conduct SIG monitoring visits for budgetary reasons. In addition, a Virginia official told us the state used fidelity of implementation for making renewal decisions but would benefit from guidance on how to define and measure it. 
States were in some cases reluctant to discontinue SIG funding even when information they collected showed that schools were not implementing key requirements with fidelity. Several officials from states we visited said they renewed all schools’ SIG funding even if the schools were struggling to fulfill key SIG requirements because tight implementation timeframes made the officials reluctant to eliminate funding after the first year of the grant. In our survey, 21 states reported that half or fewer of their Tier I and Tier II schools were able to implement major aspects of their plan by the beginning of SY 2010-2011, such as extending the school day or having new staff in place. In the 19 cases where these states had made renewal decisions, the state renewed all or most grants. Furthermore, officials in several states we visited identified instances where they chose to renew schools’ funding despite significant problems at the district or school level, such as having administrators who were not committed to enacting major reforms or were not ensuring that planned reforms were fully implemented. For example, officials from Nevada said they renewed such grants after the first year because they did not want to negatively impact students and teachers when significant district-level problems were outside their control. Although Education reviewed states’ proposed grant renewal procedures through the state SIG application process, the agency did not provide written guidance after grant renewal challenges arose. Education required states to submit their renewal processes for review as part of their SIG applications. Nonetheless, in several state applications we reviewed, descriptions of renewal processes and criteria did not align with the practices the state actually implemented. 
For example, states that told us they were unable to use annual goals data to make renewal decisions had originally identified these goals as a key renewal criterion in their applications to Education. In its work with states, Education officials told us they found some had difficulty using annual goals data or fidelity of implementation and that the agency provided technical assistance to several states that asked for help. However, agency officials were not aware of how states ultimately addressed these issues, and said the agency has not provided any additional technical assistance on grant renewal. States renewed almost all SIG grants at the end of SY 2010-2011, and in some cases imposed conditions on schools for renewal. According to Education, 39 states chose to renew funding to every SIG school in their state. Eleven states and the District of Columbia chose not to renew funding to one or more schools, for a total of 16 nonrenewed schools overall, in some cases because of problems with fidelity of implementation. Several states we spoke with chose to renew grants with conditions or required changes. For example, officials in Ohio told us that struggling schools were required to take corrective actions in the second year of the 3-year grant and that their level of success in taking such actions will be a key criterion in future renewal decisions. In addition, New York officials renewed all grants after SY 2010-2011 under the condition that transformation and restart schools would implement state and federal SIG teacher evaluation requirements by December 30, 2011. Once that deadline passed, state officials determined that no districts had met the requirements and suspended all SIG funding until they were met. In February 2012, the state commissioner reinstated funds to half of the SIG districts after determining that the districts had made the necessary changes. 
In our survey, 23 states reported that at least a few of their SIG schools were required to make major changes to their SIG plans as a condition of having funding renewed. Contractors provide a wide range of services with SIG funds, and school districts have often given contractors major roles in schools using the restart, turnaround, and transformation models. Education’s guidance identifies a clear role for contractors in schools using the restart model. Specifically, districts must hire a contractor to take over school operations. For example, in the Los Angeles Unified School District, the Partnership for Los Angeles Schools has been given full management authority over five restart schools. In contrast, Education allows districts with schools using the turnaround and transformation models—which include more than 90 percent of schools receiving SIG funds—to use contractors, but does not identify a specific role for them. Most turnaround and transformation schools we visited were working with contractors. Although in some cases turnaround and transformation schools used these contractors for minor tasks, in other cases the contractors played a major role in school operations. For example, in Virginia, the state required schools implementing the turnaround and transformation intervention models to use a contractor for a range of services that could include improving teacher performance, principal and management leadership, or changing school culture. Among the school districts we visited, several planned to spend significant amounts of their SIG grants on hiring contractors. These included districts using the restart, turnaround, and transformation models. For example, a district with one SIG school using the transformation model planned to spend about $450,000 for contractors in one school year. 
In addition, a district that we visited with three SIG schools planned to spend approximately $1.5 million on contractors over the 3-year period for services that included data analysis and curriculum planning. Our prior work and reports regarding services acquisition have shown the importance of building safeguards into acquisition processes to ensure accountability. These leading practices include screening potential contractors prior to award using a thorough selection process that evaluates their ability to achieve results and the contractors’ past performance. Once a contractor has been selected, officials should routinely review contractors’ work to help ensure they are providing timely and quality services and to help mitigate any contractor performance problems. Education required and states reported requiring that potential contractors be selected after a thorough screening process. Education required that either states or districts screen contractors prior to contract award to ensure their quality. Although Education’s guidance does not provide specific criteria for approval, Education requires each state to describe in its state application how it will ensure that school districts screen contractors. Each of the eight states we reviewed required districts to describe their plans to screen contractors in their applications for SIG funding. In addition, states varied in how they approached contractor screening at the state level, either taking an active role in the process or delegating screening responsibilities to districts. According to our survey, 17 of the 51 respondents (the 50 states and the District of Columbia) developed approved lists of contractors from which districts could choose. For example, Virginia officials told us they enacted statewide contracts with four organizations, and strongly encouraged districts to choose one of those four organizations.
Ohio officials said they developed a list of approximately 100 state-screened organizations from which districts could choose, but districts were free to use other contractors, provided that they screened those organizations. States that we visited that did not develop a list of approved contractors reported requiring districts to screen contractors. For example, Texas officials told us they required all districts to use a formal competitive process in selecting contractors, which included a process to evaluate contractor proposals, in order to be approved by the state. Education’s monitoring protocols for the SIG program require the review of contractors in schools using the restart model, but they do not require review of contractors during contract performance for the other school improvement models. Education’s protocol for monitoring states’ SIG implementation asks whether districts have included accountability measures in the contract for restart schools and also asks for the district’s current assessment of the contractor. The protocol does not include a similar question for turnaround and transformation schools. States varied in their approaches to the review of contractors, and in some cases reported that they did not require that districts review contractors during contract performance. Among the eight states we spoke with, none assessed districts’ plans to review contractors in their SIG applications. In addition, several states reported not having any state-level review requirements. For example, Nebraska state officials said their districts conduct informal reviews of the contractors, but the state does not require reviews or provide districts with a formal process or metrics to assess performance. Similarly, in follow-up calls for our state survey, officials in several states said they do not require districts to review contractor performance and were unaware of whether districts conducted any reviews.
In contrast, Nevada officials told us they require districts to add accountability steps for contractors in each phase of work. Inconsistent review of contractors during contract performance reduces states’ and districts’ ability to ensure that they are receiving the services they have paid for. In our work, one stakeholder told us that in the absence of stronger guidance or oversight, the extent to which contracts include accountability measures is largely dependent on the knowledge and experience of the individual contract manager. Although some district officials in our site visits described efforts to include accountability measures or regular review in the contracts, others indicated that contractors are reviewed informally, if at all. Education’s guidance and technical assistance on SIG implementation was well received by nearly all states. In our survey, nearly all states responded favorably about Education’s guidance and various technical assistance offerings for SY 2011-2012. Most states reported that Education’s guidance and technical assistance were helpful and many reported they were very helpful (see fig. 3). In our survey, we also inquired about the amount and timeliness of guidance provided by Education. Forty-one states reported that Education provided about the right amount of guidance for the second year of SIG. In addition, 33 states responded that in SY 2011-2012, Education’s guidance was timely, allowing the state to meet its needs, while 14 states commented that the guidance was not timely. Although most states told us that Education’s guidance was helpful, some identified additional technical assistance that would assist with SIG implementation. In an open-ended question on our survey that asked about the types of additional guidance that Education could provide, 15 states indicated they wanted additional information about other states’ SIG implementation efforts that are working well. 
Several states that we met with also mentioned wanting more information on successful and sustainable implementation strategies, proven contractors, increased learning time strategies, and teacher/principal evaluation systems. To provide additional support and enhance information sharing among the states, Education has recently begun three new assistance efforts. First, Education selected nine states to participate in the SIG “implementation support initiative” as an optional technical assistance resource. Under this initiative, each participating state receives a visit from an Education representative as well as officials from the eight other participating states. These site visits have two purposes—first, to provide technical assistance to the states, and second, to enable states to engage in peer-to-peer information sharing. Education reported that it has used information from these site visits to produce targeted technical assistance reports. Second, in December 2011, Education began conducting monthly check-in calls with state officials to better manage SIG implementation. Each state was assigned an Education program officer responsible for providing oversight and technical assistance support, including outreach and monthly check-in calls. Lastly, Education launched the School Turnaround Learning Community—an online forum to provide states and districts with access to resources and to facilitate networking. According to Education, this initiative offers research-based practices and practical examples from states, districts, and schools for developing and implementing SIG. Education’s oversight strategy is to monitor all states during the 3-year period—starting with SY 2010-2011—in which the first cohort of schools will receive SIG funding.
In selecting states for on-site monitoring for SY 2010-2011, Education did not use a SIG-specific risk-based approach and instead used the existing Title I monitoring schedule. However, due to resource constraints, Education suspended its Title I monitoring and instead focused exclusively on SIG monitoring. Education also delayed SY 2010-2011 monitoring to allow states and districts time to implement SIG before beginning monitoring in February 2011. For SY 2010-2011, Education conducted on-site monitoring in 12 states, uncovering 28 deficiencies. At least one deficiency was identified in 11 of the 12 monitored states, with California and Pennsylvania having the most deficiencies, at seven and five, respectively. Half of the monitored states had deficiencies in ensuring appropriate district implementation of the increased learning time requirement. In addition, two states did not ensure that all SIG funds were used consistent with the SIG requirements. In SY 2011-2012, Education selected states using a risk-based approach tailored for SIG, based on factors such as the size of a state’s SIG grant. For SY 2011-2012, Education officials initially selected 12 states for on-site monitoring. As of February 2012, Education had issued SY 2011-2012 monitoring reports for Iowa and Florida, containing seven and two deficiencies, respectively. For example, in Iowa, Education found that funds were not used consistently with SIG grant requirements and that the state was not monitoring SIG as described in its approved SIG application. Education also set aside a portion of its oversight resources so that additional states could be selected for monitoring as more information became available. As of February 2012, Colorado and South Carolina were also selected to receive an on-site review.
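The risk-based selection described above can be sketched in highly simplified form. This is an illustration only, not Education's actual method: the state labels, grant amounts, and the weighting of prior findings are all invented, since the report identifies only grant size as an example factor.

```python
# Hypothetical sketch of a risk-based monitoring selection; all data invented.
# The report says Education weighed factors "such as the size of a state's SIG grant";
# prior monitoring findings are added here purely as an illustrative second factor.
states = [
    {"state": "A", "sig_grant_millions": 416, "prior_findings": 7},
    {"state": "B", "sig_grant_millions": 65,  "prior_findings": 5},
    {"state": "C", "sig_grant_millions": 130, "prior_findings": 0},
    {"state": "D", "sig_grant_millions": 20,  "prior_findings": 1},
]

def risk_score(s):
    # Illustrative weighting only: larger grants and more prior
    # findings yield a higher score and a higher monitoring priority.
    return s["sig_grant_millions"] + 25 * s["prior_findings"]

selected_for_onsite = sorted(states, key=risk_score, reverse=True)[:2]
print([s["state"] for s in selected_for_onsite])  # prints ['A', 'B']
```

A real selection would also hold resources in reserve, as Education did, so that additional states could be added as new information arrived.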
To maximize its oversight resources, Education plans to conduct some limited “desk monitoring” in five additional states in SY 2011-2012. The desk monitoring protocol is similar to the on-site visit protocol, but—unlike the on-site monitoring—does not include interviews with school officials. Finally, Education officials told us that they plan to monitor the remaining states in SY 2012-2013, and that these states represent a small percentage of SIG funds. Dramatic funding increases in a short period of time—such as those made to SIG—can subject federal programs to considerable financial risk. While states and school districts carry a large share of the responsibility for planning and implementing successful SIG reforms, Education also plays a critical role in supporting these efforts and mitigating risk through strong oversight and accountability. For example, it is important that Education have rigorous processes for reviewing state SIG applications, conducting oversight, and providing technical assistance when needed. The ability to successfully carry out these functions is vital to ensuring the long-term success of the SIG program and protecting taxpayer funds from waste and abuse. Although SIG has been challenging to implement, in part due to the short implementation timeframes we highlighted in our July 2011 report, Education has reviewed state SIG applications, distributed funds to states, begun its monitoring activities, and provided technical assistance. However, the agency’s guidance in some cases has not been sufficient to ensure that schools and contractors are fully accountable. For example, given the implementation issues we and Education’s monitoring have found, it is critical that states have rigorous SIG grant renewal procedures in place to identify schools that are not making progress. Education has provided limited guidance to states about how to make renewal decisions.
Some states are using highly subjective review processes to renew nearly all grants, often without key information on SIG schools’ performance. Until Education provides additional support about how states should make evidence-based renewal decisions when, for example, state assessment results are received too late to be factored into these decisions, schools that are not making progress may continue receiving SIG funds. In addition, although contractors are receiving large amounts of many schools’ SIG funds, Education has not ensured that states or districts review contractor performance during the terms of their contracts. Unless Education takes action to ensure that states or districts review contractor performance, districts may not receive an appropriate level of contractor services for their SIG funds and funds may not be well spent. To ensure that SIG grant renewal decisions serve to hold districts and schools accountable, we recommend that the Secretary of Education provide additional support to states about how to make evidence-based grant renewal decisions, particularly when states do not have annual student achievement goal information available at the time renewal decisions are made. To ensure that contractors hired with SIG funding are accountable for their performance, we recommend that the Secretary of Education take steps to ensure that the performance of SIG-funded contractors, including those in turnaround and transformation schools, is reviewed during contract performance. In developing such requirements, and to ensure that those reviews are targeted to contractors receiving large amounts of SIG funding, Education could consider setting a dollar threshold amount for contracts, above which contractor performance should be reviewed. We provided a draft copy of this report to the Department of Education for review and comment. Education’s comments are reproduced in appendix II.
Education generally supported our recommendation about SIG grant renewal and outlined how the agency is planning to address this recommendation. Education did not agree with our draft recommendation that it should require states to ensure that the performance of all SIG-funded contractors be reviewed, including contractors in turnaround and transformation schools. In its comments, Education said that it believed that existing provisions and requirements address this issue appropriately. For example, Education cited a federal regulation that requires districts to follow their existing procurement procedures, and noted that districts and states have their own requirements for evaluating contractors to ensure accountability. Education also said that the type of evaluation process needed for a contractor should depend on the contractor’s role, and that contractors used by schools implementing the turnaround or transformation models may be working on small, discrete projects and may require less extensive provider-specific reviews than contractors in schools implementing the restart model. We agree with Education that the need for performance reviews should be dependent on the specific role of the contractor, and we modified our recommendation to address some of Education’s concerns. Specifically, Education may wish to create a dollar threshold above which performance reviews are required. We continue to believe, however, that the current monitoring framework is inadequate. As noted in our report, schools implementing the turnaround and transformation models account for the overwhelming majority of SIG schools, and contractors operating in these schools are performing a range of functions, including some that are large or complex. In our view, there is a need for additional steps to ensure adequate review of contractor performance.
Furthermore, our work shows that as a practical matter, states varied in their approaches to contractor review, with some imposing no requirements on districts. Education says that it will clarify in existing guidance the requirement for SIG recipients to follow state and local procurement procedures. Education could use this opportunity to implement our recommendation through additional guidance on contractor performance reviews. In addition, Education implied that our report was based only on the first year of SIG implementation. This is inaccurate. We also conducted interviews with all eight states, reviewed SIG documents, received finalized survey responses, and interviewed Education officials several times during the second year of implementation, thereby enabling us to reflect activities beyond the first year. Based on the number and significance of deficiencies identified in Education’s SIG monitoring reports—including some completed during SY 2011-2012—as well as our own findings, we continue to believe that Education should take additional steps to increase program accountability. Education also provided technical comments that we have incorporated into the report as appropriate. We are sending copies of this report to relevant congressional committees, the Secretary of Education, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or scottg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This study’s objectives were to answer the following questions: (1) What, if any, aspects of the School Improvement Grant (SIG) program pose challenges to successful implementation? (2) How do U.S.
Department of Education (Education) and state guidance and procedures for screening potential contractors and reviewing contractor performance compare with leading practices? (3) To what extent are Education’s oversight and technical assistance activities effectively supporting SIG implementation? To meet these objectives, we used a variety of methods, including reviews of Education and state documents; a web-based survey of the 50 states and the District of Columbia; interviews with Education officials and stakeholders; site visits to and teleconferences with 8 states; and a review of the relevant federal laws, regulations, and guidance. The survey we used was reviewed by Education and several external reviewers, and we incorporated their comments as appropriate. We conducted this performance audit from January 2011 through April 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To identify aspects of the SIG program that pose challenges for successful SIG implementation, we analyzed responses to our survey of state educational agency officials with responsibility for SIG in the 50 states and the District of Columbia. The web-based survey was in the field from August to October 2011. In the survey, we asked states to provide information on challenges they faced in implementing the SIG program and on other aspects of the program, such as SIG grant renewal. We received responses from all 50 states and the District of Columbia, for a 100 percent response rate. We reviewed state responses and followed up by telephone and e-mail with select states for additional clarification and context.
Nonsampling error could affect data quality. Nonsampling error includes variations in how respondents interpret questions, respondents’ willingness to offer accurate responses, and data collection and processing errors. We took steps in developing the survey and in collecting and analyzing the survey data to minimize such nonsampling error. In developing the web survey, we pretested draft versions of the instrument with state officials in various states to check the clarity of the questions and the flow and layout of the survey. Education officials also reviewed the draft survey and provided comments. On the basis of the pretests and reviews, we made minor revisions to the survey. Using a web-based survey also helped reduce error in our data collection effort. By allowing state SIG directors to enter their responses directly into an electronic instrument, this method automatically created a record for each SIG director in a data file and eliminated the errors (and costs) associated with a manual data entry process. In addition, the program used to analyze the survey data was independently verified to ensure the accuracy of this work. Detailed survey results are available at GAO-12-370SP. We also conducted site visits to and teleconferences with eight states—California, Delaware, Nebraska, Nevada, Ohio, Rhode Island, Texas, and Virginia—that reflect a range of population size, number of SIG schools, and use of the four SIG intervention models. In each state, we interviewed state officials, as well as district or school officials from one to three districts that had Tier I or Tier II SIG schools. Districts were selected in consultation with state officials to cover heavily and sparsely populated areas, and a variety of SIG intervention models. We also reviewed documents, such as state and district applications for SIG funds, and the relevant federal laws, regulations, and guidance.
We interviewed Education officials and stakeholders, such as teachers’ union officials from the national and local levels. To gather information about state policies and procedures for selecting and overseeing contractors, we analyzed state survey results. Our survey questions included whether states had developed a list of approved contractors, the SIG turnaround models for which they required that districts work with contractors, and whether states reviewed contractor performance. We reviewed Education documents, including SIG guidance, the state application template, and monitoring protocols, and interviewed Education officials responsible for reviewing state applications and providing oversight of states. Further, we reviewed state and district SIG applications from the eight states to identify their selection and review processes for contractors, and proposed contract expenditures. We also spoke with state and local officials about their procedures for selecting and overseeing contractors, as well as with several contractors working with districts we visited. We compared Education and state requirements for selecting and overseeing SIG contractors to leading contracting practices that were identified through collaboration with our contracting experts and a review of GAO-09-374 and GAO-05-274. To address the extent of Education’s support and oversight of SIG implementation, we reviewed Education guidance, summaries of Education assistance, monitoring time frames, monitoring protocols, and monitoring reports from SY 2010-2011. In addition, we analyzed survey results. We asked states to provide information on the federal role in SIG, including their perspectives on technical assistance offered by Education and Education’s monitoring process. We also talked with officials from Comprehensive Centers and Regional Educational Laboratories serving several of the eight states we worked with.
The technical assistance providers were selected to include those working with large, medium, and small rural states. In addition, we interviewed Education officials in charge of the Comprehensive Centers Program and in charge of SIG monitoring efforts. In addition to the contact named above, the following staff members made important contributions to this report: Elizabeth Sirois, Assistant Director; Scott Spicer, Analyst-in-Charge; Jacques Arsenault; Melissa King; Salvatore Sorbello; and Barbara Steel-Lowney. In addition, Jean McSween, James Rebbe, Tom James, William Woods, and Kathleen Van Gelder provided guidance on the study.
The School Improvement Grant (SIG) program funds reforms in low-performing schools. Congress provided $3.5 billion for SIG in fiscal year 2009, and a total of about $1.6 billion was appropriated in fiscal years 2010-2012. SIG requirements changed significantly in 2010. Many schools receiving SIG funds must now use the funding for specific interventions, such as turning over certain school operations to an outside organization (contractor). GAO examined (1) what, if any, aspects of SIG pose challenges for successful implementation; (2) how Education and state guidance and procedures for screening potential contractors and reviewing contractor performance compare with leading practices; and (3) to what extent Education’s technical assistance and oversight activities are effectively supporting SIG implementation. GAO surveyed SIG directors in all 50 states and the District of Columbia; analyzed Education and state documents; and interviewed officials from 8 states and school districts in those states, SIG contractors, and education experts. Successful SIG implementation posed a number of challenges. Specifically, state and district officials were challenged to build staff capacity and commitment for reform, facing difficulties such as recruiting and retaining strong staff members. In addition, the SIG requirements to develop teacher evaluations and increase student learning time were difficult to implement quickly and effectively because they required extensive planning and coordination. Furthermore, states sometimes had limited evidence about the performance of SIG schools when making grant renewal decisions. For example, although Education’s guidance identifies meeting annual student achievement goals as a key criterion for making renewal decisions, some states did not receive student achievement data by the time decisions had to be made.
States also made decisions through qualitative assessments of schools’ implementation efforts, but such determinations were not always based on extensive interaction with schools or systematic monitoring. Education did not provide written guidance to states about making evidence-based grant renewal decisions after they encountered these challenges. Districts used a significant portion of their SIG funds to hire contractors for a range of services, such as managing school operations and conducting teacher professional development. Leading practices show that screening potential contractors and then reviewing their performance are important for ensuring accountability and quality of results. Education required screening of contractors before contract awards were made. However, Education did not require review of contractors during contract performance, and states varied in whether they ensured that contractors were reviewed during the course of contract performance. Education’s assistance and oversight activities are generally supporting SIG implementation. In our survey, nearly all states reported they were satisfied with Education’s technical assistance, particularly the agency’s SIG guidance and conferences. In addition, many states reported that Education’s guidance was timely. With respect to oversight, Education monitored 12 states in school year (SY) 2010-2011 and found deficiencies in 11 of the 12 states. Education is working with states to correct these deficiencies. For SY 2011-2012, the agency plans to use a risk-based approach to conduct on-site monitoring in 14 additional states. To maximize its oversight resources, Education also plans to conduct some limited monitoring in five additional states in SY 2011-2012. Education officials told us that they plan to monitor the remaining states in SY 2012-2013 and that these states represent a small percentage of SIG funds. 
GAO recommends that Education (1) provide additional support to states about making evidence-based grant renewal decisions and (2) ensure that contractor performance is reviewed. Education generally supported our first recommendation but disagreed with the second. We modified our recommendation to address some of Education’s concerns.
The Coast Guard is in the process of receiving 14 C-27Js as part of a congressionally mandated transfer, at no cost to the Coast Guard, from the Air Force, and these aircraft are planned to significantly contribute to the Coast Guard’s missions once they are operational. However, as we reported in March 2015, it will take time and money to fully transfer and modify the aircraft. As of May 2015, 2 of the 14 C-27J aircraft had been removed from storage at the Air Force’s 309th Aerospace Maintenance and Regeneration Group (AMARG) at Davis-Monthan Air Force Base, where 13 of the 14 C-27Js are stored. These 2 aircraft are currently at the Coast Guard’s aviation maintenance facility in Elizabeth City, North Carolina, where they are continuing to be inducted into the Coast Guard’s fleet. The Coast Guard expects to deliver 2 additional C-27Js from AMARG to its maintenance facility by the end of fiscal year 2015. The first part of induction entails removing the aircraft from the AMARG storage facility, which involves taking off a protective compound, conducting system checks and basic maintenance, and successfully completing a flight test—among other steps. The Coast Guard then needs to ensure that it can support these assets and modify the C-27Js to meet its missions. This is a lengthy and complex process and, as a result, the fleet of 14 fully operational C-27Js is not anticipated until 2022. In our March 2015 report, we identified a number of milestones and risks that will need to be addressed to achieve fully capable aircraft. In general, the Coast Guard must achieve three major milestones before the aircraft are fully operational: 1. induct the aircraft, 2. establish operational units (bases), and 3. add surveillance and advanced communication capabilities. In addition, complicating these efforts are areas of risk that need to be addressed before the Coast Guard can field fully operational C-27Js.
These three risk areas are: (1) purchasing spare parts, (2) accessing technical data, and (3) understanding the condition of the aircraft. These and other risks may inhibit the Coast Guard’s ability to operate the aircraft as planned. However, the Coast Guard is working to mitigate these risks. Figure 1 illustrates the milestones and risk areas the Coast Guard must address before it can field a fully capable C-27J aircraft. According to initial Coast Guard estimates, while the C-27J aircraft come at no acquisition cost to the Coast Guard, the costs to fully operationalize them will total about $600 million. The fiscal year 2016 Capital Investment Plan includes $482 million for this effort. The Capital Investment Plan also notes that the Coast Guard has yet to fully estimate the total cost of incorporating and operating the C-27J. The Coast Guard is planning to refine this initial estimate by January 2016, in accordance with a February 2015 DHS acquisition decision memo. In addition to the challenges in converting the C-27Js to fully operational aircraft, we found in March 2015 that the Coast Guard faces a shortfall in achieving its overall flight hour goal. To fully meet its mission needs, the Coast Guard’s 2005 mission needs statement set forth a goal of 52,400 flight hours per year. In fiscal year 2014, the Coast Guard’s fixed-wing aviation fleet flew 38 percent fewer hours than these stated needs—a total of 32,543 hours. The revised fleet as currently envisioned, with the addition of the C-27J, will narrow this gap, but the Coast Guard will still fall short of the 52,400 flight hour goal. As a result of planned changes to its fleet composition to accommodate the C-27J—specifically reducing its planned purchase of 36 HC-144s to 18—and for other reasons, the Coast Guard is now on a path to fall short of meeting this goal by 18 percent when all planned assets are operational.
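The flight-hour figures above can be checked with simple arithmetic. The sketch below reproduces the reported 38 percent fiscal year 2014 shortfall from the stated goal and actual hours, and derives the annual hours implied by the projected 18 percent shortfall (the implied figure is our own back-calculation, not a number stated in the report):

```python
# Flight-hour gap arithmetic from the figures reported above.
GOAL = 52_400           # annual flight-hour goal, 2005 mission needs statement
FY2014_ACTUAL = 32_543  # hours flown by the fixed-wing fleet in fiscal year 2014

# Fiscal year 2014 shortfall relative to the goal.
current_gap = (GOAL - FY2014_ACTUAL) / GOAL
print(f"FY2014 shortfall: {current_gap:.0%}")  # prints "FY2014 shortfall: 38%"

# The planned fleet (with the C-27Js) is projected to fall 18 percent short;
# the annual flight hours that projection implies (a back-calculation, not
# a figure stated in the report):
planned_shortfall = 0.18
implied_planned_hours = GOAL * (1 - planned_shortfall)
print(f"Implied planned-fleet hours: {implied_planned_hours:,.0f}")  # ~42,968
```

So even with the C-27Js, the planned fleet would fly roughly 43,000 of the 52,400 hours the 2005 mission needs statement calls for.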
Table 1 shows: (1) the aircraft that comprise the current 2014 fleet plan and the Coast Guard’s planned fleet once the C-27Js are operational, (2) the annual flight hours each fleet provides, and (3) the difference between the flight hours of the fleets and the 52,400 hour goal. According to the fiscal year 2016 Capital Investment Plan, the Coast Guard is currently conducting a revised fixed-wing fleet analysis, intended to be a fundamental reassessment of the capabilities and mix of fixed-wing assets needed to fulfill its missions. Coast Guard budget and programming officials recognize the aviation fleet may change based on the flight hour goals in the new mission needs statement and the overall fleet mix analysis. The fiscal year 2016 Capital Investment Plan, therefore, does not include any additional fixed-wing asset purchases. For example, DHS and the Coast Guard have formally paused the HC-144 acquisition program at 18 aircraft, which are the aircraft they have already purchased. The Coast Guard has begun to rewrite its mission needs statement and concept of operations and plans to complete this effort by 2016. The Coast Guard plans to complete its full fixed-wing fleet mix analysis, which includes the assets it estimates will best meet these needs, by 2019, but has not set forth specific timeframes for completing key milestones. We recommended in our March 2015 report that the Secretary of Homeland Security and the Commandant of the Coast Guard inform Congress of the time frames and key milestones for completing the fleet mix study, including the specific date when the Coast Guard will publish its revised annual flight hour needs and when it plans to inform Congress of the corresponding changes to the composition of its fixed-wing fleet to meet these needs. DHS concurred with our recommendation but did not provide specific timelines for meeting it.
The bill for the Coast Guard Authorization Act of 2015, introduced in April 2015, requires a revised Coast Guard fixed-wing aircraft fleet mix analysis to be submitted to congressional transportation committees by the end of fiscal year 2015. The Coast Guard continues to field National Security Cutters (NSCs) and Fast Response Cutters (FRCs), which are replacing the legacy 378-foot high endurance cutters and the 110-foot patrol boats, respectively. As we reported in April 2015, the Coast Guard is also in the process of working with three potential shipbuilders to design the Offshore Patrol Cutter, but this asset, needed to recapitalize the vast majority of the major cutter fleet, remains years away from being fielded. In the meantime, the Coast Guard’s legacy Medium Endurance Cutters, which the Offshore Patrol Cutter is planned to replace, have begun to reach the end of their service lives, creating a potential gap. The Coast Guard has all 8 NSCs on contract or delivered as of May 2015, and, as we reported in April 2015, completed operational test and evaluation in April 2014. All 8 NSCs are planned to be fully operational by 2020, and the Coast Guard is phasing out the legacy 378-foot high endurance cutters as the NSCs become operational. We are currently conducting a detailed review of the NSC’s recent test event at the request of this subcommittee. We reported in April 2015, however, that during this initial operational testing, the NSC was found to be operationally effective and suitable, but with several major deficiencies. For example, the NSC’s small boat—which is launched from the back of the cutter—is not suited to operate in rough waters (sea state 5) as intended. (Sea states refer to the height, period, and character of waves on the surface of a large body of water; sea state 5 represents 8.2- to 13.1-foot waves.) Coast Guard officials told us they planned to test a new small boat by March 2015. In addition, the Coast Guard deferred testing for several key capabilities on the cutter, such as cybersecurity, the use of unmanned aerial systems, and its ability to handle certain classified information.
Coast Guard officials said follow-on operational tests will be conducted between fiscal years 2015 and 2017. While future tests will be key to understanding the NSC’s capabilities, any necessary changes resulting from these tests will have to be retrofitted onto all 8 NSCs since they are all either built or under contract. In June 2014, we found that the NSC program had at least $140 million in retrofits and design changes to fund and implement on the NSC fleet. As we also reported in June 2014, further changes may be needed due to issues discovered through operating the NSC, which could result in the Coast Guard having to spend even more money in the future to ensure the NSC fleet meets requirements and is logistically supportable. For example, the cutter is experiencing problems operating in warm climates, including cooling system failures, excessive condensation forming puddles on the deck of the ship, and limited redundancy in its air conditioning system, affecting use of information technology systems. According to operational reports from a 2013 deployment, the Commanding Officer of an NSC had to impose speed restrictions on the vessel because of engine overheating when the seawater temperature was greater than 68 degrees Fahrenheit. In addition, cold climate issues on the cutter include a lack of heaters to keep oil and other fluids warm during operations in cold climates, such as the Arctic. Further, Coast Guard operators state that operating near ice must be done with extreme caution since the ice can move quickly and the NSC could sustain significant damage if it comes into contact with the ice.
In June 2014, we reported that while senior Coast Guard officials acknowledged that there were issues to address, they stated that the Coast Guard has not yet determined what, if any, fixes are necessary and that it depends on where the cutter ultimately operates. In April 2015, the Coast Guard accepted delivery of the 13th of 58 FRCs and now has 32 of the cutters on contract. As we reported in April 2015, the Coast Guard is introducing additional competition into this purchase by recompeting the construction contract for the remaining 26 vessels; this contract is planned to be awarded in fiscal year 2016. According to the Coast Guard, the FRC has already been used to rescue over 400 undocumented immigrants, seize nearly $20 million in contraband, and apprehend several suspected drug smugglers. The fiscal year 2016 Capital Investment Plan includes $1.47 billion over the next 5 years to continue purchasing these assets, by which time the Coast Guard plans to have fielded 42 FRCs. As we reported in June 2014, operational testers within the Department of the Navy determined in July 2013 that the FRC, without the cutter’s small boat, is operationally effective—meaning that testers determined that the asset enables mission success. However, these operational testers also determined that the FRC is not operationally suitable because a key engine part failed, which lowered the amount of time the ship was available for missions to an unacceptable level. Despite the mixed test results, Navy and DHS testers as well as Coast Guard program officials all agreed that the FRC is a capable vessel, and the Coast Guard plans to confirm that it has resolved these issues during follow-on testing planned to be completed by the end of fiscal year 2015. The Coast Guard is using a two-phased, competitive strategy to select a contractor to construct the Offshore Patrol Cutter (OPC), as we reported in April 2015.
First, the Coast Guard conducted a full and open competition to select three contractors to perform preliminary and contract design work, and in February 2014, the Coast Guard awarded firm-fixed price contracts to three shipbuilders. Second, by the end of fiscal year 2016, the Coast Guard plans to award a contract to one of these shipbuilders to complete the detailed design of the vessel and construct the first 9 to 11 ships, at which time the Coast Guard plans to recompete the contract for the remaining vessels. The Coast Guard currently plans to begin construction on the lead ship in fiscal year 2018—one year later than planned in its most recent program baseline—and deliver this ship in 2022. The Coast Guard attributes the schedule delay to procurement delays, including a bid protest. The fiscal year 2016 Capital Investment Plan has $1.5 billion in funding for the OPC, which funds the design work and construction of the first three vessels. After the first 3 of the planned fleet of 25 OPCs are built, the Coast Guard plans to increase its purchase to 2 OPCs per year until the final asset is delivered, currently scheduled for fiscal year 2035. As we reported in July 2012, the Coast Guard faces capability gaps in its surface fleet over the next several years as the projected service life of its Medium Endurance Cutter fleet expires before planned delivery of the OPCs, which will replace these aging cutters. The Coast Guard completed a refurbishment of the Medium Endurance Cutters in September 2014 to increase their reliability and reduce longer-term maintenance costs. Senior Coast Guard officials responsible for this project reported that these efforts may provide up to 15 years of additional service life to the fleet. However, they noted that this estimate is optimistic and that the refurbishment provided needed upgrades to the Medium Endurance Cutters, but was not designed to further extend the cutters’ service lives. 
As depicted in figure 2, even with the most optimistic projection for the current service life of the Medium Endurance Cutters, we estimated in our July 2012 report that there was a gap before the planned OPC deliveries. The figure shows the service lives for each of the 27 210-foot and 270-foot Medium Endurance Cutters if the service life extensions provide 5, 10, or 15 years of additional service, and the planned delivery of the 25 OPCs. Coast Guard budget officials recently told us that the Coast Guard is studying whether to perform additional service life extension work on the Medium Endurance Cutters to keep them operational until the OPCs are delivered. Coast Guard officials could not tell us when a decision will be made about this work, and the fiscal year 2016 Capital Investment Plan does not include funds for this effort. As we have found in recent years, the Coast Guard faces a significant challenge in the affordability of its overall fleet, driven primarily by the upcoming OPC procurement, which is planned to cost $12.1 billion. The OPC will absorb about two-thirds of the Coast Guard’s acquisition funding between 2018 and 2032 while it is being built. As a result, remaining Coast Guard acquisition programs will have to compete for a small percentage of funding during this time. We found in June 2014 that there are gaps between what the Coast Guard estimates it needs to carry out its program of record for its major acquisitions and what it has traditionally requested and received. For example, senior Coast Guard officials have stated a need for over $2 billion per year, but the Coast Guard has received $1.5 billion or less over the past 5 years. The President’s budget requests $1 billion for fiscal year 2016.
In an effort to address the funding constraints it has faced annually, the Coast Guard has been in a reactive mode, delaying and reducing its capability through the annual budget process but without a plan to realistically set forth affordable priorities. The Coast Guard, DHS, and Office of Management and Budget officials have acknowledged that the Coast Guard cannot afford to recapitalize and modernize its assets in accordance with the current plan at current funding levels. Efforts are underway to address this issue, but so far, these efforts have not led to the difficult trade-off decisions needed to improve the affordability of the Coast Guard’s portfolio. We recommended in 2014 that the Coast Guard develop a 20-year fleet modernization plan that identifies all acquisitions needed to maintain the current level of service—aviation and surface— and the fiscal resources needed to buy the identified assets. We recommended that the plan should consider trade-offs if the fiscal resources needed to execute the plan are not consistent with annual budgets. The Coast Guard concurred with our recommendation, but its response did not fully address our concerns or set forth an estimated date for completion. In June 2014, we also reported that the Coast Guard faces a potentially expensive recapitalization of other surface assets, such as the polar icebreakers and its fleet of river buoy tenders, as these assets continue to age beyond their expected service lives and, in some cases, have been removed from service without a replacement. These issues pose additional potential challenges to the affordability of the Coast Guard’s overall acquisition portfolio. Icebreakers—According to program officials, due to funding constraints, the Coast Guard chose not to invest in either of its heavy icebreakers as they approached the end of their service lives. 
Thus, both heavy icebreakers were out of service from 2010 to 2013 and the Coast Guard could not complete missions, such as resupplying a science laboratory in Antarctica. The Coast Guard has recently returned one of these heavy icebreakers to service, but still has one fewer heavy icebreaker than it has historically operated and several fewer than it needs, according to the Coast Guard’s June 2013 heavy icebreaker mission need statement. The fiscal year 2016 President’s Budget requests $4 million for continued preparatory studies to develop a cost estimate, among other things. The associated fiscal year 2016 Capital Investment Plan contains $166 million for polar icebreakers over the next 5 years but does not identify what this money is for, though it is far short of the estimated $831 million needed to build the vessel. The Coast Guard is currently working with several U.S. government agencies to develop requirements and establish a plan to build a heavy icebreaker that could be jointly funded by the U.S. government agencies that need the asset to accomplish their missions. River Buoy Tenders—The Coast Guard is facing a gap in its river buoy tender fleet and has yet to formalize an acquisition project to replace this fleet—a project estimated to cost over $1.5 billion. HH-60 and HH-65 Helicopter Fleets—The HH-60 and HH-65 helicopter fleets will approach the end of their lifespans between 2022 and 2026 and will need to either be replaced or have a service life extension performed to keep them operational. Regardless of the future path, significant acquisition dollars will be required to maintain annual flight hours for the next 20 years, according to Coast Guard program officials. Chairman Hunter, Ranking Member Garamendi, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions.
If you or your staff have any questions about this statement, please contact Michele Mackin at (202) 512-4841 or mackinm@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Katherine Trimble, Assistant Director; Laurier R. Fish; John Crawford; and Peter W. Anderson. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Coast Guard is managing a multi-billion dollar effort to modernize aging assets, including ships, aircraft, and information technology, to provide new capabilities to conduct missions ranging from marine safety to defense readiness. The Coast Guard has made progress in its acquisition management capabilities, such as more closely following acquisition best practices and taking steps to increase competition. However, GAO has consistently found that DHS and the Coast Guard recognize, but have yet to address, the fact that the Coast Guard's acquisition needs are not affordable. This statement is based on GAO's body of work issued during the past three years on Coast Guard major acquisitions and highlights GAO's recently completed review of the transfer to the Coast Guard of the C-27J aircraft as well as observations regarding the Coast Guard's fiscal year 2016 Capital Investment Plan. The statement addresses the status of the Coast Guard's (1) aviation assets, particularly the C-27J aircraft and (2) surface assets, as well as (3) the overall affordability of its major acquisition portfolio. GAO has made a number of recommendations to improve acquisition management and assess the affordability of the Coast Guard's portfolio. DHS and the Coast Guard agreed with GAO's recommendations and are working on implementing them by revisiting the Coast Guard's mission needs and fleet mix, as well as creating a 20-year acquisition plan that balances needs and resources, though the agencies have not specified when they will finish these efforts. GAO reported in March 2015 that the Coast Guard is in the process of receiving 14 C-27J fixed-wing aircraft transferred from the Air Force at no cost to the Coast Guard. However, it will take 7 years and about $600 million to fully transfer and modify the aircraft by adding information technology and surveillance systems. 
Transfer of the C-27J faces a number of risks but the aircraft is expected to contribute significant flight hours toward the Coast Guard's goal once complete. In light of this transfer, the Coast Guard is in the process of determining the best mix of fixed-wing aircraft to provide the capabilities it needs to carry out its missions. As shown in the table, GAO reported that the Coast Guard has fallen short of its flight hour goal; this trend is expected to continue until the Coast Guard revises its mission needs, an effort it expects to complete in 2016. The Coast Guard also plans to complete a fixed-wing fleet mix analysis by 2019, which will revisit the current flight hour goal and the assets that will best meet its needs. The table reflects the existing fleet and flight hours as compared to GAO's analysis of the Coast Guard's planned fleet including the C-27J aircraft. Note: The HC-144 and C-27J are medium range assets while the HC-130H and HC-130J are long range assets. The fiscal year 2014 “medium range” column includes 4 legacy medium range aircraft. According to GAO's April 2015 review, the Coast Guard continues to field National Security Cutters and Fast Response Cutters. The Coast Guard is also working with three potential shipbuilders to design the Offshore Patrol Cutter, needed to recapitalize the majority of the major cutter fleet, with plans for the first ship to be fielded in 2022. In the meantime, the Coast Guard's legacy Medium Endurance Cutters, which the Offshore Patrol Cutter is planned to replace, have begun to reach the end of their service lives. The Coast Guard currently has no definitive plan to extend the service life of these legacy assets and as a result faces a potentially significant capability gap. GAO found in June 2014 that budget officials have acknowledged that the Coast Guard's current plan for developing new, more capable assets is not affordable given current and expected funding levels.
For the past 5 years, GAO has found that the Coast Guard's acquisition funding has fallen short of what it estimates it needs to fully recapitalize its assets. The Coast Guard has responded by annually delaying or reducing its capability. The Coast Guard and the Department of Homeland Security (DHS) have taken some steps to address these affordability issues, but as yet these efforts have not led to the types of significant trade-off decisions among resources and needs that would improve the long-term outlook of the Coast Guard's acquisition portfolio.
Under the CAA, EPA establishes health-based air quality standards that the states must meet and regulates air pollutant emissions from various sources, including industrial facilities and mobile sources such as automobiles. EPA has issued standards for six primary pollutants—carbon monoxide, lead, nitrogen oxides, ozone, particulate matter, and sulfur dioxide—that have been linked to a variety of health problems. For example, ozone can inflame lung tissue and increase susceptibility to bronchitis and pneumonia. In addition, nitrogen oxides and sulfur dioxide contribute to the formation of fine particles that have been linked to aggravated asthma, chronic bronchitis, and premature death. About 133 million Americans already live in areas with air pollution levels above health-based air quality standards, according to EPA. The NSR program, established in 1977, is intended to ensure as new industrial facilities are built and existing ones expand that public health is protected, that the air quality in national parks and wilderness areas is maintained, and that economic growth will occur in a manner consistent with the preservation of existing clean air resources. The NSR program comprises (1) the Prevention of Significant Deterioration (PSD) program, which generally applies to pollutants in areas that meet federal air quality standards for those pollutants or for which the attainment status is unclassified, and (2) the Nonattainment NSR program, which generally applies to pollutants in areas that are not meeting the standards for those pollutants, although the term NSR usually refers to both. The federal NSR program is primarily administered by state and local air quality agencies, with oversight by EPA. If a company plans a change to its facility and determines that it will trigger federal NSR regulations, the company must then prepare and file a permit application with the relevant state or local agency. Figure 1 illustrates this permitting process. 
The state or local permitting agency determines if the application is complete; develops a draft permit, if justified; notifies EPA and the public of the application; and solicits comments on the draft permit. The permitting agency then responds to comments and issues a final permit, if merited, which can be administratively or judicially appealed. The permitting agency must provide EPA with a copy of every permit application and draft permit; address EPA’s comments, if any; and notify EPA of the final action taken. In addition, the records and reports the state or local agency collects as it monitors compliance with the permit and NSR program generally must be available for public review. Even when federal NSR requirements do not apply to a facility change, the project may still be subject to other federal, state, and local air pollution control requirements. For example, under Title V of the CAA, a company must obtain a facility operating permit that consolidates all of the company’s federal obligations for controlling air pollution and complying with the act. These obligations can include meeting the requirements and standards of states’ and localities’ federally approved plans for improving air quality; other federal requirements to control pollution, such as those controlling hazardous air pollutants not also covered under NSR; and requirements included in any federal, state, or local NSR permits issued to the facility. EPA has now given most state and local agencies approval to implement the Title V operating permit programs that, among other things, provide for public participation in the Title V permitting process. These operating permits are issued and then renewed every 5 years and can be updated at any time. During the mid-1990s, EPA began evaluating NSR compliance for entire industry sectors that produced significant amounts of air pollution. The agency focused its inspections on industry sectors it suspected of potential NSR violations. 
In particular, EPA looked at industries with a decreasing number of facilities but static or increased production, industries with many years of operation and high emissions but with no record of NSR permits, and industries with new plants being constructed with no NSR permits. EPA’s data suggested that facilities in some sectors might have been making major modifications to increase production or extend the life of the facilities’ equipment—and therefore increasing emissions—without obtaining NSR permits or installing pollution controls. As a result, EPA targeted its NSR investigations on coal-fired power plants, petroleum refineries, steel minimills, chemical manufacturers, wood products companies, and the pulp and paper industry. In 1996, EPA began its investigation of the coal-fired utility industry. Subsequently, EPA referred to DOJ a number of alleged violations of the NSR provisions. Generally, the referrals indicated EPA’s conclusion that the owners and operators of some of the largest coal-fired power plants in the country had violated the NSR provisions by making physical changes to their facilities, without obtaining a permit, that increased emissions and that the agency did not consider to be routine in nature. The companies, however, believed the changes did not violate the NSR program for a number of reasons, including that the projects were exempt under the routine maintenance exclusion. After reviewing these referrals, DOJ in November 1999 filed seven enforcement actions in U.S. district courts. That same month, EPA issued an administrative compliance order to the Tennessee Valley Authority alleging multiple NSR violations at its coal-fired power plants. Since these actions were taken, DOJ has filed an additional six enforcement actions against coal-fired utilities. As of October 2003, 7 of the 14 cases have been settled or decided. Table 1 provides a summary of the seven ongoing enforcement cases and the status of each. 
Over the years since its inception, various aspects of the NSR program have been subject to litigation that resulted in court decisions affecting the program. For example, in 1990, the Seventh U.S. Circuit Court of Appeals issued a decision in Wisconsin Electric Power Co. v. Reilly. EPA argued in the case that when Wisconsin Electric Power Company (WEPCO) was estimating whether a physical change would increase emissions enough to trigger NSR, the company should have assumed it would operate the modified equipment at the maximum level possible, even though WEPCO had never operated at that level. The court ruled that this requirement was inappropriate. EPA then issued a rule for electric steam-generating utilities only that allowed them to estimate their projected annual emissions after the change based on their actual emissions history for purposes of preconstruction permitting, but they would have to report their actual emissions for 5 years after making the change. More recently, in January 2001, the President established a task force—the National Energy Policy Development Group (NEPDG)—chaired by the Vice President to develop a national energy policy. In its May 2001 National Energy Policy Report, the group recommended to the President that EPA and the Department of Energy investigate the impact of the NSR program on investments in new utility and refinery generation capacity, on energy efficiency, and on environmental protection. The group also recommended that the Attorney General review the existing NSR enforcement actions to ensure they were consistent with the CAA and its implementing regulations. In response to the group’s recommendations, DOJ issued a report in January 2002 that concluded EPA had a reasonable basis for bringing those actions against coal-fired utilities. In June 2002, also in response to the group’s recommendations, EPA issued a report to the President and concurrently issued a set of recommendations for revising the NSR program. 
EPA issued a final rule in December 2002 that contained five provisions based on its June 2002 recommendations, outlined in table 2 below. Subsequently, in response to a number of requests, EPA agreed to reconsider certain aspects of the final rule, took public comment on those features during July and August 2003, and is assessing the comments to determine if the agency needs to make any changes. Also in December 2002, EPA issued for public comment a proposed rule that would change the method for determining whether a facility change can be exempt from federal NSR requirements because it is routine maintenance, repair, or replacement. EPA intended for the final version of the proposed rule to supplement its case-by-case determination of what facility changes qualify for the routine maintenance exclusion, using factors such as the nature, extent, cost, frequency, and purpose of the change. EPA proposed to determine a facility’s total replacement costs and calculate a certain percentage of those costs that the agency would allow the company to spend on routine maintenance and repair without triggering NSR. EPA proposed several alternative cost thresholds for routine maintenance and repair below which modifications could be considered exempt and solicited comments on the thresholds. EPA also included for comment a provision that would generally allow a facility to consider the replacement of existing equipment with identical or functionally equivalent new equipment as routine replacement, depending on the amount of costs involved. The agency announced a final rule in August 2003, specifying the cost threshold industry could use to replace equipment and exempt it from NSR. This rule will finalize one aspect of the December 2002 proposed rule and, at this time, the agency is not taking action to finalize any other aspects of this proposed rule. The NSR revisions have been the subject of recent congressional debates.
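The percentage-of-replacement-cost approach described above amounts to a simple comparison, sketched below in Python. The threshold value and facility figures are purely hypothetical: EPA solicited comment on several alternative thresholds rather than fixing a single number in the proposed rule, so nothing here represents the actual regulatory values.

```python
def qualifies_for_exclusion(project_cost: float,
                            total_replacement_cost: float,
                            threshold_pct: float) -> bool:
    """Hypothetical sketch of the proposed cost-threshold test: a facility
    change counts as routine maintenance, repair, or replacement (and so is
    exempt from NSR permitting) only if its cost stays below the allowed
    percentage of the facility's total replacement cost."""
    allowance = total_replacement_cost * threshold_pct / 100
    return project_cost < allowance

# Illustrative numbers only (not from the rule): projects at a plant with a
# $200 million replacement cost, under an assumed 10 percent threshold.
print(qualifies_for_exclusion(15e6, 200e6, 10))  # True  (cost below $20M allowance)
print(qualifies_for_exclusion(25e6, 200e6, 10))  # False (cost exceeds $20M allowance)
```

The sketch also illustrates the enforcement staff's concern discussed later in this report: whether a past project "would now be legal" depends entirely on where the threshold is set relative to that project's cost.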
In 2002, Congress held hearings during which members of Congress, EPA and DOJ officials, and a number of stakeholders—including representatives of industry, states, and environmental groups—presented their positions on the NSR program revisions. For example, during a July 16, 2002, hearing before the Senate Committee on Environment and Public Works, some state attorneys general and environmental group officials testified that the revisions could seriously undercut the ongoing enforcement cases, jeopardizing the millions of tons in pollution reductions that those cases could yield. At the same hearing, EPA and industry officials generally testified that the revisions would allow companies to modify their facilities so that they are more energy efficient and, as a result, would emit less pollution. In addition, during a September 3, 2002, hearing before the Subcommittee on Public Health, Senate Committee on Health, Education, Labor, and Pensions, former EPA Administrator Carol Browner testified that, among other things, she was concerned that the revisions would “eliminate the very features of the current law that provide transparency to the public—monitoring, record keeping, and reporting.” EPA enforcement officials assessed the potential impact of the NSR revisions (before issuing them as final and proposed rules in December 2002) on the enforcement cases against coal-fired utilities and determined that some of the revisions could have an impact. These EPA officials discussed their views on the potential impact with DOJ. In part as a result of the assessments, for the revisions that were included in the final rule, EPA adjusted the content and wording of the language before issuing the rule so that they were not expected to affect the cases.
For the proposed rule, the EPA enforcement staff had concerns that if EPA specifically defined what facility changes would qualify for the routine maintenance exclusion, the cases could be affected since they involved disagreements about how EPA had been applying the routine maintenance exclusion in the past. Consequently, EPA decided not to specifically define what activities qualify as routine maintenance but to propose several options for calculating cost thresholds below which modifications could be considered exempt and solicited public comment on the options. Nevertheless, during the 1½ years that the final language of the revisions was being debated, some EPA enforcement officials and key stakeholders believe that some companies were discouraged from settling their cases because of the possibility that EPA could revise the definition of the exclusion in a way that would be favorable to industry—although some companies did settle after the proposed rule was issued. Furthermore, some EPA enforcement officials and key stakeholders believe that the announcement of the August 2003 final rule, in which EPA set a specific cost threshold for routine replacement activities, could also delay settlement of some of the cases and could affect judges’ decisions in the cases about what remedies to apply to companies that are found to be in violation of the old NSR rule. EPA enforcement officials assessed the potential impact of the draft NSR revisions that were issued as a final rule in December 2002 on the enforcement cases and discussed their views about the impact with DOJ. According to current and former EPA enforcement officials, after EPA internally debated and agreed upon the language of the revisions, they were not expected to adversely affect the ongoing enforcement cases against coal-fired utilities. 
According to these EPA officials, in 2001 and 2002, several briefings and less formal discussions occurred during which the enforcement staff raised concerns about the revisions’ potential adverse impact on the cases. Officials involved in at least one, and in some cases several, of these meetings included the EPA Administrator, the Deputy Administrator, the Assistant Administrator for Air and Radiation, the former Principal Deputy Assistant Administrator for Enforcement and Compliance Assurance, and the Director of the Air Enforcement Division. DOJ’s Deputy Assistant Attorney General for Environment and Natural Resources and other DOJ enforcement staff also discussed the potential impact of the proposed revisions on the cases with EPA’s Assistant Administrator for Air and Radiation and staff in EPA’s offices of the General Counsel and Enforcement and Compliance Assurance. According to EPA enforcement officials, they prepared analyses—some of which were documented in briefing papers, charts, and graphs—that were discussed internally. EPA enforcement officials said that because their main objective in raising concerns about the revisions was to maintain the cases, they urged senior agency officials to tailor the language of the revisions to address their concerns before issuing the final rule. The enforcement staff felt this would help ensure that the language finally adopted would minimize any impact on the cases. More specifically, according to the Director of EPA’s Air Enforcement Division, the staff prepared analyses indicating that three of the revisions in the rule would have no impact on the enforcement cases. These three revisions involve the exemptions for clean units, pollution control projects, and the option of setting a plantwide limit on emissions. 
In addition, because of the 1990 WEPCO decision, utilities already had the authority, before EPA issued the final rule, to use the revised method for estimating emission changes resulting from a facility change. Therefore, since this provision in the rule was not a significant change for the utility industry, the EPA staff did not expect this provision to affect the cases. However, the EPA enforcement officials were concerned about the provision establishing a revised method for calculating past, or baseline, emissions. Specifically, EPA considered changing the time period used to calculate baseline emissions for utilities. According to the Director of EPA’s Air Enforcement Division, the enforcement staff prepared an analysis comparing the effects of using different time periods on the viability of each case. In part as a result of this analysis, the baseline calculation for utilities was not changed in the final rule. During the same briefings held in 2001 and 2002, the EPA enforcement staff expressed concern that more explicitly defining what facility changes qualify for the routine maintenance exclusion, as anticipated in the December 2002 proposed rule, had the most potential to negatively affect the cases. They were concerned because the enforcement cases generally involve disagreements between EPA and the utilities on whether past facility changes made without an NSR permit qualified for the routine maintenance exclusion. In general, EPA enforcement officials were concerned that if the agency specifically proposed a definition of routine maintenance that was different from the way the agency had applied the exclusion in the past, defendants could delay the cases by arguing that some of the facility changes under dispute in the lawsuits might be able to qualify for an exemption from NSR. 
For example, the EPA officials were considering setting a cost threshold for an allowance for annual maintenance, repair, and replacement below which a company would not have to obtain an NSR permit. EPA enforcement officials believed that if a threshold were proposed that was higher than the costs incurred for the facility changes at issue in the cases, the cases could be adversely affected. Specifically, the officials were concerned that judges might not order companies to install pollution controls even if they were found to be in violation of the prior NSR rule, since the facility changes in question would now be legal under the proposed rule (if adopted as proposed). The EPA enforcement staff compared the potential impact of various cost thresholds on the viability of each case. Based in part on these comparisons, EPA decided not to specifically set cost thresholds for individual industries in its December 2002 proposed rule, but rather to solicit comments on what thresholds to use. The EPA enforcement staff had similar concerns about the other revision under consideration for the December 2002 proposed rule. It would allow companies to consider the replacement of existing equipment with identical or functionally equivalent new equipment as "routine maintenance, repair, and replacement," and thus exempt from federal NSR regulations. The cost of the equipment had to be below a certain percentage of the cost to replace a process unit. A process unit for power plants is defined as an electric utility steam-generating unit (power plants can have more than one of these). The replacement equipment also had to meet certain criteria, such as maintaining the basic design parameters of the original unit. EPA enforcement officials were concerned that, depending on where the threshold was set, this revision could also affect the cases. 
As with the first provision, the EPA enforcement staff compared the potential impact of various replacement cost thresholds (up to 50 percent) on the viability of each case in dispute at the time and concluded that 95 percent to 98 percent of the facility changes at issue in the utility enforcement cases would be considered routine maintenance—and thus exempt from NSR—if the new rule were applied and the threshold were set at more than about 1 percent or 2 percent of the process unit’s costs. Again, EPA decided not to specify a threshold in the December 2002 proposal but instead to solicit comments on the overall approach. EPA reviewed the comments submitted on both proposed revisions and, even though seven of the enforcement cases had not yet been settled or decided by the courts, announced a final rule in August 2003 specifying a 20 percent threshold for the replacement of existing equipment, provided the replacement does not change the basic design parameters of the process unit and the process continues to meet enforceable emission and operational limitations. To illustrate the impact of this cost threshold, it costs approximately $800 million on average to replace a 1,000-megawatt electric utility steam-generating unit, excluding the costs of pollution controls, according to EPA enforcement officials. Under the new rule, an unlimited number of projects costing on average between $8 million and $160 million each (assuming cost thresholds of between 1 percent and 20 percent) could be excluded from NSR requirements. According to the Director of EPA’s Air Enforcement Division, this could allow companies to make facility changes without an NSR permit that are much more substantial than any of those in dispute in the cases. 
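The threshold arithmetic cited above can be checked with a back-of-the-envelope sketch in Python. The approximately $800 million unit replacement cost is the figure EPA enforcement officials cited; the function and its name are purely illustrative, not part of the rule:

```python
# Illustrative sketch of the equipment-replacement cost-threshold arithmetic
# described above. The ~$800 million figure is the approximate cost, cited by
# EPA enforcement officials, to replace a 1,000-megawatt electric utility
# steam-generating unit, excluding pollution controls.

UNIT_REPLACEMENT_COST = 800_000_000  # dollars

def max_exempt_project_cost(threshold_pct: float) -> float:
    """Largest single project cost that would fall under the routine
    maintenance, repair, and replacement exclusion at a given
    percentage threshold of the process unit's replacement cost."""
    return UNIT_REPLACEMENT_COST * threshold_pct / 100

for pct in (1, 20):
    cost = max_exempt_project_cost(pct)
    print(f"{pct}% threshold -> projects up to ${cost:,.0f} each exempt from NSR")
# 1% of $800M is $8 million; 20% is $160 million, matching the range above.
```

Because the exclusion applies per project, an unlimited number of such projects could each stay under the threshold, which is the basis for the Air Enforcement Division Director's concern quoted above.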
According to former and current EPA senior enforcement officials, despite the agency’s efforts to minimize the impact of the final and proposed rules on the enforcement cases, they believe the possibility that EPA could revise the routine maintenance exclusion in ways that could improve the companies’ legal positions in the cases had a detrimental effect on the willingness of some companies to settle. The officials stated that EPA normally settles 90 percent to 95 percent of its enforcement cases before they go to trial, but that companies were slower to settle after EPA publicly acknowledged it was considering the revisions. For example, according to a former EPA enforcement official who had been involved in the cases, the attorneys representing some of the companies in the cases asked EPA why they should comply with an interpretation of the law that the administration was trying to change. These concerns were reinforced further when an industry attorney in a state NSR enforcement case suggested that the court delay the case because EPA was still reconsidering its interpretation of the CAA through the NSR revisions. Similarly, the current Director of EPA’s Air Enforcement Division believes the most significant impact on the enforcement cases was that companies delayed settling during the year and a half the agency spent discussing NSR program reforms before issuing the final and proposed rules. According to current and former enforcement officials, companies spent this time lobbying EPA to include language in the revisions that would help them win their cases. 
Similarly, the National Academy of Public Administration (NAPA) concluded in an April 2003 report on the NSR program, “The possibility that EPA would soon reform the NSR modification provisions favorably to industry may have led to companies’ reluctance to settle their cases.” According to the Director of EPA’s Office of Air Enforcement, in the months immediately following the issuance of the December 2002 final and proposed rules, settlement activity did increase. During this time, EPA and DOJ entered into settlement agreements with four companies that resulted in the annual reduction of approximately 421,000 tons of sulfur dioxide and nitrogen oxide combined. See table 3 for a list of these companies. EPA’s Director of Air Enforcement believes these settlements suggest that the December 2002 final and proposed rules, as issued, did not significantly affect companies’ willingness to settle the cases. In this official’s opinion, the cases were not substantially affected prior to the announcement of the August 2003 final rule because the enforcement staff was successful in negotiating and revising the language and content of the rules. However, this official stressed that to the extent EPA decided to go forward with more explicit exclusions for routine maintenance, repair, and replacement, as it has now done, companies could be less willing to settle their cases. According to the former Director of EPA’s Office of Regulatory Enforcement, if EPA got agreements with companies in the remaining seven pending enforcement cases against coal-fired utilities that are equivalent to the settlements it has achieved in the past, sulfur dioxide emissions could be cut by as much as 2.9 million tons annually and substantial reductions in nitrogen oxide emissions could also be achieved. 
Some EPA enforcement officials and officials from environmental groups and states have raised concerns that the announced August 2003 rule, and any subsequent rules more explicitly defining what facility changes qualify for the routine maintenance exclusion, could negatively impact the enforcement cases even further. In a September 2003 legal filing in one of the enforcement cases, DOJ stated EPA’s position that the announced August 2003 rule is prospective in nature and does not affect the ongoing enforcement cases, which are based on past conduct. Officials from the New York and New Jersey Attorney General offices have said that the charges against the companies in these cases were brought under the previous NSR program, before any of the recent revisions, and the officials are confident that the judges will make decisions based on whether the companies violated the rules that were in effect at that time. While these officials did not expect the cases to be delayed on the basis of any motions that industry may file in light of the August 2003 rule, they noted that if such motions were filed, the officials would have to spend additional time and resources to defeat them. In addition to these effects, some stakeholders are also concerned that the rule could affect the remedies imposed on companies (including fines companies must pay or actions they must take) if the courts find the companies to be in violation of the old NSR rule. Officials from environmental groups and state attorney general offices expressed concerns that industry attorneys would attempt to argue that since the modifications for which they were found liable under the old rule were now permissible under the new rule, they should not be penalized. If judges were to agree, this could mean that fines may be reduced or companies may not be required to install pollution controls and reduce emissions to the extent that they might have been before the new rule. 
Indeed, on September 29, 2003, industry attorneys in the Illinois Power case asserted in their closing arguments that the new exclusion for routine maintenance in the August 2003 rule decisively undercut the critical premise of the government’s case because in the new rule, EPA changed the interpretation of the Clean Air Act upon which it had based the enforcement cases. The judge had not issued a ruling in the Illinois Power case at the time GAO completed this report. Several provisions in the December 2002 NSR final rule could limit assurance that the public has input on changes companies make to their facilities, especially those that increase emissions, hampering the public’s ability to monitor health risks and company compliance with NSR. The provisions could also limit assurance that the public has access to documents showing how companies estimated whether the changes would increase emissions enough to trigger NSR. For example, a company can now determine on its own if there is a “reasonable possibility” that a change could trigger NSR, but the rule is unclear about how companies will make this determination and how the public can access information about it. The extent of the rule’s impact depends on the extent to which other federal, state, and local regulations still require that companies obtain a permit and notify the public of modifications, but the scope of these other requirements varies widely. The Plantwide Applicability Limit (PAL) provisions in the December 2002 final rule could impact the amount of data available on, and public input into, facility changes and emissions. On the one hand, a PAL provides new opportunities for the public to have access to facility emissions information because a company must undergo a public notice and comment process before setting a PAL. The company must also monitor and report more detailed and frequent emissions information during the life of the PAL. 
For example, if a company decides to pursue a PAL, it must apply to the state or local air quality agency, which in turn must notify the public of the draft PAL and give the public at least 30 days to provide comments. The application must list each piece of equipment in the plant that emits the pollutant to be regulated under the PAL, such as a boiler or paint sprayer, and the “baseline” emissions it generates. Also, during the life of the PAL, a company must report semiannually to the state or local agency the monthly emissions of some or all of the NSR “criteria pollutants” from each piece of equipment. In contrast, for a facility without a PAL, in many instances the company would have limited emissions data for the facility. Thus, both the public notice and comment process for obtaining a PAL and the semiannual reporting requirements while subject to the PAL provide the public more specific and more frequent emissions information than would be provided for a facility that does not have a PAL. On the other hand, according to some state and local air quality agencies and environmental groups, because a company can pursue a facility change without an NSR permit under a PAL, as long as total facility emissions do not increase, the public may have fewer opportunities to provide input on a company’s decision to modify a facility, assess the emissions created (including hazardous air pollutants that may not be identified for monitoring under the PAL), and consider ways to control them. For example, if a company without a PAL decided to install a piece of equipment, such as a boiler, that would increase the facility’s emissions to a level that would trigger federal NSR, the company would have to submit an application to the state or local agency describing the change and the anticipated emissions. 
The agency would have to notify the public and give it 30 days to comment on the draft federal NSR permit, and the company would have to install the best available pollution controls on the equipment when making the facility change. However, under a PAL, the company could make the change without obtaining a federal NSR permit, soliciting public participation, or installing pollution controls, even though the change significantly increases emissions, as long as the company offsets the increase somewhere else within the facility and does not exceed the PAL. Some industry groups have responded that other federal, state, or local regulations will still require reporting and record keeping on facility changes and installation of emission control technology, so public access and input will not change. For example, if state and local air quality agencies require that companies obtain permits for facility changes not subject to federal NSR requirements, the public may still be notified about company plans to make a change and could comment on them. However, several states, as well as the State and Territorial Air Pollution Program Administrators and the Association of Local Air Pollution Control Officials, note that state and local emission control regulations governing such facility changes vary widely. For example, some local air quality agencies in California require a public comment process for many facility changes not subject to the federal NSR program, while Ohio requires that the public be notified of only large or potentially controversial changes. EPA program managers maintain that many past changes were not subject to federal NSR permits for a number of reasons, so public access will not change. For example, the managers stated that, prior to the final rule, a company could make an unlimited number of changes to a facility, as long as no single change triggered NSR. 
In addition, if the emissions effects of some changes were too small to trigger NSR, a company could offset emissions increases with other emissions reductions, "netting out" of federal NSR requirements. The program managers also believe that most states and localities would still require public notice and comment on these changes. The two provisions of the December 2002 final rule revising the method for calculating past emissions and estimating emissions resulting from a facility change could affect the amount of information available to the public and the public's access to it. Companies use these provisions to determine if their changes will trigger federal NSR requirements. To make this determination, a company must estimate the emissions expected after the change and compare this with the actual historic emissions prior to the change, known as the baseline emissions level. Before the rule, a company determined the baseline for a piece of equipment or operating procedure using the average annual emissions generated during the 24-month period prior to the change, or the company could seek to use a different period, more representative of normal operations. Under the new rule, a company will be able to choose any 24-month period in the past 10 years as the baseline. However, the company must adjust the baseline to account for any other pollution control requirements implemented during this time, such as limits on acid rain pollutants, and eliminate from consideration any time periods during which facilities exceeded required emissions limits. Also under the new rule, once a company calculates its baseline, it compares the baseline to the expected emissions after the equipment or operations are modified to determine if emissions will increase enough to trigger NSR. 
Prior to the final rule, when estimating expected emissions, companies other than utilities had to assume that they would operate a piece of equipment at the maximum level possible, representing the maximum possible emissions, even if they had not operated at that level in the past and did not plan to do so in the future. Companies have said that this approach was unfair because, among other things, it ignored market fluctuations. EPA revised the method of calculating the expected emissions in the final rule. Now, a company can project the expected activity level after the facility change and estimate the resulting emissions accordingly. Thus, under the rule, some estimates of expected emissions most likely will be smaller than in the past. Various stakeholders involved in the NSR revisions disagree on the impact of these two changes. For example, some expect that companies will choose the time period that gives them the highest baseline, or allowable emissions, thereby giving the companies the greatest flexibility to make changes in response to economic variations without triggering NSR. On the other hand, EPA program managers and a representative of a major industry explain that this is not necessarily true because companies now have to adjust their baselines downward to account for other pollution control requirements. In those cases where companies set higher emissions baselines and estimate smaller emissions increases, the difference between these two numbers will be smaller than in the past and will not trigger the federal NSR program and its requisite permitting, public notice, and public comment requirements. These changes may still trigger state or local requirements to obtain a permit and its associated public participation rules, depending on the state or locality, but, as we have stated, the scope of these requirements varies widely. 
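The applicability comparison described in this and the preceding paragraph, comparing a chosen baseline period's actual emissions with projected post-change emissions against a significance threshold, can be sketched as follows. The function names, the emissions figures, and the 100-ton threshold are hypothetical illustrations; the actual rule sets pollutant-specific significance thresholds and detailed baseline-adjustment requirements:

```python
# Illustrative sketch of the NSR applicability test described above.
# All names and numbers are hypothetical, for illustration only.

def baseline_emissions(annual_tons: list[float]) -> float:
    """Average annual emissions over a chosen consecutive 24-month
    period, passed here as two yearly totals (tons/year)."""
    return sum(annual_tons) / len(annual_tons)

def triggers_nsr(baseline: float, projected: float, significance: float) -> bool:
    """True if projected post-change emissions exceed the baseline
    by at least the significance threshold (tons/year)."""
    return (projected - baseline) >= significance

# Under the new rule, a company may pick the 24-month period within the
# past 10 years that yields the highest baseline, shrinking the apparent
# emissions increase from a planned change:
low_base = baseline_emissions([900.0, 950.0])     # lean-operation years -> 925.0
high_base = baseline_emissions([1400.0, 1450.0])  # high-utilization years -> 1425.0
projected = 1500.0

print(triggers_nsr(low_base, projected, 100.0))   # increase of 575 tons -> True
print(triggers_nsr(high_base, projected, 100.0))  # increase of 75 tons -> False
```

The example shows why baseline choice matters: the same projected emissions either trigger or escape federal NSR review, and its permitting and public notice requirements, depending solely on which past period is selected.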
In addition, several industry representatives claim that the Title V provisions governing record keeping and reporting requirements will ensure the public continues to have emissions data to monitor compliance. But other stakeholders point out that the data are scattered across various programs, making it difficult for the public to determine if facilities made any changes and what impact, if any, this had on emissions. The public eventually may learn of a facility change because, under the rule, a company must annually report if the actual emissions generated after certain changes exceeded the company's estimate. In any event, this reporting is done after the change is in place, too late for the public to have any input. Also under the NSR program, when a company calculates the expected emissions after a change, if the company determines emissions will clearly exceed the federal NSR threshold, the company must obtain a permit to proceed. If the calculation does not clearly indicate that a proposed facility change triggers NSR, the company does not have to keep any records of this determination. Under the rule, a company can now determine if there is a "reasonable possibility" the change will trigger NSR requirements. If it does, the company must maintain on-site documentation of this decision, as well as emissions records for the modified equipment or process. EPA program managers maintain that as a result, more data may be available now than in the past. However, EPA did not define in the final rule what constitutes a "reasonable possibility" that emissions will trigger federal NSR requirements, so companies might not apply this provision consistently and are, in effect, policing themselves. As several state and local representatives pointed out, this makes it difficult for EPA, state and local air quality agencies, and the public to monitor compliance with NSR, potentially leading to increased emissions and enforcement actions. 
Similarly, NAPA reported that such self-policing could lead to implementation problems and inadequate reporting of information and recommended that EPA carefully oversee the calculation of emissions increases resulting from facility changes and that sources not be allowed to “self-police.” EPA program managers take issue with the conclusion that self-policing is inherently wrong and point out that many environmental programs provide such self-policing mechanisms. Furthermore, the rule states that if a company determines there is a reasonable possibility a facility change could trigger NSR, it must make the record of the determination as well as the emissions records related to the change available to state or local agency officials or the public upon request. But the rule is unclear how the public will know about the changes or access the company’s on-site records. According to industry representatives, some companies will keep records of all reasonable possibility determinations to limit their legal risks, and some will proactively reach out to local communities before undertaking facility changes because they want to maintain good relations in these communities. Nevertheless, this lack of clarity could potentially hinder enforcement and monitoring activities. It could also pose administrative problems for companies, should the public begin requesting information directly from them—especially if the information contains sensitive business data that the company is entitled to protect. EPA is currently considering comments it received on the reasonable possibility provision as part of its decision to reconsider portions of the final rule. The agency plans to determine whether it will make any changes by the end of October. 
While EPA enforcement officials assessed the potential impact of the December 2002 final and proposed rules on the enforcement cases against coal-fired utilities and made changes before announcing the rules, these officials and key stakeholders believe that settlement of some cases was delayed because of the prospect that the definition of routine maintenance could be revised in a way that would improve industry’s legal position. Furthermore, the announced August 2003 rule exempting the replacement of certain equipment from NSR requirements—the fundamental basis for most of the coal-fired utility cases—also likely will discourage utilities from settling at least some of the remaining cases. The rule may also affect judges’ decisions regarding whether the companies have to install pollution controls, jeopardizing the expected emissions reductions. Overall, as a result of the final rule, the public may have less assurance that they will have notice of, and information about, company plans to modify facilities in ways that affect emissions, as well as less opportunity to provide input on these changes and verify they will not increase emissions. In some but not all cases, state or local regulations may require companies to continue to provide the public with this information and opportunities for input, or companies may do so voluntarily. However, the public will not have consistent access and input unless EPA better (1) defines the criteria companies use to determine if there is a reasonable possibility a facility change will trigger NSR requirements and (2) explains how the agency will ensure the public can access company documentation on such decisions and the resulting emissions. Otherwise, it will be more difficult not only for the public but also for EPA and state and local air quality agencies to ensure companies are complying with the federal NSR program and not increasing emissions in ways that affect localities’ air quality and public health. 
Recommendations for Executive Action

To better ensure the ability of federal, state, local, and public entities to monitor facility emissions and NSR compliance, we recommend that the EPA Administrator better define what constitutes a "reasonable possibility" that emissions after a facility change will trigger NSR requirements, require that companies maintain documentation on all "reasonable possibility" determinations, and determine, with state and local air quality agencies, how to ensure public access to companies' on-site information on facility changes and emissions. We provided DOJ and EPA with an opportunity to review and comment on a draft of this report. We subsequently received comments from both agencies. DOJ advised that it could not address the accuracy of, or otherwise comment on, the statements of EPA officials contained in the report. The agency did not address or comment on those portions of the report concerning public access to emissions data that GAO discussed exclusively with EPA. DOJ also advised that its position on the final and proposed regulations discussed in the report is contained in its legal filings in the power plant cases, and GAO was provided with a copy of those filings. DOJ stated that, since EPA's December 2002 announcement of the final and proposed NSR rule changes, it has continued to prosecute these cases vigorously and has also achieved settlements with four companies. DOJ also reiterated that its position as to the potential impact of the NSR rule announced in August 2003 has always been consistent and is reflected in its court filings—"that the rule only governs prospective conduct and should not impact the liability of companies who violated the law in the past." EPA generally agreed with the report's characterization of the NSR revisions' potential impact on the ongoing enforcement cases. 
In terms of the revisions’ impact on public access to information about facility modifications and emissions, however, the agency maintains the revisions, at a minimum, will not change, and most likely will increase, the amount of information available. According to EPA, before the revisions, companies were not obtaining federal NSR permits with their requisite public participation requirements for the types of changes that would be affected by the revisions, for several reasons. For example, companies could avoid federal NSR requirements for such changes by offsetting emissions increases with emissions reductions elsewhere in the facility (a process known as netting). EPA also maintains that even if these changes were not subject to federal NSR permitting requirements, they were subject to state and local permitting and public participation requirements in many cases, and that the NSR revisions would not change these underlying state and local programs. In addition, EPA said that facilities choosing to use a plantwide emissions limit have new and additional reporting requirements that could increase the information available, as we also point out in the report. Furthermore, the agency maintains that in the past, companies calculated the expected emissions from a modification and determined whether the emissions would increase enough to trigger federal NSR requirements. If the NSR requirements were not triggered, the companies did not have to keep records of the calculations. Now, companies can take the extra step of determining that even if the calculations do not show a significant enough increase, there is a “reasonable possibility” of an increase and companies must keep records on site supporting this determination. For our work, however, we compared the federal NSR requirements before and after the revisions and determined that the changes to these requirements could limit assurance that the public has access to information on facility changes and emissions. 
We did not have information on, and did not try to account for, the extent to which companies were actually triggering NSR requirements before and after the rule, or the effect this had on available information. Based on discussions with a number of state agencies and the national association representing them, among other stakeholders, as to whether state and local programs will continue to require permits and public notice for changes not subject to the federal program, we determined that the extent to which they will do so varies considerably across states and localities. For example, two states said they did not allow netting. Furthermore, a number of states indicated that even if such changes had been subject to their programs in the past, they might not be in the future because states and localities are facing pressures to modify their programs to match the federal NSR revisions and to not have more stringent requirements. As to GAO's recommendations, EPA did not take a formal position on the recommendations calling for additional guidance on reasonable possibility determinations and for the maintenance of records on all such determinations. The agency is still evaluating public comments it received on these issues as part of its agreement to reconsider portions of the NSR revisions and does not expect to make a final decision on the reconsideration process until the end of October 2003. EPA did agree with our recommendation on ways to better ensure public access to information on facility changes and emissions that companies maintain on site. DOJ and EPA also recommended a number of technical changes to the report, which we incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 10 days from the report date. At that time, we will send copies to the EPA Administrator, the Attorney General, interested congressional committees, and other interested parties. 
We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions, please call me at (202) 512-3841. Karen Keegan, Eileen Larence, Jeff Larson, and Lisa Turner made key contributions to this report. Nancy Crothers, Mike Hix, and Laura Yannayon also made important contributions. Our objectives were to determine (1) whether EPA and DOJ assessed the potential impact that issuing the final and proposed rules in December 2002 would have on enforcement cases pending against coal-fired utilities and what the assessments indicated, and (2) what effect, if any, the final rule might have on public access to information on facility changes and the resulting emissions. To respond to the first objective, we interviewed both current and former EPA officials and current DOJ officials that were involved in discussions about the impact of the revisions on the relevant enforcement cases. These officials included the former Principal Deputy Assistant Administrator for EPA’s Office of Enforcement and Compliance Assurance, the former Director of EPA’s Office of Regulatory Enforcement, the current Director of EPA’s Air Enforcement Division, and the DOJ Deputy Assistant Attorney General for Environment and Natural Resources. We also submitted written document requests to both agencies, asking that they provide GAO with all documents referring to, relating to, or describing the assessments of the potential impact of the NSR revisions on the pending enforcement cases and discussions between officials from EPA and attorneys from DOJ concerning these assessments. 
In the case of DOJ, the agency’s enforcement staff acknowledged that in July 2002, they had prepared an internal evaluation, as backup material for testimony, that summarized EPA’s public announcement the previous month concerning proposed NSR rule changes the agency was considering, the content of some of the potential revisions, and the relevance of those changes to filed enforcement cases. The DOJ enforcement officials were concerned about providing us a copy of this document primarily because it could impact the ongoing litigation of the cases. In the case of EPA, the officials acknowledged that they, too, had prepared assessments, and they discussed the general content of some of them with us. They also provided us access to (but not copies of) the assessments supporting the December 2002 final rule. The officials had concerns similar to those of DOJ about (1) describing all of the details about the changes made to the rule as a result of the assessments, and (2) providing us access to the assessments concerning the December 2002 proposed rule and the August 2003 rule. We did not further pursue access to this information because we had sufficient data to respond to our objectives, and it is GAO’s policy, except in limited circumstances, not to conduct work that would involve analyzing, evaluating, or commenting on specific issues that are pending before the courts. To respond to the second objective, we analyzed the December 2002 final rule to determine what provisions could impact public access to information about facility changes and their associated emissions. We interviewed the Director of EPA’s Information Transfer and Program Integration Division in the Office of Air Quality Planning and Standards, the Director of EPA’s Air Enforcement Division, and attorneys in EPA’s Office of General Counsel regarding the interpretation of relevant provisions of the rule and the potential effects of these provisions on public access. 
We also obtained the views of key stakeholders that could be affected by changes in public access to such information. To ensure we captured a wide cross section of interests, we focused on groups identified by EPA officials as key stakeholders, members of EPA’s CAA Advisory Council, national level groups that have testified before Congress on NSR and CAA issues over the last several years, national level groups that submitted comments to EPA in response to the agency’s request for public comment on its June 2001 NSR 90-Day Review Background Paper (many of these were identified in EPA’s June 2002 NSR Report to the President), and trade associations representing those industries EPA identified as being most affected by NSR. Stakeholders included officials from the American Forest and Paper Association, Clean Air Trust, Georgia Pacific Company, National Petrochemical and Refiners Association, Natural Resources Defense Council, New York State Attorney General’s Office, Rockefeller Family Fund’s Environmental Integrity Project, and the professional association representing State and Territorial Air Pollution Program Administrators and the Association of Local Air Pollution Control Officials. We conducted our work between August 2002 and October 2003 in accordance with generally accepted government auditing standards.
Recent Environmental Protection Agency (EPA) revisions to the New Source Review (NSR) program--a key component of the federal government's plan to limit harmful industrial emissions--have been under scrutiny by the Congress, environmental groups, state and local air quality agencies, the courts, and several industry groups. The revisions more explicitly define when companies can modify their facilities without needing to obtain an NSR permit or install costly pollution controls, as NSR requires. GAO was asked to determine (1) whether EPA and the Department of Justice (DOJ) assessed the potential impact of the revisions on the ongoing enforcement cases against coal-fired utilities and, if so, what the assessments indicated; and (2) what effect, if any, the revisions might have on public access to information about facility changes and their resulting emissions. EPA staff assessed the potential impact of the NSR revisions on the utility enforcement cases and, according to current and former EPA enforcement officials, determined that some of the revisions could affect the cases. EPA staff discussed the potential effects of the revisions with DOJ. In part as a result of the assessments, EPA changed some of the revisions before issuing them as final and proposed rules in December 2002. Specifically, EPA changed the content and wording of some of the provisions included in the final rule and determined that the rule would not affect the cases. However, EPA enforcement officials were very concerned that the proposed rule--addressing when a company could consider a facility change "routine maintenance, repair, or replacement" and exempt from NSR--could have a negative impact on the cases. The concern was that proposing one specific definition for this exclusion that differed from the way the agency had applied it in the past could affect the cases' outcome. 
Consequently, EPA instead proposed several alternative definitions--different cost thresholds below which a company could make a change that is exempt--for public comment. Nevertheless, some of the enforcement officials and stakeholders believe that industry's knowledge that EPA could be defining the exclusion in terms more favorable to industry delayed some settlements while the rule was being developed, jeopardizing expected emissions reductions. Subsequently, in August 2003, despite seven ongoing cases, EPA announced a final rule specifying a 20 percent cost threshold below which a company could make certain changes and consider them routine replacement and exempt from NSR. EPA and DOJ maintain that the rule will not affect the cases because it applies only to future changes. But some EPA enforcement officials and stakeholders are concerned that even if judges find companies to be in violation of the old rule, judges could be persuaded, when setting remedies, to not require the installation of pollution controls--limiting emissions benefits--because under the 20 percent threshold, most of the facility changes in dispute would now be exempt. Certain provisions in the December 2002 final rule could limit assurance of the public's access to data about--and input on--decisions to modify facilities in ways that affect emissions. This would make it more difficult for the public to monitor local emissions, health risks, and NSR compliance. Under the rule, fewer facility changes may trigger NSR and thus the need for permits and related requirements to notify the public about changes and to solicit comments--unless state and local air quality agencies have their own permit and public outreach rules. However, the scope of these state and local rules varies widely. Also under the rule, companies will now determine whether there is a "reasonable possibility" a facility change will increase emissions enough to trigger NSR--in effect policing themselves. 
But EPA has not defined "reasonable possibility," required that companies keep data on all of their reasonable possibility determinations, or specified how the public can access the data companies do keep on site.
GSA’s existing government-wide telecommunications program is the successor to a series of programs that have provided data services and long-distance telecommunications to the federal government. In 1998 and 1999, GSA awarded two sets of contracts under the FTS2001 program, which was designed to meet agency needs for various telecommunication services, including long distance voice, video, and data services. In 2007, GSA awarded successor contracts through an effort called Networx. These contracts, which had an estimated combined value of $20 billion, included a wider array of services provided through two sets of contracts with differing characteristics:

- GSA awarded Networx Universal contracts to AT&T, Verizon Business Services, and Qwest Government Services. Networx Universal offers voice and data services, wireless services, and management and application services, including video and audio conferencing, as well as mobile and fixed satellite services, with national and international coverage. Networx Universal contracts were set to expire in March 2017; however, each participating vendor received a contract extension through March 2020.

- GSA awarded Networx Enterprise contracts to AT&T, Verizon Business Services, Qwest Government Services, Level 3 Communications, and Sprint Nextel. Networx Enterprise offers services similar to those of Networx Universal, with a focus on those that are Internet-based, and does not require coverage of as large a geographic area as does Networx Universal. Networx Enterprise contracts were set to expire in May 2017; however, each participating vendor, except one, received a contract extension through May 2020.

EIS is the replacement for Networx and all of GSA’s local and regional telecommunications contracts. GSA intends for EIS to address federal agencies’ global telecommunications and information technology infrastructure requirements.
It is the first set of contracts to be developed under GSA’s Network Services 2020 (NS2020) strategy. GSA plans for EIS to provide agencies with traditional and emerging services to meet current and future requirements by:

- simplifying the government’s process of acquiring information technology and telecommunications products and services;
- providing cost savings to each agency through aggregated volume buying and pricing and spending visibility;
- enabling the procurement of integrated solutions;
- promoting participation by small businesses and fostering competition;
- offering a flexible and agile suite of services supporting a range of government purchasing patterns into the future; and
- providing updated and expanded security services to meet current and future government cybersecurity requirements.

In addition, GSA has identified several benefits that EIS is expected to provide to the agencies that participate in its telecommunications programs. These projected benefits include:

- streamlined contract administration, including catalog-based offerings;
- future-proof contracts (price management mechanism, 15-year period of performance);
- simplified pricing, including simplified contract line item number structure; and
- enhanced management and operations support.

GSA issued its request for proposals (RFP) for EIS in October 2015. Vendors’ responses to the RFP were received by February 2016. According to FAS officials, GSA held discussions with offerors in 2016 and received proposals in December of that year. However, GSA determined that none of the proposals met the defined requirements. After another round of discussions, GSA received updated proposals on March 31, 2017. While GSA determined that these revised proposals met the requirements, a pre-award protest was filed on April 17, 2017. The protest was then withdrawn in May 2017. On August 1, 2017, GSA announced that it had awarded EIS contracts to ten vendors.
GSA expects agencies to issue notices to vendors providing a fair opportunity to be considered for a task order within 2 months of contract awards. According to GSA’s plans, the transition to EIS is expected to be completed by March and May 2020, when the current Networx contracts expire. A timeline of the transition to EIS is provided in figure 1. Central to the successful transition from Networx to EIS are transition planning and execution activities that involve GSA, federal agencies, and Networx and EIS contractors. GSA serves as the facilitator for all transition management activities and is using contract support to assist in tracking transition activities in order to avoid delays and other problems that can arise throughout the process. To assist agencies with their transitions from the Networx contracts, GSA is working with representatives of the federal agencies, both directly and through an Infrastructure Advisory Group. This group is a collaborative body for aligning government-wide and agency missions with GSA strategies for acquiring and providing the future technology infrastructure services that will enable them. GSA’s primary responsibility is to provide program management for both Networx and EIS. As part of this, it is responsible for:

- conducting government-wide strategy and project management;
- collecting and validating an inventory of active services on all expiring contracts;
- providing tailored assistance to agencies for transition planning and help with contractor selection and ordering;
- tracking and reporting the use of metrics that convey the relative complexity and transition progress; and
- providing customer support, training, and self-help tools and templates.

According to FAS officials, GSA’s approach to the current transition includes providing direct assistance to agencies, with GSA performing some transition tasks for small agencies and offering contractor assistance to larger agencies.
GSA developed two contracting vehicles to support its efforts: (1) a Transition Coordination Center vehicle that includes assistance with inventory validation, transition planning, and solicitation development; and (2) a Transition Ordering Assistance vehicle that addresses tasks including requirements development and source selection assistance, and proposal evaluation. The Coordination Center vehicle was put in place in January 2016, while the Ordering Assistance vehicle was initially awarded in September 2016 but was not finalized until March 2017 due to a bid protest. GSA’s customer agencies—those federal agencies acquiring services through the Networx program—have principal responsibility for the transition. These agencies are responsible for coordinating transition efforts with the incumbent and EIS contractors to ensure that existing services under Networx are disconnected and that new services are ordered. According to GSA, customer agencies’ responsibilities under EIS include:

- identifying key personnel, chiefly a Senior Transition Sponsor, Lead Transition Manager, and Transition Ordering Contracting Officer;
- engaging expertise from Chief Information Officers, Chief Acquisition Officers, and Chief Financial Officers to build an integrated transition team of telecommunications managers, acquisition experts, and financial staff;
- developing a financial strategy and budget for transition costs beginning in fiscal year 2017;
- analyzing and confirming the accuracy of the inventory of active services that must be transitioned;
- developing an agency transition plan by October 2016 that describes the agency’s technological goals, transition schedule, strategy for awarding task orders on EIS for transitioning services, and any constraints or risks; and
- preparing solicitations for task orders to be released immediately upon award of EIS contracts.
At the agencies we selected, the staff responsible for the transition were part of their agency’s Office of the Chief Information Officer (OCIO). We have previously reported on efforts by GSA and agencies to transition from one telecommunications program to another. In a June 2006 report, we identified a range of transition planning practices that can help agencies reduce the risk of experiencing adverse effects of moving from one broad telecommunications contract to another. We developed these practices through an analysis of available literature on telecommunications transitions and interviews with those having experience in telecommunications transitions, including industry experts, telecommunications vendors, and private sector companies. These planning practices are to:

- Establish an accurate telecommunications inventory and an inventory maintenance process.
- Identify strategic telecommunications requirements and use the requirements to shape the agency’s management approach and guide efforts when identifying resources and developing a transition plan.
- Establish a structured management approach that includes a dedicated transition management team, key management processes (project management, configuration management, and change management), and clear lines of communication.
- Identify the funding and human capital resources that the transition effort will require.
- Develop a transition plan that includes objectives, measures of success, a risk assessment, and a detailed timeline.

Each of these transition planning practices consists of various activities. For example, developing a transition plan consists of (1) identifying and documenting objectives and measures of success; (2) determining risks that could affect success; and (3) defining transition preparation tasks and developing a timeline for these tasks.
That same June 2006 report evaluated the progress of six selected agencies in preparing for the transition from FTS2001 to Networx and found that the agencies generally had not implemented the practices, but were planning to do so. We recommended, among other things, that GSA develop and distribute guidance to its customer agencies to ensure that the identified transition planning practices were used. GSA agreed with our recommendations and subsequently issued guidance related to several of the identified practices. Further, in 2008, we reported on the extent to which six selected agencies were following the transition planning practices during the Networx transition. We noted that the agencies were generally implementing the practices, but three of them had not fully implemented some of the key activities of the practices and were not planning to do so. For example, one agency was using key project management processes in its transition planning efforts, and five had plans to use them. Regarding identifying human capital needs, two agencies had identified their resource needs, and three had plans to identify them. Also, one of the agencies did not plan to identify its human capital needs. We made recommendations to those agencies that had not implemented key practice activities and did not plan to do so, focused on addressing the gaps in transition planning. One agency implemented the recommendation we made to it, one implemented one of the two recommendations directed to it, and one agency implemented one of the seven recommendations we made to it. In 2013, we reported on factors that had contributed to the delay in the Networx transition and the consequences of the delay. We pointed out that weak project planning and complex acquisition processes were factors that had contributed to the delay. We also reported on the extent to which GSA was documenting and applying lessons learned to prepare for the current EIS transition. 
In comparing GSA’s lessons-learned process with six key practices necessary for a robust lessons-learned process, we noted that GSA had fully satisfied three of the six key practices. Specifically, it had collected, analyzed, and validated lessons learned from the previous Networx transition. However, GSA had not fully satisfied the remaining three practices: (1) sharing lessons with its customer agencies, (2) archiving the lessons learned, and (3) prioritizing them to ensure that resources are applied to areas with the greatest return on investment. For example, GSA shared briefings of lessons learned with agencies and OMB; however, it did not make the information in its 2012 lessons-learned report readily available to agencies and other transition stakeholders. As a result, we recommended that GSA coordinate with the Office of Personnel Management (OPM) for future transitions to examine potential government-wide expertise shortfalls. We also recommended that it provide agencies with guidance on project planning and fully archive, prioritize, and share lessons learned. As of June 2017, GSA had implemented three of the five recommendations we made. Specifically, in accordance with our recommendations, GSA had provided project planning guidance to agencies, updated its transition lessons-learned database, and prioritized its lessons learned. In addition, GSA had begun but not completed implementation of the recommendation to apply lessons based on priority and available resources. GSA agreed with the recommendation regarding expertise shortfalls but had not yet implemented it. The use of lessons learned ensures that beneficial information is factored into planning, work processes, and activities. Lessons learned can provide a powerful method of sharing good ideas for improving work processes, quality, and cost-effectiveness. Key lessons-learned practices, as described in our earlier work, include disseminating lessons-learned information to all involved parties.
This practice emphasizes that lessons learned should be disseminated through a variety of communication media, such as briefings, bulletins, reports, e-mails, websites, database entries, revised work processes or procedures, and personnel training. In addition, according to the Project Management Institute’s Guide to the Project Management Body of Knowledge (PMBOK® Guide), distributing lessons learned is important because they can provide insights on both the decisions made regarding communications issues and the results of those decisions in previous similar projects. The knowledge can be used to plan the communication activities for the current project. GSA compiled lessons learned from previous telecommunications transitions, including 35 lessons that described actions that agencies should take during future transitions. Two of these lessons address issues that are not appropriate for the current transition, leaving 33 lessons for agencies. GSA subsequently disseminated a number of these lessons learned to agencies via various sources, including transition plans and guidance. For example, to prepare for the current transition from Networx to EIS, GSA developed plans, documents, presentations, and other transition-related guidance sources in which it discussed lessons learned resulting from the prior transitions. Table 1 describes the transition guidance for EIS that GSA provided to agencies at two intervals: by December 2016, when GSA had initially planned to issue the EIS contracts, and between January and April 2017, to account for new guidance issued after contract awards were delayed. However, while the transition plans and guidance that GSA issued to agencies included discussions of lessons learned, they did not do so comprehensively or consistently. First, none of these sources addressed all 33 of the agency-focused lessons that GSA had identified. 
For example, the 2012 Lessons-Learned Report addressed 19 lessons (the most of any source), but did not address the remaining 14. The EIS Acquisition and Transition presentation to small agencies addressed 6 of the 33 lessons. Second, even when GSA guidance addressed a previous lesson, it did not always include all aspects of the lesson. Overall, when GSA’s guidance addressed a lesson, it more frequently addressed the lesson partially rather than fully. For example, one lesson called for agencies to recognize the possibility that they might change vendors and to develop plans to mitigate the risks from such a change. However, although one guidance source (the 2012 Lessons-Learned report) told agencies to plan for a change in vendors, it did not specify that they plan to mitigate associated risks. In addition, another lesson stressed that the coordination of service disconnects and activations by different vendors was essential. One guidance source (GSA White Paper: NS2020 Transition Strategy) discussed the need for coordinated disconnects, but did not discuss activations by different vendors. Figure 2 lists the number of lessons that were fully, partially, or not addressed within each of GSA’s various transition guidance documents. When the information provided in GSA’s guidance is considered collectively, significant gaps in communicating previous lessons learned are evident. In the initial guidance released by December 2016, 15 lessons were fully addressed in the body of the guidance, 9 lessons were partially addressed, and 9 lessons were not addressed at all. Additional guidance that GSA released between January and April 2017 addressed more lessons learned, but did not include all of the lessons learned that were not previously disseminated. In total, the 12 guidance sources released by April 2017 fully addressed 17 of the 33 lessons learned and partially addressed another 9. 
The guidance sources did not address 7 lessons, including those related to agencies (1) bearing the costs associated with contract extensions resulting from delays in their contract selections, transition planning, or ordering; and (2) not assuming that a transition to a new contract with the same vendor will be easier than a change in vendors. Figure 3 shows the collective number of lessons that were fully, partially, and not addressed in the GSA guidance. In addition, appendix II describes each lesson learned and the extent to which it was addressed in the guidance. FAS officials responsible for the transition cited several reasons for not fully addressing lessons learned from the prior telecommunication transitions in the planning and guidance documents for the EIS transition. These reasons included:

- Lessons were originally developed to encourage agencies to consider the actions; however, GSA has since changed its thinking on a number of these lessons learned and believes they are no longer applicable or relevant during the transition to EIS.
- Several lessons are not specifically addressed in current guidance because the agencies are not at the point in the transition where that level of detail would be useful.

We agree that two of the 35 lessons—those addressing the ordering of wide area network and trusted Internet connection services—are not appropriate to the current transition due to changes in the proposed contracts. However, we do not agree with many of GSA’s assessments of the lessons that were not addressed. For example, one lesson that GSA said was not applicable in December 2016 addressed being prepared for the possibility that the agency’s current vendor will not be chosen for the new contract. Because the EIS contracts had not been awarded, this was still a possibility which agencies should consider.
Another lesson that was not addressed is the need to allow service changes during the transition—an issue we maintain is still relevant due to the length of time needed to complete a transition. In addition, one lesson that GSA said was more appropriate for later in the transition states that agency contracting officers should meet with GSA contracting officers for advice. In our view, however, this lesson is appropriate for all phases of transition planning efforts. By not including all lessons learned in its plans and guidance to agencies, GSA limits agencies’ ability to plan for actions that will need to be taken later in the transition. As a result, the risk is increased that agencies could repeat prior mistakes, including those that could result in schedule delays or unnecessary costs. As discussed earlier, we previously identified a set of planning practices that can mitigate the risks associated with a complex telecommunications transition. These practices, which we reported on in 2006 and 2008, call for agencies to:

1. Develop asset and service inventories.
2. Incorporate strategic needs into transition planning.
3. Develop a structured transition-management approach.
4. Identify resources necessary for the transition.
5. Establish transition objectives, risks, and measures of success.

However, as of May 2017, none of the five agencies selected for our review had fully implemented all five of the practices. These agencies (DOL, DOT, SEC, SSA, and USDA) had generally addressed parts of all five practices, and one agency had fully implemented one practice. The selected agencies provided various reasons for not fully adopting the practices: uncertainty due to delays in awarding the EIS contracts, the lack of specific direction and planned contractor assistance from GSA to implement the practices, and plans to implement the practices later as part of established agency procedures for managing IT projects.
However, going forward, if the agencies do not fully implement the practices, they will be more likely to experience the kinds of delays and increased costs that occurred in previous transitions. To accomplish Practice 1—developing an accurate inventory of current telecommunications assets and services—the transition planning practices we previously identified state that agencies should complete two activities. First, agencies should have a detailed and complete transition inventory that reflects all of their facilities, components, field offices, and any other managed sites. The inventory should include information such as telecommunications services, traffic volumes, equipment, and applications being used. In addition, agencies should use their transition inventories to identify opportunities for optimizing their current technology during strategic planning. Second, agencies should have a documented inventory-maintenance process that can be used to ensure that inventories remain current and reflect changes leading up to, during, and after the transition. An inventory-maintenance process can ensure that changes are captured and allow agencies to verify vendor bills against their inventories throughout the life of the contract. Consistent with the first activity in this practice, all five selected agencies had begun to develop service inventories. However, only one of the agencies had completed its inventory. Specifically, SEC had identified an inventory that included all agency components receiving telecommunications services, validated the inventory with data provided by GSA, and demonstrated that it had adequate procedures for ensuring the completeness of the inventory. The four other agencies had developed telecommunications inventories, but had not verified that the inventories were complete. SEC was also the only agency to complete the second activity related to having a documented inventory maintenance process. 
In this regard, it had documented procedures for updating its inventory. A second agency, SSA, had established procedures for the reconciliation and maintenance of local and long distance telecommunications services, but not for other contracted services. The remaining three agencies did not have documented procedures requiring inventory updates. Table 2 summarizes the extent to which transition planners at the five agencies had implemented the practice to establish telecommunications inventories. The four agencies that did not have complete inventories or procedures to update their inventories cited several reasons for their status. Officials responsible for the transitions at the three agencies with components (DOL, DOT, and USDA) said that they have decentralized inventory maintenance among their components. However, none of these agencies has written policies that require components to develop complete inventories and keep them updated. As a result, some of the agencies’ components could demonstrate that their inventories were complete, while other components could not. In addition, SSA’s Division Director for Integrated Telecommunications Management (who is within the agency’s Office of the Chief Information Officer (OCIO)) attributed that agency’s delay in developing a complete inventory and a maintenance process to GSA not providing promised contractor assistance with validating its inventory. However, while the contracting vehicle for supporting later planning tasks was delayed due to the bid protest, the vehicle that GSA provided for agency assistance with inventory validation had been in place since January 2016. Two of the four agencies identified several actions they plan to take to address these gaps. USDA and DOT officials responsible for their agencies’ transitions (who are within their departments’ OCIOs) said they plan to develop a department-wide process that components will be expected to use.
DOT officials also discussed the possibility that the Department would centralize the inventory maintenance process in the future. The two agencies, however, had not established deadlines for completing these actions. Further, with regard to Labor, officials responsible for its transition said they did not plan to develop a policy or procedures governing how components should maintain an inventory of telecommunications assets and believed such an approach to be unnecessary. Without complete and accurate telecommunications inventories, the selected agencies are less likely to be prepared to address strategic considerations and may be unable to avoid unnecessary transition delays associated with inventory identification. Additionally, without a documented inventory-maintenance process, the agencies may not consistently and accurately capture the changes to their telecommunications inventories during and after transition, thus hindering their ability to ensure that they are billed appropriately by the vendor or to determine areas for optimization and sharing of telecommunications and IT resources across the agency. To accomplish Practice 2—performing a strategic analysis of telecommunications requirements—the transition planning practices we previously identified state that agencies should complete four activities. First, agencies should use their inventories of existing services to determine current and future telecommunications needs. Next, they should use the transition as an opportunity to identify areas for optimization or sharing of telecommunications and IT resources across the agency. Agencies should also evaluate the costs and benefits of introducing new technology and alternatives for meeting the agency’s telecommunications needs.
Finally, they should align the identified needs and opportunities with the agency’s mission, long-term IT plans, and enterprise architecture plans. Two of the selected agencies (SSA and USDA) had partially addressed the first activity, related to determining future telecommunications needs. Specifically, SSA documented future requirements based on interviews with stakeholders. However, SSA did not document that it based the identified needs on its existing inventory. In addition, USDA created a preliminary set of future telecommunications needs. However, these needs had not been finalized. According to officials responsible for USDA’s transition, finalization is expected in October 2017, which will allow time for USDA components and vendors to provide feedback that will be integrated into the preliminary set of future telecommunications needs. The remaining three agencies (DOL, DOT, and SEC) had not begun to identify future needs based on their current inventories. With regard to the second activity of the practice, two agencies (DOL and SSA) had completed efforts to identify areas for the optimization and sharing of telecommunications and IT resources. In addition, one agency had partially implemented this activity. Specifically, USDA had identified options for optimization, but as of July 2017, it was still working with its components and vendors to evaluate the options. According to agency officials, they expect to reach a decision on options in October 2017, but this schedule is not documented. With regard to the two other agencies, DOT’s future plans were unclear because it was awaiting IT investment management approval. Further, officials with SEC stated that the commission would address this practice later in 2017. None of these agencies had documented plans or timeframes for completing this activity.
Consistent with the third activity of this practice, USDA had evaluated the costs and benefits of new technology and alternative options for meeting its telecommunications needs. SSA had partially addressed this activity in that it had begun to evaluate the costs and benefits of upgrading agency bandwidth, but had not yet evaluated costs and benefits for introducing other new technology and alternatives for meeting the agency’s telecommunications needs. SSA officials said they planned to conduct such an analysis at a later time but had not documented plans to do so. The other three agencies had not yet addressed this activity. Finally, in addressing the fourth activity, three of the five agencies had begun to determine whether their needs and opportunities were aligned with their mission, long-term IT plans, and enterprise architecture plans, although they had not yet completed these activities. Specifically, DOT had demonstrated that its identified needs and opportunities aligned with its mission. However, it did not demonstrate a similar alignment with its long-term IT plans and enterprise architecture plans. In addition, SSA had begun to align identified transition needs and opportunities with the agency mission and long-term IT plans. However, it had not fully identified its transition needs or evaluated those needs against its enterprise architecture. SSA also had determined that its identified telecommunications needs aligned with its long-term plans, as they related to two ongoing modernization projects. However, the agency did not show that the needs aligned with its enterprise architecture. USDA also had aligned identified needs with its mission and enterprise architecture plans. However, the agency had not aligned identified needs and opportunities with its long-term IT plans. The remaining two agencies, DOL and SEC, had not yet implemented this practice.
Table 3 summarizes the extent to which the five agencies performed a strategic analysis of their telecommunications requirements. ● Practice activity has been fully implemented. ◒ Agency has partially implemented practice activity. ○ Agency has not implemented practice activity. Three of the agencies attributed their limited progress on this practice to their use of established agency IT management processes and their related time frames. DOL transition officials (who are within the department’s OCIO) stated that they had begun to manage the transition within the agency’s systems development life cycle process, but it was too early for most planning activities to be completed. DOT officials stated that their agency was conducting a network assessment, causing a delay in fulfilling this planning practice. The officials also said that their agency’s specific management plans had not been finalized because the agency intended to manage the transition as a project within its IT investment management process; however, they had not yet received approval to do so. Further, officials from SEC’s OCIO stated that they were following internal agency best practices for managing a project and adhering to the systems development life cycle. Additionally, officials at three agencies described plans to address this practice at a later time. DOL officials stated that they planned to issue a request for information to ask vendors what new technologies are available to meet the Department’s needs and to suggest changes to the existing telecommunications infrastructure. When we discussed this issue in December 2016, SEC officials stated that some actions could not be completed until GSA awards the EIS contract because they did not yet know what services would be available or their prices.
In addition, SSA’s telecommunications management division director stated that several activities were initially delayed due to GSA not providing promised contractor assistance, which required the agency to obtain assistance on its own. However, the director added that SSA’s transition is now on schedule, and it has begun addressing this practice using contractor support. While the selected agencies’ established IT management processes can contribute to the fulfillment of the practice related to identifying strategic needs, the limited time available for the transition leaves agencies with a short window in which to make such determinations. As a result of the delays in identifying their needs, agencies will have less time to implement the resulting changes while meeting the deadlines for transitioning off the Networx contracts. Also, agencies that do not fully assess the costs and benefits of alternatives for meeting their telecommunications needs may not be taking full advantage of the transition as an opportunity to optimize their telecommunications services. Further, agencies that do not identify areas for optimization and sharing miss opportunities to upgrade their telecommunications services or to shift service to more cost-effective technology. If agencies do not incorporate strategic requirements into their planning, they risk making decisions that are not aligned with their long-term goals. Without aligning needs and opportunities with missions and plans, agencies risk missing opportunities to use the new contract to address their highest priorities. To accomplish Practice 3—establishing a structured transition management approach—the previously identified transition planning practices state that agencies should complete three activities.
They should establish a transition management team to be involved in all phases of the transition and clearly define the responsibilities for key transition activities, such as project management, asset management, contract and legal expertise, human capital management, and information security management. Agencies should also ensure that all transition team members are clear on who is involved and how transition plans and objectives will be communicated. Finally, agencies should ensure that they use established project management, configuration management, and change management processes during the transition. All five selected agencies established transition-management teams, as outlined in the first activity of Practice 3. Transition plans written by the agencies identified management teams and stakeholders responsible for their transitions. However, of the five agencies, only SSA had defined all of the roles and responsibilities identified in the practice. For example, DOT and SEC defined roles for project and information security management and contract expertise, but did not define roles for asset and human-capital management and legal expertise. DOL and USDA defined roles for project management and contract expertise, but did not define roles for asset, human capital, and information security management, and legal expertise. The selected agencies generally had made more limited progress on the second activity of Practice 3, regarding communicating their transition plans. SSA had implemented this practice, while three other agencies had not yet done so. One other agency, SEC, had partially implemented the practice. Specifically, it had developed a plan that identified those who are to be involved in the transition.
However, the plan did not address other key aspects of this practice, including identifying key local and regional transition officials and points of contact responsible for disseminating information to employees and working with the vendor to facilitate transition activities. Four of the five agencies had begun using the types of management processes described in the third activity in Practice 3. Specifically, DOT, SEC, SSA, and USDA had demonstrated the use of established project management processes for their transitions, which included the use of schedules, task lists, and risk assessments. However, none of these agencies demonstrated that configuration or change management processes, which reduce the risks associated with technical and operational changes, were being applied to the transition. Further, the fifth agency, DOL, had not addressed this practice. Table 4 summarizes the extent to which the five selected agencies had established a structured transition management approach. The agencies cited several reasons for not fully implementing the practice. Regarding the establishment of a management team with defined roles, DOT officials stated that some stakeholders were not involved in the early stages of the transition because the department typically does not involve all stakeholders until later in the project management life cycle. The officials also said that development of a communications plan and implementation of change and configuration management would be completed at a later time. However, DOT had not documented a plan or schedule for doing so. SEC officials stated that the agency had legal and human capital expertise on the project; however, because SEC is a small agency, individuals cannot always be dedicated to a project. The officials also stated that the agency intended to handle communications through its established practice of weekly calls between IT staff and regional managers, although this process had not been documented. 
Additionally, the officials stated that formal change and configuration management practices apply to all of the agency’s IT projects, but did not demonstrate that those practices applied to its telecommunications transition. For SSA, the telecommunications management division director stated that the EIS transition is part of a modernization effort that is subject to agency requirements to use established change and configuration management processes. As a result, such practices will also be used in the transition. However, SSA did not document this approach. In addition, DOL officials stated that once an integrated project team has been formed, the transition effort is expected to proceed through the traditional systems development life cycle, which will address the practice activities related to project and change management. Similarly, USDA officials stated that the department plans to assign human capital resources later in the management process. Both departments’ officials also described plans to use configuration and change management processes during the transition to EIS, but those plans were not documented. These officials did not identify specific dates by which their planned actions are expected to be completed. Agencies that do not use a sound management approach risk additional financial costs, extended timelines, and disruptions to the continuity of their telecommunications systems. Further, without establishing lines of communication and identifying local and regional points of contact, agencies may lack the quality of information that is necessary for comprehensive understanding, accountability, and shared expectations among all those with transition responsibilities. Finally, by not defining key roles and responsibilities for the transition, the agencies risk extending their transition period as they attempt to assign appropriate personnel and update them on transition progress and issues.
Due to the short time available to complete the current transition, effectively employing these practices will require expeditious action. To accomplish Practice 4—identifying the resources required to successfully plan for the transition—the transition planning practices we previously identified state that agencies should complete four activities. First, they should identify the level of funding needed for their transition planning efforts to ensure that needed resources are available. Next, agencies should identify the organizational need for investments and assess benefits versus costs to justify any resource requests. Agencies should also determine staffing levels that may be required throughout the transition effort, as well as ensure that personnel with the right skills are in place to support the transition effort. Skills needed for this activity are project management, asset management, contract and legal expertise, human capital management, and information security expertise. Finally, agencies should require training for those carrying out the transition or operating and maintaining newly transitioned technology. One selected agency (USDA) fully implemented the first activity in Practice 4, having identified the level of funding needed to support its transition planning. Three other agencies (DOT, DOL, and SSA) identified funding for part of the transition effort but did not identify funding to support other parts of the effort. Specifically, DOT developed a rough estimate for transition planning support, but this estimate had not been approved and it did not account for funds used for planning efforts completed prior to fiscal year 2017. Further, cost projections that DOL and SSA developed did not account for all years of transition support and the agencies did not provide evidence that the costs accounted for the transition management team. The fifth agency (SEC) had not identified its funding needs for the transition. 
For the second activity of Practice 4, DOL demonstrated that it had identified the funding needed for transition project management, but not for software and hardware upgrades, the establishment of a reliable inventory, or the costs and benefits to justify any resource requests. In addition, SSA and USDA identified the need for transition resources, including staffing, but did not document cost-benefit justifications for those resources. The remaining two agencies (DOT and SEC) had not implemented this activity. Three agencies also had partially implemented the third activity of the practice. Specifically, DOL, SSA, and USDA had identified staffing levels required for their near-term transition efforts. However, these agencies had not substantiated that the staff identified will be sufficient to support their entire transition efforts. DOT and SEC had not addressed this practice. With regard to the fourth activity of the practice, four agencies (DOL, DOT, SEC, and USDA) demonstrated that their agencies had provided training to transition support staff. However, none of these agencies showed that they had conducted an analysis to identify all of the training needed for the transition, including training on new equipment or services. The fifth agency (SSA) had not implemented this practice. Table 5 summarizes the extent to which the five agencies identified resources for their transitions. Officials at the five agencies generally explained that they had not developed specific resource estimates for their transition efforts because they did not have an immediate need to do so. DOL officials explained that, prior to fiscal year 2017, the department had not required additional funding for the transition because it had leveraged resources from an existing funded project. In addition, DOT officials stated that they had conducted early transition planning using existing resources. 
SEC officials stated that, based on past experience, the commission did not require additional funding or resources to support the transition because it was considered to be a part of ongoing support funding from its operations and maintenance budget. SEC officials added that, if something in the new contracts required a change in current SEC telecommunications services, they would follow the existing agency process for requesting supplemental funding. In addition, when we discussed these topics in July 2017, officials at two agencies generally expressed uncertainty about the scope of the transition because they did not know what services would be available under the new contracts. Once the contracts are awarded, according to the officials, they expected to be better positioned to plan for needed resources. SEC officials also said that they planned to take advantage of transition-related training from GSA when it becomes available. Further, SSA’s telecommunications management division director said that the agency plans staffing annually and relies on current year resource usage to plan staffing needs for future years. The director added that, if additional staffing or other resources are needed, the request would be justified to the agency’s oversight board. In addition, the director believed, based on prior experience, that SSA’s staff are adequately trained, but did not have any documented analysis to support this assertion. While it may be premature to estimate all transition-related resource needs, agencies that do not take steps to analyze their needs may be underestimating the complexity and demands of the transition effort. Additionally, without determining staffing needs for their transition efforts, agencies risk experiencing gaps in staffing, which may lead to delays and unexpected costs.
Moreover, agencies that do not plan for required training are likely to incur unnecessary costs and experience delays as they try to quickly address gaps in staff competencies during the transition’s short time frame. To accomplish Practice 5—developing a plan that identifies objectives, risks, and measures of success, and that approaches the process as a critical project with a detailed timeline—the previously identified transition planning practices state that agencies should complete three activities. Agencies should first identify transition objectives and measures of success. Transition objectives should be based on a strategic analysis of telecommunications requirements and aligned with an overall mission and business objectives. Agencies should also identify agency-specific risks that could affect transition success. The importance of the risks should be evaluated relative to the agency’s mission-critical systems and continuity of operations plans. This risk assessment should include an analysis of information security risks to determine what controls are required to protect networks and what level of resources should be expended on controls. Lastly, agencies should develop a transition plan that depicts a management strategy with clearly defined transition preparation tasks and includes a timeline that allows for periodic reporting and takes into account priorities relative to the agency’s mission-critical systems, contingency plans, and identified risks. One selected agency (DOL) had fully implemented the first activity of this practice by identifying objectives and measures of success linked to the agency’s requirements and business needs. The remaining four agencies partially implemented the activity. Specifically, DOT, SEC, and USDA documented agency-specific transition objectives and measures of success.
However, these agencies did not demonstrate that their transition objectives were based on a strategic analysis of telecommunications requirements and were aligned with the agency’s overall mission and business objectives. SSA had documented agency-specific transition objectives but had not documented measures of success. According to officials responsible for SSA’s transition, the agency plans to develop such measures in the future, but had not established a deadline for doing so. All five selected agencies at least partially addressed the second activity of Practice 5 by identifying agency-specific risks that could affect transition success and by clearly defining transition preparation tasks. Three agencies (SEC, SSA, and USDA) identified information security risks, as called for in the practice activity. However, DOL and DOT risk assessments did not include information security risks. Moreover, none of the agencies considered continuity of operations in their risk assessments or took into account priorities relative to their mission-critical systems. With respect to the third activity of the practice, each of the agencies at least partially defined transition preparation tasks and developed a timeline. However, the timelines did not take into account priorities relative to the agencies’ mission-critical systems, contingency plans, and identified risks. For example, in its transition plan, SEC identified multiple risks that could delay the transition, if realized, such as compliance with Office of Management and Budget security requirements. However, SEC provided no evidence that such risks or associated mitigation activities were accounted for in transition preparation tasks. Table 6 identifies the extent to which the agencies had developed plans for the transition. ● Practice activity has been fully implemented. ◒ Agency has partially implemented practice activity. ○ Agency has not implemented practice activity.
Officials responsible for the transitions at the agencies we reviewed generally described their intent to complete the practices related to planning later in their transitions. DOL officials explained that the next step in their planning process would be to form an Integrated Project Team for the transition. The officials stated that, once the team is formed, the transition effort will proceed through the traditional systems development life cycle and begin to document plans and decisions, which would contribute to the last two practice activities. With regard to DOT, an official stated that the agency was conducting a network assessment, causing a delay in completing this practice. SEC officials stated that the goal and primary measure of success for the transition would be zero downtime, but that it did not expect to trace other measures of success to business objectives. Additionally, officials at the other two agencies explained that while they had not fully implemented this practice, they plan to do so later, but did not identify a deadline. SSA’s telecommunications management division director stated that the agency could not complete a detailed transition timeline because such a timeline would have to be based on the winning contractor bids. In addition, USDA officials stated that the agency was still working on a Statement of Objectives for transition services and expected to tie transition objectives to strategic analysis of telecommunications requirements and overall business and mission objectives as part of that effort. Three agencies (DOL, DOT, and SSA) also cited GSA requirements, in part, as the reason for not completing the second and third practice activities. DOL officials stated that the agency had not completed these activities, in part, because GSA set no such expectations.
A DOT official stated that tasks covering mission-critical systems and contingency plans were still being developed and that the agency had not provided detailed tasks within its transition plan because GSA did not require them. SSA’s telecommunications management division director offered a similar explanation. However, while the lack of awarded contracts constrained agencies’ ability to plan the transition in detail prior to August 2017, the limited time available to conduct the transition makes it critical that agencies conduct early planning with the information available, including information on previous transitions. In addition, agencies that do not document measurable objectives and clearly define transition tasks that take into account agency priorities and risks may find it difficult to provide those involved in the transition with clear expectations. Specifically, without measurable objectives, managers will lack information that could be used to track progress toward transition objectives and inform management decisions. Further, agencies that do not analyze risks relevant to the transition may encounter problems and delays during the process because they are not adequately prepared to mitigate such risks. GSA has identified lessons learned from previous telecommunications contract transitions, and has communicated a number of lessons to agencies through a series of plans and guidance. However, GSA did not address all of its lessons in its guidance, and several of the lessons were not communicated comprehensively. As a result, GSA made it more difficult for agencies to take advantage of the lessons. Comprehensive dissemination of lessons learned and agency planning guidance that aligns with those lessons would provide agencies with information needed to successfully plan for the complex transition effort that has already begun. The five agencies we reviewed had begun preparations for the transition to a new government-wide telecommunications contract.
However, none had fully adopted the transition planning practices we previously identified that can reduce the risk of unplanned delays. Several agencies stated that they are planning to apply many of the management processes outlined in the practices to their transition efforts later this year, often in conjunction with existing IT management processes. While agencies’ use of existing IT management processes can align with a number of the identified practices, delaying the implementation of the established planning practices to follow standard IT management timeframes can also reduce agencies’ ability to fully apply the practices within the limited time available to complete their transitions. We are making a total of 25 recommendations to six agencies: one to GSA, five to USDA, five to DOL, four to SEC, five to SSA, and five to DOT. The Administrator of General Services should disseminate the 16 agency-focused lessons learned that have not been fully incorporated in GSA guidance to the agencies involved in the current transition. (Recommendation 1) The Secretary of Agriculture should ensure that the Department’s Chief Information Officer verifies the completeness of its inventory of current telecommunications assets and services and establishes a process for ongoing maintenance of the inventory. (Recommendation 2) The Secretary of Agriculture should ensure that the Department’s Chief Information Officer completes efforts to identify future telecommunications needs and areas for optimization, identifies the costs and benefits of new technology, and aligns USDA’s approach with its long-term plans.
(Recommendation 3) The Secretary of Agriculture should ensure that the Department’s Chief Information Officer identifies transition-related roles and responsibilities related to the management of assets, human capital, and information security, and legal expertise; develops a transition communications plan; and uses configuration and change-management processes in USDA’s transition. (Recommendation 4) The Secretary of Agriculture should ensure that the Department’s Chief Information Officer documents the costs and benefits of transition investments, identifies staff resources needed for the remainder of the transition, and analyzes training needs for staff assisting with the transition. (Recommendation 5) The Secretary of Agriculture should ensure that the Department’s Chief Information Officer demonstrates that the Department’s transition goals and measures align with its mission, identifies transition risks related to critical systems and continuity of operations, and identifies mission-critical priorities in USDA’s transition timeline. (Recommendation 6) The Secretary of Labor should ensure that the Department’s Chief Information Officer verifies the completeness of DOL’s inventory of current telecommunications assets and services and establishes a process for ongoing maintenance of the inventory. (Recommendation 7) The Secretary of Labor should ensure that the Department’s Chief Information Officer identifies the agency’s future telecommunications needs, completes a strategic analysis of the agency’s telecommunications requirements, and incorporates the requirements into transition planning. 
(Recommendation 8) The Secretary of Labor should ensure that the Department’s Chief Information Officer identifies transition-related roles and responsibilities related to the management of assets, human capital, and information security, and legal expertise; develops a transition communications plan; and uses project, configuration, and change-management processes in DOL’s transition. (Recommendation 9) The Secretary of Labor should ensure that the Department’s Chief Information Officer identifies the resources needed for the full transition, develops justifications for the costs of changes to hardware and software, identifies staff resources needed for the remainder of the transition, and analyzes training needs for staff assisting with the transition. (Recommendation 10) The Secretary of Labor should ensure that the Department’s Chief Information Officer identifies transition risks related to information security, critical systems, and continuity of operations, and identifies mission-critical priorities in DOL’s transition timeline. (Recommendation 11) The Chairman of the Securities and Exchange Commission should ensure that the Commission’s Chief Information Officer identifies the agency’s future telecommunications needs, areas for optimization, and the costs and benefits of new technology; completes a strategic analysis of the commission’s telecommunications requirements; and incorporates the identified requirements into transition planning. (Recommendation 12) The Chairman of the Securities and Exchange Commission should ensure that the Commission’s Chief Information Officer identifies roles and responsibilities related to the management of assets and human capital and legal expertise for the transition; includes key local and regional officials in SEC’s transition communications plan; and completes efforts to use configuration and change management processes in the transition.
(Recommendation 13)

The Chairman of the Securities and Exchange Commission should ensure that the Commission’s Chief Information Officer identifies the resources needed for the full transition, justifies requests for transition resources, identifies staff resources needed for the full transition, and completes efforts to analyze training needs for staff assisting with the transition. (Recommendation 14)

The Chairman of the Securities and Exchange Commission should ensure that the Commission’s Chief Information Officer completes efforts to demonstrate that the commission’s transition goals and measures align with its mission, identifies transition risks related to critical systems and continuity of operations, and identifies mission-critical priorities in SEC’s transition timeline. (Recommendation 15)

The Commissioner of the Social Security Administration should ensure that the Administration’s Chief Information Officer verifies the completeness of SSA’s inventory of current telecommunications assets and services and establishes a process for ongoing maintenance of the inventory regarding services other than local and long-distance telecommunications. (Recommendation 16)

The Commissioner of the Social Security Administration should ensure that the Administration’s Chief Information Officer completes identification of the agency’s future telecommunications needs and aligns its approach with the agency’s enterprise architecture. (Recommendation 17)

The Commissioner of the Social Security Administration should ensure that the Administration’s Chief Information Officer uses configuration and change-management processes in its transition.
(Recommendation 18)

The Commissioner of the Social Security Administration should ensure that the Administration’s Chief Information Officer identifies the resources needed for the full transition, documents the costs and benefits of transition investments, identifies staff resources needed for the remainder of the transition, and analyzes training needs for all staff working on the transition. (Recommendation 19)

The Commissioner of the Social Security Administration should ensure that the Administration’s Chief Information Officer completes efforts to identify measures of success for the transition, identifies transition risks related to critical systems and continuity of operations, and identifies mission-critical priorities in SSA’s transition timeline. (Recommendation 20)

The Secretary of Transportation should ensure that the Department’s Chief Information Officer verifies the completeness of DOT’s inventory of current telecommunications assets and services and establishes a process for ongoing maintenance of the inventory. (Recommendation 21)

The Secretary of Transportation should ensure that the Department’s Chief Information Officer identifies the agency’s future telecommunications needs, areas for optimization, and costs and benefits of new technology; and completes efforts to align DOT’s approach with its long-term plans and enterprise architecture. (Recommendation 22)

The Secretary of Transportation should ensure that the Department’s Chief Information Officer identifies roles and responsibilities related to the management of assets and human capital and legal expertise for the transition; develops a transition communications plan; and fully uses configuration and change-management processes in DOT’s transition.
(Recommendation 23)

The Secretary of Transportation should ensure that the Department’s Chief Information Officer fully identifies the resources needed for the full transition, justifies requests for transition resources, identifies staff resources needed for the full transition, and fully analyzes training needs for staff assisting with the transition. (Recommendation 24)

The Secretary of Transportation should ensure that the Department’s Chief Information Officer fully demonstrates that DOT’s transition goals and measures align with its mission; completely identifies transition risks related to information security, critical systems, and continuity of operations; and fully identifies mission-critical priorities in the transition timeline. (Recommendation 25)

We provided a draft of this report to GSA, USDA, DOL, SEC, DOT, and SSA for comment. Four of the agencies (GSA, DOL, SEC, and SSA) provided written comments on the draft report, while two agencies (USDA and DOT) provided comments via email. In total, five agencies concurred with our recommendations directed to them. One agency agreed with two recommendations and disagreed wholly or in part with three other recommendations.

In written comments, GSA agreed with our recommendation that it disseminate the 16 agency-focused lessons learned that had not been incorporated in GSA guidance. The agency stated that it plans to revise its guidance to include all of its agency-focused lessons learned. The agency also stated that it believes it has fully implemented a recommendation we made in 2013 regarding applying lessons based on priority and available resources. We intend to follow up with GSA and seek supporting evidence to determine whether the recommendation has been fully implemented. GSA’s comments are reprinted in appendix III.
In written comments, DOL agreed with our five recommendations, noting that the department plans to develop policies governing how its components should maintain telecommunications inventories. The Department also stated that it plans to have in place a documented inventory process prior to services being awarded under the EIS contracts. DOL’s comments are reprinted in appendix IV.

In written comments, SEC concurred with our four recommendations. The agency stated that it plans to take several actions to address the recommendations, including ensuring that all requirements are reflected in its plans, as well as managing the transition according to project and configuration management practices. SEC’s comments are reprinted in appendix V.

In written comments, SSA agreed with two of our five recommendations to the agency. Specifically, SSA agreed with our recommendation on strategic analysis of telecommunications requirements, reporting that the agency intends to conduct an analysis of technologies and alternatives once a winning contractor bid is in place. Regarding a second recommendation—to identify transition resources—SSA agreed, but also stated that cost-benefit justifications would prove extremely difficult and that no further training is immediately necessary. We continue to believe that justifying funding requests is key to identifying the appropriate level of resources needed to conduct a transition. Also, with regard to training, SSA did not provide any evidence to show that it had analyzed its training needs. Without such information, the agency risks transition delays if it later identifies a need for training that cannot be provided quickly.

SSA partially disagreed with one recommendation—to identify measures of success and risks related to continuity of operations and critical systems. Specifically, SSA agreed to use several critical milestones to monitor performance, but disagreed with the need to identify the specified risks.
The agency believes those risks were already identified in one of its planning documents. However, we reviewed the planning documents and did not find any specific discussions of continuity of operations or critical systems, which are essential to assuring that the transition does not have a negative impact on the agency’s ability to complete its mission. We, therefore, believe that the recommendation is appropriate and disagree with SSA’s position.

SSA disagreed with our remaining two recommendations. Specifically, it disagreed with the recommendation to implement telecommunications inventory practices. SSA indicated that its inventory was complete and that the inventory described its process for maintaining services procured through GSA’s contracts. However, we reviewed the information SSA provided and concluded that it did not include complete information on the sites where each service was provided, limiting the agency’s ability to plan for transition tasks requiring the physical presence of staff. Further, while the agency had procedures to update inventory information on local and long distance services, it did not have similar procedures to update information on other services ordered from the GSA contracts, such as wireless (cellular), satellite, fixed data, and collaboration services. We, therefore, continue to believe that it is important for SSA to complete these steps.

Additionally, SSA disagreed with our recommendation to identify legal expertise and utilize a structured transition management approach. The agency indicated that it had previously identified legal expertise in its transition plan. Although legal expertise was not discussed in the plan that the agency initially provided for our review, SSA provided an updated plan subsequent to its comments that included this information. Thus, we revised our report to reflect that the agency had completed this activity. We also deleted the reference to this activity in our recommendation.
Further, in commenting on the second activity discussed in this recommendation, SSA stated that the telecommunications transition was part of a broader modernization effort that was subject to agency guidance, including the use of configuration and change management. However, the agency did not provide evidence to substantiate this position. As a result, we stand by this aspect of our recommendation. SSA’s comments are reprinted in appendix VI.

In email comments, USDA’s Senior Advisor stated that the Department agreed with our five recommendations. In email comments, an official in DOT’s Office of the Secretary stated that the Department concurred with our five recommendations. Finally, we received technical comments from GSA and USDA, which we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees, the Administrator of GSA, the Chairman of the SEC, the Commissioner of SSA, the Secretary of Agriculture, the Secretary of Labor, the Secretary of Transportation, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. Should you or your staffs have any questions on information discussed in this report, please contact Carol Harris at (202) 512-4456 or Harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII.

Our objectives were to determine the extent to which (1) GSA’s plans and guidance to agencies for transitioning to EIS incorporate lessons learned from prior transitions, and (2) selected agencies are following established planning practices for their transitions.
To determine the extent to which GSA’s plans and guidance to agencies for transitioning to EIS incorporate lessons learned from prior transitions, we first obtained and reviewed GSA’s documented lessons learned for the FTS2000 to FTS2001 and FTS2001 to Networx transitions. Second, we identified (with input from GSA) those lessons learned that were specific to agencies. Third, we reviewed transition plans, guidance, and other EIS documentation developed by GSA, including presentations, meeting minutes, and projected timelines provided to agencies. Fourth, we assessed each of the planning and guidance sources that GSA disseminated to agencies to determine how completely the lessons learned were addressed. We did this by comparing the key concepts identified in each lesson learned to the concepts described in the guidance. Based on our assessment, we classified the status of a lesson learned as “fully addressed” if the lesson learned appeared in at least one planning and guidance source or if all of the concepts described in a practice were found collectively in multiple guidance sources; “partially addressed” if part of the lesson learned appeared in at least one document; or “not addressed” if the lesson learned did not appear in any of the planning and guidance sources.

To determine the extent to which federal agencies are following established transition planning practices, we selected five agencies for review. Using Networx billing data provided by GSA, we identified total charges for each service and each of 96 agencies for fiscal year 2015. We first identified the four services with the most fiscal year 2015 spending. We then selected agencies representing a variety of (1) agency sizes (two large agencies, two medium agencies, and a small agency); (2) agency structures (e.g., two agencies with components vs.
three agencies without); and (3) agency charges for the four most commonly identified services: Voice Services, Toll Free Service, Private Line Services, and Combined (Local and Long Distance), at $20 million for the two large agencies, $3 million for the two medium agencies, and $557,000 for the small agency. The resulting five departments and agencies selected for review were the U.S. Department of Agriculture; U.S. Social Security Administration; Department of Transportation; Department of Labor; and U.S. Securities and Exchange Commission. Because we did not review a statistically representative sample of federal agencies, we cannot conclude that our results represent the entire federal government’s level of preparation. However, the five cases studied illustrate various challenges that these agencies have faced in planning for the transition to EIS.

To determine the extent to which the selected agencies have made adequate preparations for their upcoming transitions, we obtained and reviewed agency documentation, including strategic plans, telecommunications inventories, and transition-related documentation, and interviewed agency officials. We then assessed this information against each of the activities within the five transition planning practices identified in our prior report on agency transition planning. Based on our assessment, we classified the status of agency transition planning efforts to address each sound planning practice activity as “fully implemented” if the agency had fully implemented the practice activity, or “not implemented” if the agency did not demonstrate that it had taken any actions consistent with the activity. We assigned a status of “partially implemented” if the agency had taken some, but not all, of the actions included in an activity; had begun the processes to fully implement the activity; or had approved plans to fully implement the activity at a later time.
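The three-way status classification described above can be expressed as a small decision procedure. The sketch below is our own illustration, not a tool GAO used; in practice the classification rested on analyst judgment rather than a mechanical rule:

```python
def classify_practice(actions_total, actions_taken, in_progress=False, planned=False):
    """Classify an agency's status on one planning-practice activity.

    Mirrors the rules described above: "fully implemented" if every
    action in the activity was taken; "partially implemented" if some
    (but not all) actions were taken, or work is under way, or plans
    are approved; "not implemented" otherwise. (Illustrative only.)
    """
    if actions_total > 0 and actions_taken >= actions_total:
        return "fully implemented"
    if actions_taken > 0 or in_progress or planned:
        return "partially implemented"
    return "not implemented"

# An agency that completed 2 of 5 actions for an activity:
print(classify_practice(5, 2))  # partially implemented
```

The same three-tier logic, with "addressed" in place of "implemented," applies to our assessment of GSA's coverage of the lessons learned.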
As part of our review of agency efforts to establish telecommunications inventories, we gathered copies of the inventories and assessed their reliability. Specifically, we asked agencies for documentation of their quality control procedures and practices for ensuring the inventories’ accuracy. We also interviewed knowledgeable agency officials about the systems and processes in place to collect and verify the data. We determined that the inventory information provided by the Securities and Exchange Commission was sufficient for our purposes, but the information provided by the other agencies was not, due to the lack of documented procedures to ensure the completeness and accuracy of the data. This conclusion was considered during our assessment of their efforts to apply the planning practice related to inventories.

We conducted this performance audit from January 2016 through August 2017, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Based on experiences during previous telecommunications transitions, the General Services Administration (GSA) identified 96 lessons learned. Specifically, it identified 28 lessons learned documented during the transition to FTS2001 and 68 lessons learned documented during the transition to Networx. The combined 96 lessons learned relate to transition planning, execution, and monitoring, as well as regional services, reporting, and risk management, among other areas. Of the total 96 lessons, GSA identified 35 that specifically relate to actions that federal agencies should take.
Table 7 describes the 35 agency-focused lessons learned identified by GSA during previous telecommunications transitions and the extent to which each was addressed in GSA’s EIS transition plans and guidance.

In addition to the contact named above, James R. Sweetman, Jr. (Assistant Director), Kendrick Johnson (Analyst in Charge), Mathew Bader, Paris Hawkins, Sukhjoot Singh, and Priscilla Smith made key contributions to this report.
GSA is responsible for contracts providing telecommunications services for federal agencies. Transitions involving previous contracts faced significant delays, resulting in increased costs. Because GSA's current telecommunications program, Networx, expires in May 2020, planning for the next transition has begun. GAO was asked to review preparations for the transition. This report addresses the extent to which (1) GSA's plans and guidance to agencies incorporate lessons learned from prior transitions, and (2) agencies are following established planning practices in their transitions. In performing this work, GAO analyzed GSA lessons learned and transition guidance. GAO also selected five agencies—USDA, DOL, DOT, SEC, and SSA—based on size, structure, and Networx spending. GAO then reviewed the agencies' documentation to determine how they followed five planning practices identified in previous GAO reports.

The General Services Administration's (GSA's) transition guidance to agencies addressed roughly half of its previously identified lessons learned. GSA identified 35 lessons learned from previous telecommunications contract transitions that describe actions agencies should take. In transition guidance released to agencies, GSA fully addressed 17 of the 35 lessons. Two lessons from previous transitions were not appropriate for the current transition. GSA partially addressed an additional nine lessons. Seven lessons were not addressed at all (see figure). For example, GSA's guidance did not address the previous lesson that agencies should not assume that a transition to a new contract with the same vendor will be easier than a change in vendors. By not including all lessons learned in its plans and guidance to agencies, GSA limits agencies' ability to plan for actions that will need to be taken later in the transition.
As a result, agencies face an increased risk that they could repeat prior mistakes, including those that could result in schedule delays or unnecessary costs.

Selected agencies—the Departments of Agriculture (USDA), Labor (DOL), and Transportation (DOT); the Securities and Exchange Commission (SEC); and the Social Security Administration (SSA)—have yet to fully apply most of the five planning practices previously identified by GAO as key to a successful telecommunications transition. The practices encompass (1) developing inventories, (2) incorporating strategic needs into transition planning, (3) developing a structured transition-management approach, (4) identifying resources necessary for the transition, and (5) establishing transition processes and measures of success. SEC fully implemented one practice, partially implemented three practices, and did not implement another. The other four agencies partially implemented each of the five practices. Agencies provided various reasons for not following planning practices, including uncertainty due to delays in GSA awarding the new contracts, plans to implement practices later as part of established agency procedures for managing IT projects, and a lack of direction and contractor assistance from GSA. If agencies do not fully implement the practices in the next transition, they will be more likely to experience the kinds of delays and increased costs that occurred in previous transitions.

GAO recommends that GSA disseminate guidance that includes all agency-directed lessons learned. In addition, GAO recommends that USDA, DOL, DOT, SEC, and SSA complete adoption of the planning practices to avoid schedule delays and unnecessary costs. Five agencies agreed with all of the recommendations directed to them. SSA agreed with two recommendations, partially disagreed with one, disagreed with two, and provided updated information.
GAO stands by the recommendations, as discussed in the report, and revised the report based on SSA's new information.
In fiscal year 1998, the Congress appropriated $25 billion in discretionary budget authority for the Department of Housing and Urban Development’s (HUD) programs, which represented a 30-percent increase over the fiscal year 1997 level of $19.3 billion. HUD provides rental housing assistance (about $21 billion in fiscal year 1996) that enables about 4.7 million low-income households to obtain safe, decent, and affordable housing. HUD assists about two-thirds of these households through its Section 8 housing assistance program. HUD’s estimate of new budget authority to renew expiring Section 8 tenant-based and project-based contracts increases from $3.6 billion in fiscal year 1997 to $18.1 billion in fiscal year 2002.

For the tenant-based portion of the Section 8 program, HUD provides housing subsidies through nearly 4,300 contracts with local housing agencies and state housing finance agencies. Housing agencies receive a fee from HUD for administering the tenant-based program and working with households to determine eligibility, verify income, and ensure that units meet quality standards. Because a significant number of the Section 8 tenant-based contracts will expire over the next 5 years, the estimated cost to renew contracts in the tenant-based program alone will rise from about $2.5 billion in fiscal year 1997 to $10.5 billion in fiscal year 2002. To identify unexpended funding that could offset this growing renewal cost, HUD recently undertook aggressive efforts to reconcile its records in the tenant-based program.

Tenant-based assistance is an important part of the federal government’s commitment to providing safe, decent, and affordable housing to low-income people. In fiscal year 1996, HUD spent about $7 billion to provide tenant-based rental assistance. HUD’s tenant-based assisted housing programs—the Section 8 certificate and voucher programs—provide direct rental assistance to about 1.4 million households.
These programs are designed to allow low-income households to live in decent and affordable private rental housing of their choice, as long as the units meet HUD’s rent and quality standards. Generally, under the certificate program, an assisted household pays 30 percent of its adjusted income for rent. In contrast, households with vouchers may elect to pay more or less than 30 percent of their income for rent, depending on the rent charged for the unit in which they elect to live. In turn, the voucher program assists the household by making a subsidy payment to the landlord equal to the difference between a payment standard for the housing unit, based on the fair market rent for a unit of a similar size in the area, and 30 percent of the household’s adjusted income.

To operate the certificate and voucher programs, HUD enters into contracts with local and state housing agencies, including public housing agencies. These housing agencies certify applicants for eligibility, inspect units found by the tenant for compliance with housing standards, and verify that the lease terms meet HUD’s requirements. In addition, the housing agencies pay the rent subsidies to owners of private rental housing for the assisted households. HUD also pays the housing agency a statutorily determined administrative fee for managing the program.

When the Section 8 program began in 1974, HUD entered into long-term contracts with housing agencies to provide tenant-based assistance. Initially, contract terms for tenant-based assistance were for 15 years. As the federal budget deficit grew larger, HUD reduced contract terms to reduce the amount of budget authority it needed to set aside to fully fund the contracts over their whole terms. HUD subsequently shortened the contract terms to 5 years and, later, to 3 years.
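The certificate and voucher subsidy rules described earlier reduce to simple arithmetic. The following sketch illustrates the two calculations with hypothetical figures; the function names and dollar amounts are ours, not HUD's:

```python
def certificate_tenant_share(adjusted_income_monthly):
    # Certificate program: the household generally pays 30 percent
    # of its adjusted income toward rent.
    return 0.30 * adjusted_income_monthly

def voucher_subsidy(adjusted_income_monthly, payment_standard):
    # Voucher program: the subsidy equals the payment standard (based
    # on local fair market rent) minus 30 percent of adjusted income.
    return max(payment_standard - 0.30 * adjusted_income_monthly, 0)

# Hypothetical household: $1,000/month adjusted income, $700 payment standard
subsidy = voucher_subsidy(1000, 700)        # 700 - 300 = 400
tenant_pays_for_650_unit = 650 - subsidy    # 250, less than the 30% share
print(subsidy, tenant_pays_for_650_unit)
```

For this hypothetical household, a $650 unit costs the tenant $250 a month under a voucher, less than the $300 (30 percent of income) it would pay under a certificate; a unit renting above the payment standard would cost it more.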
Finally, beginning in fiscal year 1995, Section 8 tenant-based contracts were written for 1-year terms, thus minimizing the amount of budget authority HUD needs to fund contract renewals. HUD’s actions have resulted in contract lengths ranging from as long as 15 years to as short as 1 year.

HUD estimates that the cost of Section 8 contract renewals for the tenant-based program alone will increase from $2.5 billion in fiscal year 1997 to $10.5 billion in fiscal year 2002—more than a fourfold increase in the amount of budget authority needed to fund Section 8 tenant-based contract renewals. The increase in budget authority is attributable to the large growth in the number of expiring Section 8 tenant-based contracts. This growth reflects both the first-time expiration of long-term contracts and the repeated renewal of the shorter-term contracts begun in the 1990s. To offset the budget authority requirements for Section 8 contract renewals, the Congress encouraged HUD in 1995 to begin using available unexpended budget authority that had accumulated over the years in the housing agencies’ program reserve accounts to extend the funding of expiring Section 8 contracts.

The Congress provides budget authority for HUD’s Section 8 tenant-based assistance to (1) renew expiring contracts to maintain existing subsidies (called contract renewals), (2) create new contracts to increase the number of assisted households (called incremental assistance), and (3) provide additional funds for existing contracts when the remaining contract funds are insufficient to pay subsidies over the remaining life of the contract (called contract amendments). Renewal assistance represents the vast majority of Section 8 tenant-based assistance. HUD and the Congress have worked together during recent years to renew every expiring contract.
However, renewing these contracts will require a sharp increase in budget authority over the next few years because the number of expiring contracts is increasing dramatically. The sharp growth in expirations has two causes. First, the number of contracts expiring for the first time will increase sharply in the coming years. These consist of 15-year contracts issued in the late 1970s and early 1980s and short-term (5 years or less) tenant-based contracts issued since the early 1990s. Second, HUD must renew an increasing number of contracts that it has renewed at least once previously, a circumstance that has begun to occur more frequently as contract terms have grown shorter. Renewing expiring contracts will require a sharp increase in budget authority over the next several years, as shown by figure 1.1.

In 1995, the Congress encouraged HUD to use available unexpended funds in the Section 8 program to offset the budget authority requirements for contract renewals. Budget authority appropriated for Section 8 tenant-based contract renewals is “no-year” money and does not expire if it is not expended. These unexpended funds have been obligated to the housing agencies but will not be needed to meet planned requirements; the funds are, in effect, credited to housing agencies’ program reserve accounts. In fiscal year 1995, HUD began to draw on unused budget authority from previous years to extend the terms of expiring contracts.

To identify unexpended balances to help offset the cost of Section 8 contract renewals, HUD began in February 1996 an extensive examination, called “reconciliation,” of the Section 8 tenant-based program’s reserve accounts at all housing agencies. This examination was not completed until 1997. In addition, in November 1996 HUD identified approximately $1.6 billion that had not been obligated to the tenant-based program—and therefore was also unexpended.
HUD called this amount “carryover” and used it to offset the Department’s fiscal year 1998 contract renewal needs. HUD identified this carryover too late in the budget process to affect its fiscal year 1997 budget request. Therefore, HUD reflected it in the Department’s fiscal year 1998 budget request by reducing the estimate of Section 8 tenant-based contract renewals by $1.6 billion.

At the request of the Chairman of the Subcommittee on VA, HUD, and Independent Agencies, House Committee on Appropriations, we reviewed HUD’s fiscal year 1998 budget request for Section 8 contract renewal funding. In February 1997, we briefed the Subcommittee and provided testimony for the Subcommittee’s hearing on March 18, 1997. We informed the Congress that HUD had a significant amount of unexpended funds in the Section 8 tenant-based program and that, once HUD completed its examination of the housing agencies’ accounts, the likely total amount of available unexpended funds would far exceed the $1.6 billion that HUD disclosed in its fiscal year 1998 budget request. In chapter 2, we discuss HUD’s actions to further identify the unexpended budget authority in the tenant-based program.

The accumulation of unexpended funds in housing agencies’ reserve accounts resulted from HUD’s method of estimating budgets since the beginning of the program. According to HUD officials, at the outset of the Section 8 tenant-based program, HUD intentionally established program reserves during the early years of a multiyear housing assistance contract to help fund the program in the later years. This practice was, in part, required by law. To build up reserves, HUD based its estimate of the amount of budget authority needed to fund the program on two conservative assumptions. First, HUD assumed that tenants would make a contribution of zero toward their rent.
Therefore, as tenants had income and contributed to their rent, the amount of budget authority the housing agency drew down from the program was less than budgeted, and program reserves began to grow. Second, HUD assumed that all certificates were in constant use, even though the leasing of housing units would necessarily take some time to accomplish while prospective tenants shopped for housing and the housing agencies determined the tenants’ eligibility and ensured that the selected units met quality standards. Thus, reserves accumulated during the time housing units were not leased because housing agencies did not make housing subsidy payments for unleased units. As a result, the larger housing agencies, especially those receiving new certificates every year, developed significant reserves.

While HUD began to consider tenants’ income in its contract renewal budget requests for fiscal year 1991, it continued to assume that all units were fully leased. For instance, under a current statutory requirement, once a household discontinues its need for and use of a certificate or voucher and returns it to the housing agency, the housing agency must wait 3 months before reissuing that certificate or voucher to another eligible household. However, HUD does not factor the effect of this requirement into its budget estimates for Section 8 tenant-based assistance. Therefore, during this 3-month period, reserves accumulate in the housing agencies’ reserve accounts because the agencies do not make housing assistance payments to landlords for housing units not under lease.

The build-up of reserves in HUD’s Section 8 tenant-based program is indicative of long-standing problems with HUD’s budget estimating process. According to HUD’s Office of Inspector General (OIG), HUD, for years, has been unable to estimate accurately the budget authority it needs for Section 8 contract renewals and amendments.
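The reserve buildup produced by the two conservative budgeting assumptions described above can be illustrated with a rough, back-of-the-envelope calculation. This is our own simplified sketch with hypothetical figures, not HUD's budgeting method:

```python
def annual_reserve_growth(payment_standard, tenant_income_monthly, months_leased):
    """Estimate one year's reserve accumulation for a single certificate.

    The budgeted amount follows HUD's early assumptions: zero tenant
    contribution and a unit leased all 12 months. Actual outlays reflect
    the tenant paying 30 percent of adjusted income and no subsidy
    flowing while the unit is unleased. (Hypothetical simplification.)
    """
    budgeted = payment_standard * 12
    actual = (payment_standard - 0.30 * tenant_income_monthly) * months_leased
    return budgeted - actual

# Hypothetical certificate: $600 payment standard, $800/month tenant
# income, leased 9 of 12 months (e.g., including a 3-month reissuance wait)
print(annual_reserve_growth(600, 800, 9))  # 7,200 budgeted - 3,240 actual = 3,960
```

Under the budgeting assumptions themselves (zero tenant income, all 12 months leased) the function returns 0; any tenant income or vacancy adds to the reserve, which is how the larger agencies accumulated significant balances over many contract years.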
Historically, HUD’s accounting and information systems did not contain reliable, complete, or accurate data on Section 8 contracts. This situation occurred because of poor systems design and serious deficiencies in the controls, policies, and procedures associated with the input and maintenance of Section 8 contract and accounting data. As a result of these problems, HUD had to continually revise its Section 8 estimates and often request additional funding. While HUD has taken action to correct these problems, budget estimating problems still remain. In response to congressional concerns about HUD’s budget estimating problems, HUD and the Office of Management and Budget (OMB) formed a joint team to evaluate HUD’s fiscal year 1992 contract renewal and cost amendment estimates and to find ways to improve the process in the future. The team determined that HUD was unable to accurately estimate Section 8 contract renewal and amendment needs because HUD’s data systems were inadequate for the timely retrieval of accurate information. Furthermore, the team reported that estimating the number of expiring Section 8 contracts—and the budget authority required to renew them—had been a recurring problem for HUD since 1989. For example, HUD had to re-estimate its contract renewal needs for fiscal years 1990 and 1991 because of inadequate financial management systems and inaccurate forecasting. Specifically, HUD’s financial management systems did not provide summary information to determine the number of expiring Section 8 contracts. In addition, HUD’s cost estimates for contract renewal were based on assumptions about average costs that proved to be inaccurate. 
In 1992, HUD’s OIG reported that the Department continued to experience problems in submitting reliable Section 8 budget requests to the Congress. Specifically, the OIG concluded that serious deficiencies existed in (1) the controls and procedures in HUD’s Section 8 accounting and budgeting systems and (2) the input and maintenance of contract and accounting data in the Department’s information systems. As a result, HUD could not assure the Congress that its Section 8 budget requests for fiscal years 1992 and 1993 were reasonably accurate. Because HUD’s management relied on the Department’s inadequate Section 8 financial management systems to develop the budget requests and because the estimate of the cost to fund expiring contracts turned out to be inaccurate, HUD had to increase its Section 8 contract renewal estimates for fiscal years 1992 and 1993. In addition, the OIG reported that HUD’s original estimates for Section 8 amendments for fiscal years 1992 and 1993 may have been materially overstated. However, the OIG believed that the accuracy of the Department’s tenant-based contract renewal estimate for fiscal year 1993 appeared improved over the fiscal year 1992 estimate. To determine HUD’s progress in improving its Section 8 budgeting systems and processes, HUD’s OIG conducted a follow-up audit in 1995. The OIG found that HUD’s program offices had developed and implemented interim budgeting procedures that had improved the Department’s ability to formulate Section 8 contract renewal budget estimates. Nevertheless, the OIG found that HUD continued to experience problems developing accurate and reliable Section 8 contract renewal and amendment estimates. For example, the OIG found that because of a breakdown in the budgeting process, the Department’s budget office did not use more reliable estimates developed by the program offices for the Department’s initial fiscal year 1996 budget submission to OMB. 
The OIG also concluded that HUD’s fiscal year 1994 amendment estimate was materially overstated and believed that the estimates for fiscal years 1995 and 1996 also appeared to be overstated. In response, the Assistant Secretary for Public and Indian Housing pointed out that the tenant-based program had included a “cushion” in its amendment estimates to cover shortfalls in budget authority that could not be estimated by the Department’s systems. To help correct the deficiencies with its accounting and budgeting for Section 8 contracts, HUD implemented a new Section 8 tenant-based information system in fiscal year 1995. Besides containing primary information for estimating contract renewal needs, the system contains the actual cost incurred by each housing agency for providing rental assistance. The system also provided the amount of unspent budget authority credited to each housing agency at the end of the agency’s fiscal year. The Chairman of the Subcommittee on VA, HUD, and Independent Agencies, House Committee on Appropriations, has expressed concern about HUD’s financial management of the Section 8 tenant-based program—specifically, HUD’s lack of timeliness and precision in identifying the magnitude of unspent budget authority in the Section 8 tenant-based program. As a result, the Chairman asked us to review HUD’s financial management of the Section 8 tenant-based program. In addition, the 1997 Emergency Supplemental Appropriations Act (P.L. 105-18) directed us to determine whether HUD’s systems for budgeting and accounting for Section 8 rental assistance ensure that unexpended funds do not reach unreasonable levels and that obligations are spent in a timely manner. Our objectives for this report, therefore, were to evaluate the accuracy of HUD’s estimate of its unexpended funds in the Section 8 tenant-based program and the reasonableness of this amount and assess HUD’s budget formulation process for the Section 8 tenant-based program. 
To evaluate the accuracy of HUD’s estimate of its unexpended funds in the Section 8 tenant-based program and the reasonableness of this amount, we reviewed documentation and discussed HUD’s examination of unexpended balances in housing agencies’ reserve accounts with HUD officials. We reviewed and discussed the results of Price Waterhouse LLP’s independent evaluation of available unexpended funds in the tenant-based program with officials from Price Waterhouse LLP and with HUD officials in the Offices of Public and Indian Housing, the Chief Financial Officer, and Inspector General. We contacted national associations representing housing agencies and discussed the need for retaining unexpended funds in housing agencies’ program reserve accounts. These organizations were the Council of Large Public Housing Authorities, the National Association for Housing and Redevelopment Officials, and the Public Housing Authority Directors Association. To assess HUD’s budget formulation process for the Section 8 tenant-based program, we reviewed HUD’s process for developing the contract renewal estimates and evaluated supporting documentation for HUD’s fiscal year 1998 budget request. We also reviewed federal laws and regulations, OMB policies, and HUD’s guidance. We analyzed the impact that HUD’s budgeting processes had on its fiscal year 1998 budget submission to the Congress. In addition, we discussed programmatic, budgeting, and financial management issues with HUD officials from the Offices of Public and Indian Housing, Budget, the Chief Financial Officer, Policy Development and Research, and Inspector General. While we did not systematically verify the accuracy of HUD’s data or conduct a reliability assessment of HUD’s databases as part of this assignment, we relied upon the work of HUD’s OIG and our review of the OIG’s audit of the consolidated financial statement, which shows that the information in HUD’s tenant-based information system is generally reliable. 
As part of its audit of HUD’s fiscal year 1996 financial statements, HUD’s OIG analyzed a statistical sample of HUD’s contracts. For those contracts that were Section 8 tenant-based contracts, the OIG traced the information in the original contract files first to HUD’s Department-wide accounting system and then to HUD’s Section 8 tenant-based information system. The OIG concluded that, for the sample reviewed, the amounts reserved and obligated in the Section 8 tenant-based information system were correct. As part of our audit of the federal government’s consolidated financial statement, we selected a subset of this sample and performed similar tests at two field offices. On the basis of these tests, we concurred with the OIG’s findings. We provided a draft of this report to HUD for review and comment, and we address HUD’s comments at the end of each applicable chapter. We performed our work from May 1997 through December 1997 in accordance with generally accepted government auditing standards. Over the more than 20 years that HUD has provided housing assistance through the Section 8 tenant-based program, approximately $9.9 billion of budget authority excess to program needs has accumulated in housing agencies’ reserve accounts. This is funding that housing agencies received under contracts with HUD but did not expend because the funding was not needed as planned to make housing assistance payments to landlords on behalf of low-income families. After HUD reported this large unexpended balance, the Congress rescinded $4.2 billion, and after other adjustments of about $2.2 billion, the current balance is about $3.5 billion, which remains in a congressionally established Section 8 Reserve Preservation Account. To identify this programwide unexpended balance, HUD conducted a financial data reconciliation of all of its tenant-based housing assistance contracts in 1996 and 1997. 
Until completing the reconciliation process and making recent improvements to its information system, the Department could not accurately report its excess balances in the tenant-based program. However, with improved systems and better data, HUD has the opportunity to report in more detail its unexpended Section 8 funding and the potential availability of this funding to offset needs for new budget authority or for other uses. In March 1997, HUD completed an extensive accounting reconciliation and data verification process that it had begun in February 1996. With the results of the reconciliation, HUD updated its Section 8 tenant-based information system and subsequently determined that the unexpended budget authority in the tenant-based program was $20.7 billion and that $9.9 billion of that amount was not needed to meet current program needs. Because this $9.9 billion of excess budget authority was not needed to fund current obligations, it therefore was available to meet future Section 8 or other needs. HUD recaptured $7.7 billion of this excess balance and retained the difference of $2.2 billion to cover contingencies and to account for future transactions. The $9.9 billion that had accumulated as excess exceeded the $7.4 billion that housing agencies expended on payments to landlords during fiscal year 1996. Figure 2.1 provides a chronology of the actions taken by the Congress and HUD to identify and control Section 8 tenant-based funding. In May 1997, HUD estimated that the amount of unexpended budget authority that exceeded the amount needed to meet contract requirements in the tenant-based program was $9.9 billion. This was HUD’s second attempt to estimate this figure; the Department revised its initial estimate after an independent accounting firm determined that critical data used could not be verified because they were maintained manually at HUD’s field offices. HUD’s revised estimate of $9.9 billion has been verified by an independent accounting firm. 
In contrast to the first estimate, HUD calculated its revised estimate by using data exclusively from its Section 8 tenant-based information system to compare total unexpended budget authority with the housing assistance requirements for that budget authority. By subtracting projected program requirements of $10.2 billion and other adjustments of $0.6 billion from the total unexpended budget authority of $20.7 billion, HUD concluded that the amount of unexpended budget authority that was not needed to meet existing housing assistance needs at the beginning of fiscal year 1998 would be $9.9 billion, as shown in table 2.1. Price Waterhouse LLP assisted HUD in evaluating its revised estimate and determined the estimate to be accurate. During its work, Price Waterhouse LLP performed tests on a statistical sample of 158 housing agencies to confirm the accuracy of HUD’s $9.9 billion estimate of excess unexpended budget authority. For this sample, the accounting firm compared information in HUD’s tenant-based information system with information in the housing agencies’ most recent year-end settlement statements. By extrapolating the results from the test sample to the information system as a whole, Price Waterhouse LLP found that the totals differed from HUD’s by 5 percent or less and on that basis concluded that HUD’s estimate of $9.9 billion was accurate. In the June 1997 Emergency Supplemental Appropriations Act, the Congress directed HUD to recapture unexpended budget authority that was not needed to meet the current obligations of the Section 8 tenant-based program. In the act, the Congress also established the Section 8 Reserve Preservation Account as an accounting repository for recaptured excess budget authority. In response to this direction, HUD recaptured $7.7 billion of the excess budget authority and placed it in the Preservation Account. 
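The derivation of the $9.9 billion figure reduces to a simple subtraction, which can be checked directly (amounts in billions of dollars, as reported in table 2.1):

```python
# HUD's revised May 1997 estimate of excess unexpended budget authority,
# reconstructed from the figures reported above (billions of dollars).
total_unexpended = 20.7       # total unexpended budget authority
program_requirements = 10.2   # projected housing assistance requirements
other_adjustments = 0.6       # other adjustments

excess = total_unexpended - program_requirements - other_adjustments
print(round(excess, 1))  # 9.9
```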
Of the $2.2 billion not recaptured, about $1.2 billion was left in the participating housing agencies’ accounts as a reserve for contingencies equal to about 2 months of assisted housing payments to landlords. The remaining $1 billion was not recaptured because it represented amounts that had not yet been credited to housing agencies’ reserve accounts at the time of the recapture. As shown in table 2.2, after two congressional rescissions totaling $4.2 billion, GAO has calculated on the basis of data obtained at the end of fiscal year 1997 that about $3.5 billion remains in the Reserve Preservation Account. (However, more recent financial information maintained by HUD shows that this balance may be closer to $3.7 billion.) While a reserve for contingencies is prudent, it is not clear that a reserve of $1.2 billion is reasonable and necessary. A report from HUD’s tenant-based information system shows that, in fiscal year 1996, housing agencies used $353 million in excess budget authority to cover contingencies, far less than the amount that HUD has reserved for this purpose. Moreover, during fiscal year 1996, an additional $1.4 billion in excess budget authority accrued. HUD plans to adjust its reserve level after it examines in more detail housing agencies’ actual use of available unexpended budget authority in fiscal years 1996 and 1997. However, given housing agencies’ experience in fiscal year 1996, much less than $1 billion likely will be needed to meet unanticipated costs during a 1-year period. April 1997 was the first time that HUD identified and reported to the Congress the excess unexpended budget authority associated with the Section 8 tenant-based program. Until HUD’s recent improvements to its information system, such reporting could not be done accurately. 
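The disposition of the $9.9 billion can likewise be traced arithmetically from the amounts reported above (billions of dollars):

```python
# Tracing the disposition of the excess budget authority (billions of dollars),
# using the figures reported in the text and table 2.2.
excess = 9.9          # excess unexpended budget authority identified by HUD
recaptured = 7.7      # recaptured into the Reserve Preservation Account
retained = excess - recaptured  # left in agency accounts or not yet credited

rescissions = 4.2     # two congressional rescissions from the account
remaining = recaptured - rescissions

print(round(retained, 1), round(remaining, 1))  # 2.2 3.5
```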
However, better data and improved systems now offer HUD the opportunity to use its (1) budget justification materials and (2) financial statements as a means to report the status of unexpended funds and their availability to offset needs for new budget authority or for other uses. HUD is not required to and does not currently report in its annual budget justifications the aggregate amount of excess unexpended budget authority credited to housing agencies’ reserve accounts. By doing so, however, the Department could ensure that the Congress is informed about the funding on hand before appropriating new budget authority. Moreover, according to OMB’s guidance on budget formulation, agencies should consider available funding on hand before requesting new funding. A second means for HUD to improve its reporting of excess unexpended budget authority in the Section 8 tenant-based program is through its financial statements. HUD’s consolidated financial statements comply with the federal accounting requirement to disclose unexpended budget authority by major budget account—the entire Section 8 program, for example. However, the statements do not currently show the amount of unexpended budget authority at the level of the individual tenant-based and project-based Section 8 programs or whether unexpended budget authority is needed to meet program requirements or is available for other purposes. The federal accounting standards established by the Financial Accounting Standards Advisory Board require that agencies disclose the status of budgetary resources, including the amount obligated. The standards do not, however, require agencies to disclose such information below the major budget account level. Each agency should disclose the information that is most useful to the users of its financial statements. In a note to its fiscal year 1996 consolidated financial statements, HUD disclosed unexpended appropriations by major program type. 
The note explained that unexpended appropriations include obligated, committed, and reserved as well as excess funds. It further said that HUD had unexpended appropriations of $43 billion in the Section 8 program (tenant-based and project-based) at the end of fiscal year 1996. While the note fulfills the advisory board’s requirement to report on the status of budgetary resources, the note does not identify the portion of the $43 billion attributed to the two Section 8 assisted housing programs or the amount in excess of the programs’ needs. By reporting excess budget authority in the two programs in its consolidated financial statements, HUD would instill greater confidence in the accuracy of these balances because they also would be reviewed as part of the annual consolidated financial statement audit required by the Chief Financial Officers Act of 1990. Moreover, clearly identifying the existence and amount of excess unexpended budget authority is important if the Congress is to have confidence in HUD’s capacity to effectively manage the funding provided for the Section 8 tenant-based program. We believe that to adequately address economic contingencies—such as rising rental rates or falling tenant incomes—HUD should maintain a reasonable level of excess budget authority; however, excess budget authority that exceeds a full year of housing assistance payments is excessive. To ensure that excess unexpended budget authority does not reach unreasonable levels, HUD would need to annually review each tenant-based housing assistance contract it has with housing agencies with the intent of recapturing amounts above the level prudently needed to cover the unexpected but potential costs of administering the contract. 
Furthermore, now that HUD has corrected the data in its tenant-based information system, the Department has several means—including its financial statements and budget submissions—to keep the Congress better informed in the future of the amount of excess unexpended budget authority in the Section 8 tenant-based program. To improve HUD’s fiscal responsibility to the Section 8 program and to ensure that the Congress is adequately informed about the amount of excess unexpended budget authority at HUD in the future, we recommend that the Secretary of HUD direct the Office of the Chief Financial Officer to (1) modify the agency’s consolidated financial statements so that they identify the portions of the unexpended appropriations for the Section 8 program that accrued during the year and are attributable to the tenant-based and project-based programs, respectively, and disclose the amounts of budget authority in each program that are excess to current needs and therefore available for other uses; (2) include in HUD’s annual budget justification documents the amount of unexpended budget authority in the Section 8 assisted housing program that is in excess of current obligations; and (3) recapture amounts that accumulate above what is prudently needed to address contingent costs. In commenting on a draft of this report, HUD’s Chief Financial Officer said that HUD agreed with the report’s major findings, conclusions, and recommendations. In addition, HUD’s Office of Public and Indian Housing said that the report presents a balanced assessment. The CFO and officials of the housing office provided several comments to improve the report’s clarity, and we incorporated them as appropriate. Accurate budget estimates are essential to federal agencies meeting their fiscal responsibilities because such estimates facilitate sound policy decisions and effective funding trade-offs. 
In support of agencies being fiscally responsible, OMB requires them to submit reasonably accurate budget estimates. However, HUD has long-standing problems in submitting accurate estimates—since 1989, its estimates of Section 8 contract renewals have been either too low or too high. This inability to accurately forecast budget needs persisted into fiscal year 1998. We found that HUD had problems with its budget submission, but we also found that HUD had corrective actions planned or in process to improve its budgeting process. Specifically, we found the following: The budgeting process HUD used in fiscal year 1998 produced excessive estimates of key cost factors that, once discovered, led to HUD’s reducing its request for tenant-based contract renewal funding by about $1 billion. In its budget projection for fiscal years 1999 through 2002, HUD overestimated its need for funding to amend existing housing assistance contracts because accurate data were not available from its accounting system at the time. HUD has acknowledged many of its problems with its budgeting process and has begun implementing corrective actions that include changing its organizational structure to improve oversight among the staff responsible for formulating budget estimates. However, many of the changes HUD is making or has planned were not implemented in time to affect HUD’s initial formulation of its fiscal year 1999 estimate, and HUD has not prepared a timetable for implementing these changes. HUD’s fiscal year 1998 budget request contained errors and insupportable estimates that led to HUD’s overstating funding needs for its tenant-based contract renewals by over $1 billion. This overstatement was caused by an ineffective internal budget process that lacked adequate oversight and did not make effective use of actual expenditure data for the program. 
For example, insufficient review of the estimating methodology led to double-counting a large component of the average cost per assisted housing unit. Because this cost is a key variable in determining HUD’s contract renewal needs, the double-counting caused HUD to greatly overstate its estimate for renewing expiring contracts. In addition, HUD’s estimate contained contingency costs that could not be justified on the basis of program experience. In its fiscal year 1998 budget submission of February 1997, HUD used a value of $6,386 as the average unit cost for renewing tenant-based housing assistance contracts. Although this value is based on the program’s actual expenditure data for fiscal year 1996, it also includes several supplementary amounts for administrative fees paid to housing agencies, contingent or unexpected costs, and increased program expenditures caused by residents losing their welfare assistance in 1997 and 1998. However, we and HUD determined that adding these three amounts to the average unit cost either could not be justified or was not necessary. For the first amount—the administrative fee—HUD officials had already included this fee in the baseline unit cost; adding it again resulted in double-counting it. Specifically, the program’s fiscal year 1996 expenditure data that HUD obtained from the accounting system represented the total cost to HUD of providing rental assistance and, therefore, necessarily included the administrative fee. However, to develop the final fiscal year 1998 average unit cost, HUD added the fee again, raising the contract renewal estimate by approximately $700 million. We found, and program officials agree, that better coordination and oversight among the officials in the program office, the office of the comptroller, and the departmental budget office could have prevented this error. 
For example, the program office’s comptroller reviewed the actual disbursement data obtained from the accounting system but did not review the final average unit cost calculation until after HUD submitted its budget to the Congress. Moreover, departmental budget officials accepted the program office’s estimate without an independent review of the added costs and underlying basis for the estimate. The second supplementary amount was for covering unknown costs or contingencies. For this supplement, HUD added approximately $204 to the unit cost, or 2 weeks of disbursements. However, at the time that HUD developed this estimate, almost all housing agencies participating in the tenant-based program already had individual reserve accounts equal to at least 2 months of disbursements. These reserves could be used to cover contingencies such as rent increases and decreases in tenants’ income. Section 8 program officials stated that they added the 2-week reserve as another safeguard against the risk that families might lose rental assistance because of unexpected increases in program costs. The officials said, however, that they could not determine whether housing agencies actually needed additional funding or were using available reserves for unanticipated costs. They said that because the tenant-based information system could give them only 1 year’s worth of complete and reliable information on the use of reserves, a sufficient basis did not exist for making informed decisions about the need for contingency funding. The third supplementary amount that HUD used to develop its fiscal year 1998 unit cost was for mitigating the anticipated impact of welfare reform on Section 8 costs. For 1998, HUD valued the welfare supplement at $138 per unit ($46 in 1997 and $92 in 1998). However, after submitting its budget estimate to the Congress in February 1997, HUD determined that this amount was unnecessary. 
HUD found that its assumption that housing agencies would begin to feel a significant impact from welfare reform starting in 1997 was not borne out by what was happening across the country. Instead, the states’ early experiences with the impact of welfare reform showed little or no increased cost to the program as a result of the falling incomes of assisted housing residents. As a result, adding a cost factor to address the impact of welfare reform was not necessary. In addition to the inflated unit cost estimate, HUD’s fiscal year 1998 contract renewal estimate contained a line item requesting a contingency allowance of $162 million. Although program officials said that the funding was needed to cover unanticipated costs in the program, they could not provide supporting information to justify their request. Subsequently, HUD adjusted its fiscal year 1998 budget request and removed this request for funding. HUD’s Deputy Assistant Secretary confirmed that because HUD would use historical data as the basis for future budget estimates, HUD would make no future requests for contingency funding for the tenant-based program. As a result of misestimating the unit cost and using cost estimates in its February budget submission that HUD later determined to be unnecessary, in September 1997 HUD proposed—and the Congress accepted—changes in its contract renewal estimate that lowered the average unit cost by approximately 14 percent, from $6,386 to $5,499. As shown in table 3.1, for the 1,265,625 Section 8 housing certificates, vouchers, and moderate rehabilitation units being renewed, this change represented a decrease of $1.123 billion in the budget authority requested by HUD for its tenant-based program. HUD’s revised unit cost estimate produced the most substantial reduction to the original contract renewal estimate. 
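The size of this reduction follows directly from the figures HUD reported, and the arithmetic can be verified as follows:

```python
# Verifying the effect of the September 1997 unit-cost revision (table 3.1).
original_unit_cost = 6_386  # Feb. 1997 average unit cost, in dollars
revised_unit_cost = 5_499   # revised unit cost after removing the add-ons
units_renewed = 1_265_625   # certificates, vouchers, and mod rehab units

pct_decrease = (original_unit_cost - revised_unit_cost) / original_unit_cost
reduction = (original_unit_cost - revised_unit_cost) * units_renewed

print(f"{pct_decrease:.0%}")              # 14%
print(f"${reduction / 1e9:.3f} billion")  # $1.123 billion
```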
According to HUD’s Deputy Assistant Secretary responsible for the tenant-based program, HUD developed the revised unit cost using only the program’s historical expenditure data from the tenant-based information system. She also stated that, in the future, HUD would not supplement the average unit cost with additional amounts, even if they made sense from a policy standpoint, unless the supplementary amounts could be supported with historical or other data. Two of these supplements were an amount to reflect the impact of welfare reform and an amount to cover contingent costs. An official from HUD’s Office of Policy Development and Research stated that the amount to reflect welfare reform’s impact was removed from the average unit cost because states’ early experiences with welfare reform did not show an increased cost to the program. As part of its fiscal year 1998 budget request, HUD predicted an annual need of $150 million for fiscal years 1998 through 2002 to amend its contracts with public housing agencies that administer its Section 8 tenant-based and moderate rehabilitation assisted housing programs. Generally, amending contracts refers to the process of changing specific housing assistance contracts to add more funding. These contracts might need additional funding because the budget authority initially obligated to them—as long ago as 15 years—may not have been sufficient to provide adequate rental assistance over the life of the contract. In addition to the fiscal year 1998 budget request for amendment funding, HUD’s budget submission also predicted that the need for amendments to the tenant-based contracts through the year 2002 would be approximately $600 million. Although HUD has supporting documentation for its need for amendment funding for fiscal year 1998, the Department’s prediction of needing future amendment funding is not consistent with the significant changes HUD has made to its contracting practices. 
In fiscal year 1995, HUD began reducing the terms of renewed expiring contracts from 3 to 5 years down to 1 year. Therefore, under this policy HUD will renew the contracts receiving amendment funding in fiscal year 1998 for 1 year after their expiration. This change to shorter contract terms has made estimating contract renewal needs more certain because changes in housing costs or tenants’ incomes can be predicted more easily over the shorter period. In addition, a HUD official told us that the tenant-based information system more accurately estimates funding needs 1 year at a time and, therefore, greatly lowers the risk of underfunding contracts and could ultimately eliminate the need for amendment funding. This official also said that most of the tenant-based contracts should have 1-year terms by fiscal year 2003. Therefore, because of the greater certainty about the future costs of a program operating under contracts with mostly 1-year terms, HUD does not appear to need additional funding for tenant-based amendments beyond fiscal year 1998. HUD officials have made or plan to make several important changes to HUD’s information system and organization to address the problems that they and we have identified in HUD’s budgeting process. In response to the fiscal year 1995 Financial Statement Audit prepared by HUD’s OIG, HUD recently enhanced its tenant-based information system and plans changes to related procedures to improve the accuracy of its budget estimates. Also recognizing the need for improving coordination and oversight among the HUD officials involved in preparing and reviewing the budget submission, HUD moved the Office of Budget under the control of the Office of the Chief Financial Officer. HUD officials believe that these improvements will correct past problems and enhance their efforts to more accurately estimate their Section 8 budget needs. 
HUD is enhancing the Section 8 tenant-based information system by automating and integrating the “reservation pricing” used to estimate the amount of budget authority each housing agency is likely to need annually to operate its tenant-based program. In the past, HUD field offices deducted tenants’ expected contributions to rent (generally about 30 percent of a tenant’s family income) from the local fair market rent to develop an estimate of the cost to HUD of assisting low-income families to live in decent housing during the coming year. However, after obtaining the estimated costs, HUD did not compare these estimates with the actual cost of that assistance for the most recently completed year. As a result, HUD overfunded many tenant-based contracts, and the excess funding contributed to the accumulation of program reserves. By creating a “reservation pricing” subsystem within the information system, HUD now will use the actual historical cost data to evaluate the fair market rents and tenants’ contributions. HUD officials believe that this process will eliminate overfunding and improve the accuracy of the budget estimating for contract renewals. HUD also plans the following additional modifications to the tenant-based Section 8 information system and the budgeting procedures used to estimate contract renewal needs, although our work did not focus on evaluating the potential benefits of these actions: HUD plans to modify its information system to calculate the actual average cost per unit before completing the annual settlement process at each housing agency. This change will verify the reasonableness of the average cost per unit before HUD settles all program costs at the end of the year with the housing agency. 
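The "reservation pricing" calculation described above, that is, the local fair market rent less the tenant's expected contribution, checked against actual cost for the most recently completed year, can be sketched as follows. This is a minimal illustration with hypothetical figures; the actual HUD subsystem and its data are not public, and the function names are ours.

```python
# Sketch of the "reservation pricing" estimate described above,
# using hypothetical figures (not actual HUD data).

def estimated_annual_subsidy(fair_market_rent: float, family_income: float) -> float:
    """Annual HUD subsidy estimate: monthly fair market rent minus the
    tenant's expected contribution (generally about 30 percent of monthly
    family income), annualized."""
    tenant_contribution = 0.30 * (family_income / 12)  # monthly contribution
    return max(fair_market_rent - tenant_contribution, 0) * 12

def check_against_actual(estimate: float, actual_last_year: float,
                         tolerance: float = 0.10) -> bool:
    """The enhancement described above: flag estimates that exceed last
    year's actual per-unit cost by more than a tolerance, a signal of
    potential overfunding."""
    return estimate <= actual_last_year * (1 + tolerance)

estimate = estimated_annual_subsidy(fair_market_rent=700, family_income=12_000)
print(round(estimate))  # 4800
```

The tolerance parameter is our assumption; the point is only that an estimate is no longer accepted without comparison to actual historical cost.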
In response to the data access problems noted by HUD's OIG, HUD plans to improve the security over access to the unit tables within the information system and to compare monthly the number of contracted units in the system to the previous month's total to reconcile any differences. Finally, to maintain better control over the amount of program reserves, HUD plans to no longer extend expiring tenant-based contracts with excess budget authority within the program reserves. HUD has recognized the need for improving coordination and oversight among program, budget, and financial management officials in order to achieve more reliable budget estimates, including estimates of Section 8 contract renewals. HUD's Management Reform Plan states that the Chief Financial Officer has lacked the ability to link budgeting with strategic planning and financial management because HUD's budget operations have been fragmented and disjointed, preventing clear accountability and the necessary coordination. As a result, HUD has recently placed all departmental budget operations under the Office of the CFO to ensure that budgeting is integrated with financial management oversight. HUD also is in the process of implementing two changes directly related to the budget estimate. First, each program division is hiring its own chief financial officer to mirror the operations of the Department's Office of the CFO. Previously, the program division's budget director and comptroller reported to a deputy assistant secretary. Under the new structure, the division's budget director and comptroller will report to the program's chief financial officer, who will coordinate between the Department's Office of the CFO and the program office to ensure adequate oversight. However, at the time of our review, a chief financial officer for the tenant-based program had not been hired and the program's comptroller had been detailed to the Department's Office of General Counsel.
Second, the Office of the CFO is developing a model to analyze all budget submissions, including the contract renewal estimate. Previously, the departmental budget office accepted the cost estimates with only limited review of the supporting documentation that detailed how the estimates were developed. Although the Office of the CFO also plans to develop budget estimating policies and procedures that build in enough time for adequate coordination, oversight, and communication, these plans have not been completed. HUD’s CFO did state, however, that the planned improvements should be operational in time for HUD’s fiscal year 2000 budget submission. In addition, according to HUD’s Director of the Office of Budget, HUD submitted its fiscal year 1999 contract renewal estimate to OMB in September 1997 with limited analysis. He also said that because of time constraints, his office was limited to reviewing the budget estimates for their numerical accuracy and could not question the estimates’ reasonableness or their underlying basis. For example, he stated that the Budget Office was unaware of the program office’s budgeting assumption that all tenant-based certificates are in constant use. This assumption, however, does not reflect the current practice of housing agencies that administer the tenant-based program. As stipulated by current appropriations law, after a certificate is turned in by the current holder, the housing agency must wait 90 days before reissuing it to a new household. But because the budget estimate assumes constant use, the effect is to estimate more funding than will actually be needed although HUD could recapture such excess funding when it analyzes the housing agency’s reserve account at the end of the year. HUD’s Office of the CFO also is leading the effort to improve HUD’s financial management performance by linking HUD’s budget functions with performance measures and program delivery. 
Specifically, to improve financial management oversight, HUD will consolidate the 10 accounting divisions in HUD’s field offices into one office responsible for all accounting operations. In addition, the Office of the CFO will develop management controls to ensure that employees are accountable for the Department’s fiscal integrity. HUD also has implemented a risk management program, directed by the Office of the CFO, to protect resources from fraud, waste, and abuse and to maintain the agency’s financial integrity. As part of this effort, the CFO has collected all financial management deficiencies identified by HUD’s OIG, GAO, and others and is working to correct these deficiencies within all HUD programs. While these actions appear to be responsive to the problems identified, HUD has no specific time frames for their completion. HUD’s fiscal year 1998 budget process had inadequate oversight of procedures and insupportable estimates or assumptions underlying key values. Although HUD recognizes these problems and has plans to correct them, it is too soon to measure the effectiveness of the remedies because many have not been implemented. More importantly, HUD has a lengthy history of budget estimating problems and faces some uncertainty now about personnel to oversee and make policy for the budgeting function. Therefore, we are concerned with the agency’s ability to sustain such recent corrective actions and implement those it has planned. Furthermore, many of the actions that HUD has taken or plans to take were not completed at the time HUD prepared its initial contract renewal budget estimate for fiscal year 1999 and sent it to OMB in September 1997. We believe that HUD’s Office of the CFO will need to exercise strong leadership to complete these changes and to develop specific budget procedures to guide the new organizational changes. 
Otherwise, HUD’s fiscal year 2000 budget estimate—which HUD will begin to prepare in May 1998—and future estimates may also misestimate program funding needs. Because HUD has recognized many weaknesses in its budget process for estimating contract renewal needs and has undertaken significant actions to improve its process, we are not making recommendations at this time. However, we will continue to monitor HUD’s budget process and review its fiscal year 1999 budget submission to determine whether cost estimates are adequately supported. In commenting on a draft of this report, HUD agreed with the report’s major findings and conclusions. HUD also provided several comments to improve the report’s factual representation and to ensure clarity. In particular, HUD believed that we should recognize that its initial estimate of the impact of welfare reform on fiscal year 1998 budget needs was based on the best information available. We agreed and made changes to reflect this concern.
Pursuant to a congressional request and a legislative requirement, GAO: (1) reviewed the accuracy of the Department of Housing and Urban Development's (HUD) estimate of unexpended budget authority in the Section 8 tenant-based program; and (2) assessed HUD's budget formulation process for this program. GAO noted that: (1) in 1997, HUD estimated that $20.7 billion in unexpended budget authority existed in the Section 8 tenant-based program and that $9.9 billion of that amount was in excess of known program needs; (2) this is funding that housing agencies received under contracts with HUD but did not expend because the funding was not needed as planned to make housing assistance payments to landlords on behalf of low-income families; (3) because HUD based its estimate largely on the data in its tenant-based program's information system--which HUD's Office of Inspector General and an independent audit firm have tested and determined to be reliable--GAO believes that the estimate is reasonably accurate; (4) after Congress rescinded a total of $4.2 billion in June and October 1997 and HUD set aside $2.2 billion for unanticipated costs and to account for future transactions, the balance of $9.9 billion in excess unexpended budget authority was reduced to about $3.5 billion in October 1997 and placed in a congressionally established Reserve Preservation Account; (5) the budget formulation process that HUD used to prepare its fiscal year (FY) 1998 budget request for renewing Section 8 tenant-based contracts did not produce an accurate estimate of needs; (6) key HUD offices did not adequately oversee critical steps in the process, and the process did not require reasonable justification for substantial portions of the estimate--including several hundred million dollars proposed for contingency costs; (7) in addition, although at the time of its FY 1998 budget submission HUD had an estimate of the impact of welfare reform on the cost of the Section 8 program, more recent
information caused HUD to conclude that including this estimate in the budget request was not necessary; (8) as a result, HUD eventually lowered its FY 1998 budget estimate for renewing Section 8 contracts by $1 billion; and (9) to improve its process, HUD has further enhanced its tenant-based program's information system, consolidated its budget development with strategic planning and financial management, and changed its budget process; HUD also plans additional changes in these areas but does not have a timetable for accomplishing them.
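The figures reported above reconcile arithmetically: the $9.9 billion in excess unexpended budget authority, less the $4.2 billion rescinded by Congress and the $2.2 billion HUD set aside, leaves the roughly $3.5 billion placed in the Reserve Preservation Account. A quick check:

```python
# Arithmetic check of the unexpended-budget-authority figures reported
# above (all amounts in billions of dollars, as stated in the report).
excess = 9.9           # excess unexpended budget authority
rescissions = 4.2      # rescinded by Congress in June and October 1997
set_aside = 2.2        # held for unanticipated costs and future transactions

reserve_preservation_account = excess - rescissions - set_aside
print(f"${reserve_preservation_account:.1f} billion")  # $3.5 billion
```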
State has authority to acquire, manage, and dispose of real property abroad. Specifically, the Foreign Buildings Act (Act) of 1926, as amended, authorizes the Secretary of State to acquire by purchase, construction, exchange, or lease sites and buildings in foreign cities for use by diplomatic and consular establishments of the United States. The Act allows State to alter, repair, furnish, and dispose of these properties, and to provide residential and office space and necessary related facilities to federal agencies abroad. It also authorizes the Secretary to apply disposal proceeds toward real property needs or to deposit proceeds into the Foreign Service Buildings Fund and use the proceeds for authorized purposes. OBO manages State’s real property abroad to support U.S. government presence at embassies and consulates, which are also known as missions or posts. This office is responsible for managing U.S. government-owned and government-leased real property, which includes land, structures, and buildings such as embassies, warehouses, offices, and residences. OBO coordinates directly with officials at posts tasked with managing the post’s real property. Posts are responsible for implementing OBO policies related to the management, acquisition, disposal, and reporting of real property, outlined in State’s FAM. Table 1 below provides an overview of OBO’s and the posts’ roles and responsibilities for real property management. In 2004, the administration added managing federal real property to the President’s Management Agenda and the President issued an executive order directing executive agencies to submit real property information annually for inclusion in a single, comprehensive database, which is now known as the Federal Real Property Profile (FRPP) that provides an annual report on the government’s real property holdings. 
State is currently undertaking a multiyear, multibillion-dollar capital-security construction program to replace 214 of its facilities abroad due to security concerns. State is taking these steps due to continuing threats and incidents such as the 1998 terrorist bombings of the embassies in Dar es Salaam, Tanzania, and Nairobi, Kenya, that killed more than 220 people and injured 4,000 others. The program incorporates the requirements of the Secure Embassy Construction and Counterterrorism Act of 1999, which directs State to replace facilities at vulnerable posts and requires that all new diplomatic facilities be sufficiently sized to ensure that all U.S. government personnel at a post work onsite. Construction projects are prioritized by State's annual risk matrix, which ranks facilities based on their vulnerability across a wide range of security threats. In 2004, to aid in the construction of new embassies, a related program, the Capital Security Cost Sharing (CSCS) program, was authorized, requiring agencies with personnel overseas to provide funding for the construction of new, secure, and safe diplomatic facilities for U.S. government personnel overseas. State expects funding of $2.2 billion per year over a 5-year period through fiscal year 2018 to carry out new construction projects. Our analysis of State's real property portfolio indicated that the overall inventory has increased. State reported that its leased properties, which make up approximately 75 percent of the inventory, increased from approximately 12,000 to 14,000 between 2008 and 2013. However, comparing the total number of owned properties between years can be misleading because State's method of counting these properties has been evolving over the past several years. OBO officials explained that in response to changes in OMB's and FRPP's reporting guidance, they have made efforts to count properties more precisely.
For example, OBO has focused on separately capturing structural assets previously recorded as part of another building asset, such as perimeter walls, guard booths, and other ancillary structures. As a result of this effort, State recorded approximately 650 additional structural assets in its fiscal year 2012 FRPP report and approximately 900 more structures the following year in its fiscal year 2013 FRPP report, according to OBO officials. Additionally, OBO officials told us that former Department of Defense (DOD) properties in Iraq and Afghanistan were transferred to State; the largest of these transfers occurred in 2012 when State assumed responsibility from DOD for approximately 400 properties in Iraq. State reported additional changes in its real property portfolio, which are described below. Acquisitions: State reported spending more than $600 million to acquire nearly 300 properties from fiscal year 2008 through 2013 (see fig.1). State uses two sources of funding to acquire real property. It acquires land for building new embassy compounds (NEC) with funding from the CSCS program. It acquires residences, offices, and other functional facilities with proceeds from the disposal of unneeded property. In fiscal years 2008 through 2013, State reported spending approximately $400 million of these disposal proceeds to acquire approximately 230 properties. Disposals: From fiscal years 2008 through 2013, State reported selling approximately 170 properties. In doing so, it received approximately $695 million in proceeds (see fig.1). According to State, property vacated when personnel move into newly constructed facilities is the largest source of property that can be disposed of. When State completes construction of a NEC, personnel previously working in different facilities at multiple locations are then collocated into the same NEC, a move that provides State an opportunity to dispose of its former facilities. 
Further information on State’s acquisitions and disposals from fiscal year 2008 through 2013, can be found in figures 1 and 2 below. Leases: The majority of State’s leased properties are residences. State reported spending approximately $500 million on leases in 2013 and projects a potential increase to approximately $550 million by 2016 as growing populations in urban centers around the world push rental costs higher and the U.S. government’s overseas presence increases. OBO provides guidance to posts for disposing of unneeded properties as the post prepares to move into a NEC. In Belgrade, OBO is working with the post to sell an old embassy that is no longer needed following the completion of Belgrade’s NEC. Post officials told us that relocating to the NEC in April 2013 allowed them to market their old embassy and terminate multiple leases. In London, State sold its existing embassy building in August 2013 to fund the construction of a NEC. State is leasing the existing building until construction of the NEC is completed, which is expected in 2017. NEC construction has also provided State the opportunity to sell residential properties that are not located near the new embassies under construction. For example, according to post officials in London, transitioning to the NEC in London allowed State to make cost effective changes in its residential property portfolio by selling valuable older properties near the current embassy and purchasing newer lower cost residences near the NEC. State reports these types of real property transactions to Congress quarterly. Also, as required, State submits annual reports to Congress listing surplus overseas properties that have been identified for sale. For example, our analysis found that State listed 39 properties that it identified for disposal in its fiscal year 2013 annual report to Congress. 
Some properties identified as unneeded in State’s fiscal year 2013 FRPP report were not included in the 2013 annual report to Congress, such as a former embassy in Tashkent, Uzbekistan; land in New Delhi, India, and Manila, Philippines; and various properties in Beijing, China. According to OBO officials, the annual reports to Congress do not include unneeded properties they expect to retain or have determined they cannot sell for various reasons, such as host government restrictions related to diplomatic or political differences. For example, according to a State IG report, after State refused to pay what it considered an illegal tax to support the Brazilian social security system in 1996, the government of Brazil blocked the disposal of all U.S. diplomatic properties in the country. OBO officials told us that they do not report unneeded properties that cannot be sold because the Congressional reporting requirement is to list surplus properties that have been identified for sale. State’s officials said that they consider many factors in managing their real property portfolio, specifically in terms of identifying and disposing of unneeded property, as well as in purchasing and leasing property. The officials also described challenges associated with each of those aspects of managing the real property portfolio. State collects data on costs associated with properties identified for disposal to track costs, but we found that posts did not use the required code to track these costs consistently. As a result, this raises questions about the extent to which posts worldwide are using the code as State intends, and the extent to which State is receiving accurate and comprehensive cost information about its properties. 
We requested to review 202 files from fiscal year 2008 through 2013 on acquisitions, disposals, and leases, but were provided only 90 files because, according to State officials, the files were not centrally located and were too time-consuming to find and provide within the time frame of our review. State was able to provide most of the “core” documents agreed to, although some of the documentation was missing for the 90 files provided. For example, State provided all 36 of the requested lease files, but some documentation that FAM and OMB direct State to retain, and that State agreed to provide, was missing for 30 of the 36 lease files provided. OBO officials told us that they work with posts to identify and dispose of unneeded properties primarily using factors outlined in FAM, along with other strategies. FAM lists 18 factors that OBO and posts might consider when identifying and disposing of property (see table 2), such as whether (1) the property has excessive operating costs, (2) State used the property only irregularly, or (3) the property is uneconomical to retain. Officials at two of the four posts we visited told us that they were aware of and use the guidelines to identify unneeded property. Officials at a third post that owned property were unaware of the guidelines but told us they used excessive maintenance costs to identify properties for disposal. Excessive maintenance cost is one of the 18 factors listed in FAM. OBO also uses other strategies to help identify unneeded property, such as: (1) reviewing the Department's internal property database to identify properties newly classified by posts as unneeded, (2) monitoring new construction to identify property vacated as personnel move to new facilities, (3) reviewing reports of State's Office of Inspector General (IG) for recommendations on disposals, and (4) evaluating changing political conditions and evolving post conditions to help right-size a post's real estate portfolio.
Once posts identify and OBO approves a property as unneeded, OBO takes the lead in disposing of the property. For example, OBO sold residences in London in fiscal year 2012 and an embassy in fiscal year 2013 (see fig. 3), and the Department received approximately $497 million in proceeds that State is using to design and build the new London embassy and to obtain replacement residences closer to the new embassy (see fig. 4). OBO also sold a residence in Helsinki in fiscal year 2011 and received approximately $657,000 that was deposited back into its asset management account for other real property needs worldwide. OBO officials acknowledged challenges with disposing of unneeded properties. These challenges included the condition and location of facilities, changing missions in countries, and diplomatic or political situations that require State to retain property previously marked as unneeded. For example, unneeded residential units can be in poor condition, which makes selling them challenging. Also, officials told us that State's primary mission of diplomacy overrides property disposal. In countries such as Mexico, Brazil, and India, policy changes within the diplomatic mission have led to retaining property previously marked as unneeded. For example, in Ciudad Juarez, Mexico, a new consulate was built; however, State retained property to accommodate and expand its mission. Officials at the posts we visited also described some past and recurring challenges to disposing of unneeded real property: Officials at the Helsinki and Sarajevo posts told us that differing opinions between OBO and posts about whether to dispose of or retain unneeded property presented challenges. For example, officials in Helsinki told us they wanted to dispose of two unneeded residential properties in 2014 because of excessive maintenance costs and a longer commuting time due to the need to take mass transit because parking space was eliminated at the renovated embassy (see fig.
5). However, OBO officials told the post to retain and assign staff to the two properties for an additional 3 years. OBO believed that marketing the two properties, located next to two additional unneeded properties it had been attempting to sell since 2011, could depress the disposal price if all the properties were marketed at the same time. However, post officials believe that selling the properties now would be more financially beneficial than retaining them for an additional 3 years, because the costs of bringing the properties to a state of good repair and maintaining them would outweigh any potential increase in proceeds. OBO officials told us that they conduct an internal review to determine the financial benefit of retaining versus selling properties in these situations as the agency attempts to maximize the disposal value of property. Officials at the Sarajevo post told us that they have had ongoing discussions with OBO about retaining their old embassy and converting it to a new Ambassador's residence. Post officials told us that OBO originally wanted the post to dispose of its interest in the embassy—which State has been leasing for only $1 per year since 1994, with the option to continue the lease at this rate for 150 years. OBO officials told us that, at this below-market lease rate, they anticipated that the disposal of this leasehold interest could generate proceeds for State. However, OBO and post officials told us that the host government denied the Department's request to transfer the lease to a third party. Given the Department's inability to transfer or sell its interest in the property, OBO and the post reached an agreement to retain the embassy and convert it into an Ambassador's residence. When the conversion is complete, the post will terminate the lease for its current Ambassador's residence, which has an annual lease cost of $144,000.
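The retain-versus-sell review OBO describes comes down to weighing the expected additional proceeds from a later sale against the cost of holding the property in the interim. A minimal sketch, with illustrative figures rather than actual post data, and ignoring discounting:

```python
# Hypothetical sketch of the retain-versus-sell comparison described
# above; all figures are illustrative, not actual post or OBO data.

def net_benefit_of_waiting(price_now: float, price_later: float,
                           annual_holding_cost: float, years: int) -> float:
    """Expected gain from a later sale minus the cost of holding the
    property (maintenance, utilities, guards) in the interim.
    Negative means selling now is more financially beneficial."""
    return (price_later - price_now) - annual_holding_cost * years

gain = net_benefit_of_waiting(price_now=1_500_000, price_later=1_600_000,
                              annual_holding_cost=60_000, years=3)
print(gain)  # -80000: holding costs outweigh the expected price increase
```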
Officials at the Helsinki and Belgrade posts told us that OBO's process for appraising and marketing properties for sale was a challenge in disposing of properties in a timely manner. Specifically, the post officials thought the appraisals from OBO's real estate firm were too high and made the properties unsellable. OBO acknowledged that ensuring an accurate appraisal price presents challenges and, therefore, it also reviews appraisals internally. In addition, post officials in Helsinki and Belgrade told us that the global real estate firms OBO hired to market their properties did not have local offices and thus may not have fully understood the local real estate market. For example, Belgrade post officials told us that an affiliate office in Hungary was marketing their old embassy, and that a Hungarian phone number was the primary number used to market the property, which they believe made selling the property more challenging (see fig. 6). OBO officials told us that they believe the global firms they contract with are more experienced than many local firms. Officials at the Belgrade post told us about zoning challenges with the host government that have delayed the disposal of their old embassy. They told us OBO notified the post that it would sell the old embassy once the new embassy had been built. However, post officials told us they had to resolve zoning issues with the host government before the embassy could be sold. OBO officials told us that the old embassy was zoned for diplomatic use and that the process to change the zoning to mixed use is under way. OBO and post officials have worked with the host government, and post officials believe the decision to zone the property for commercial and residential use will increase the disposal price of the property.
OBO collects data on the costs associated with unneeded properties identified for disposal, but the data do not specify the costs associated with individual properties. Once OBO approves a property as unneeded, each post should charge a specific internal accounting code designated for property acquisition and disposal costs. OBO officials told us that each post is required to charge costs for such property to this code so OBO can track the costs to maintain the property before it is disposed of by State. For example, these types of costs would include utilities, legal fees, and security services. Posts charged approximately $11.1 million to this code from fiscal year 2008 through 2013, according to the data provided by OBO. We found that the four posts we visited did not use this code consistently. State's Foreign Affairs Handbook instructs posts to use the code to record costs related to the disposal of unneeded real property, but does not describe in detail the types of costs that can be charged to this account. Specifically, the Foreign Affairs Handbook includes the following information on this accounting code: “7541 Real Estate-Program Costs: Costs in support of the acquisition and disposal of State real property.” OBO officials told us the costs for unneeded properties that should be charged to this code include disposal costs for government-owned buildings, such as guard, maintenance, utility, and other building operating costs of vacant or unneeded property until it is sold. Although State relies on this account to monitor costs associated with the disposal of unneeded properties, on our site visits we found that officials at one post did not know they could use this account for costs related to properties identified for disposal, such as utility bills and condominium fees incurred while marketing the property. This post charged these costs to its routine maintenance account, which is not intended for unneeded properties.
Post officials thought the code for unneeded properties was used to process the disposal, and not for ongoing costs related to the property while it was being marketed for disposal. Officials at the other two posts we visited that had unneeded property for disposal used the code to charge all of their related costs while they marketed the property for disposal. We also found that posts in other countries with unneeded properties identified for disposal in fiscal year 2013, such as posts in Jamaica, Ukraine, Tunisia, and Namibia, had not charged expenses to this account during that fiscal year. OMB's capital-planning guidance states that reliable data are critical to managing assets effectively. According to this guidance, only valid, complete, relevant, reliable, and timely data can help the agency make informed decisions regarding the allocation of resources. Additionally, government-wide internal control standards state that pertinent financial and operating information should be recorded and communicated to management and others within a time frame that enables them to carry out their internal control and other responsibilities. State will be unable to capture and maintain complete and accurate information on the operating costs for properties identified for disposal if posts do not consistently charge costs related to these properties to the designated account. This raises questions about the extent to which posts worldwide are using the code as State intends and the extent to which State is receiving accurate and comprehensive cost information about its properties. For example, State may not have the information it needs to decide whether to accept or decline an offer for a property when attempting to maximize revenue from a property disposal.
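The inconsistency described above suggests a simple oversight check: flag any post that reported properties identified for disposal but charged nothing to the designated accounting code (7541) that fiscal year. A sketch with illustrative post names and amounts (not actual State data):

```python
# Sketch of a consistency check implied by the finding above: posts with
# unneeded properties identified for disposal should show charges to
# code 7541. Post names and amounts below are illustrative only.
properties_for_disposal = {"Jamaica": 2, "Ukraine": 1,
                           "Belgrade": 3, "Helsinki": 4}   # count per post
code_7541_charges = {"Belgrade": 85_000.0,
                     "Helsinki": 120_000.0}                # dollars charged

# Flag posts with disposal properties but no charges to the code.
flagged = sorted(post for post, count in properties_for_disposal.items()
                 if count > 0 and code_7541_charges.get(post, 0) == 0)
print(flagged)  # ['Jamaica', 'Ukraine']
```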
In addition, posts may not have sufficient funding for routine property maintenance because they are using their designated routine maintenance funds on unneeded properties, which could reduce the amount of funding they have available for maintenance of other properties. This could impact the upkeep of posts' current real property portfolios and increase the amount of deferred maintenance. We have previously reported that deferring maintenance and repair can reduce the overall life of federal facilities, lead to higher costs in the long term, and pose risks to safety and agencies' missions. OBO officials said that they would like to reduce the number of leased properties in State's portfolio and increase the number of federally owned properties. OBO officials told us that owning more housing will protect State from lease-cost risks such as exchange-rate fluctuations, rapid inflation, and rising property rents. The officials added that currently 15 percent of State's residential properties are federally owned, and officials would like to eventually increase this share to 40 percent. They told us that, based on the average cost of a property's acquisition along with the expected yearly reinvestment of disposal proceeds, it will take about 50 years to reach this ownership target. Officials believe it is not cost-effective or feasible to own 100 percent of properties due to the inability to own properties in some countries, the high maintenance costs of owning properties in some countries, and the lack of flexibility in dealing with staffing changes. OBO officials told us that they consider the unique facts and circumstances of each country when deciding whether to lease or acquire properties. We have previously reported on the federal government's over-reliance on leasing, which can be more expensive in the long term than acquiring property. State relies on its Opportunity Purchase Program to fund real property acquisitions and to reduce its need to lease space.
The Opportunity Purchase Program reinvests proceeds from property disposals to acquire real properties other than new embassy construction. According to OBO officials, the program allows State to acquire properties in order to avoid costs; State officials conduct a lease-versus-purchase analysis to measure the savings from owning rather than leasing over the time frame they expect to retain a property. OBO officials told us that over the last several years the program has generated investment returns from its acquisitions that typically range from 7 percent to 10 percent. As funding from disposals becomes available, OBO reviews attractive purchasing markets and security needs at the approximately 275 posts and narrows down purchasing opportunities to 12 to 15 posts. OBO officials told us they notify the posts that have been selected for the program, and each post provides acquisition opportunities for OBO to review. OBO officials told us that disposals are difficult to forecast on an annual basis, making planning and funding for these acquisitions difficult. The Belgrade post is an example of where State has employed the Opportunity Purchase Program. State acquired four residential units in Belgrade for approximately $2.1 million in fiscal year 2013 (see fig. 7). According to OBO, from fiscal year 2006 through 2013, the Opportunity Purchase Program has produced approximately $16 million annually in lease cost avoidance and will provide another projected $6 million in lease cost avoidance once all pending acquisitions are completed. Post and OBO officials we interviewed expressed similar views on the preference for owning versus leasing based on the real estate market in each post’s location. Post and OBO officials told us that the conditions of a specific location, such as the local real estate market and the mission of the post, influence the decision to own or lease. 
For example, post officials in Helsinki told us that properties are costly to acquire and expensive to maintain in the area. They said leasing is a better option because it provides flexibility when staffing changes occur, and the property owners in the area are reliable and responsive. Post officials in Sarajevo told us that because of the instability of the real estate market and possible future changes in embassy staffing, it is more practical to lease residential housing. On the other hand, post officials in Belgrade told us that they would like to own more residential units because of the difficulty in finding quality housing to lease. OBO officials told us they prefer a mix of owned and leased housing to provide a stable housing pool, manage rental costs, and provide flexibility as mission requirements change, and they seek to acquire housing in markets where quality housing is available and where it is cost effective to own rather than lease. In addition to acquisitions, OBO and post officials described several steps they have taken to reduce costs associated with leasing. OBO reviews its highest-cost expiring leases annually to determine if State is obtaining a market rate for these properties and if leases should be renewed or replaced. Officials told us that this review includes the 100 most costly leases worldwide and is used to assist posts, which take the lead in monitoring and securing lease renewals. OBO officials told us that after this review in fiscal year 2014, they determined that 30 percent of leases were prospects for exploring whether rents could be reduced. Under the FAM, appraisals or other documentation, such as a market study or a design review, are used for each acquisition and renewal of a major lease. OBO meets this guidance by providing fair-market rental estimates, market studies, surveys, and legal direction for posts. OBO is attempting to maximize the cost effectiveness of its leased portfolio. 
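The own-versus-lease decision officials describe can be framed as the net-present-value comparison that OMB directs agencies to make. The sketch below is a simplified, hypothetical illustration of such a lease-versus-purchase analysis; every figure, including the discount rate, holding period, and property costs, is invented for the example and does not come from State's files or from OMB's prescribed rates:

```python
# Simplified, hypothetical lease-versus-purchase comparison.
# All dollar amounts and rates are invented for illustration; an actual
# analysis would also consider taxes, inflation, and residual-value risk.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def lease_vs_purchase(annual_rent, purchase_price, resale_value,
                      annual_upkeep, years, discount_rate):
    """Return (NPV of leasing, NPV of owning) over the holding period.
    Costs are positive; resale proceeds reduce the cost of owning."""
    lease_flows = [0] + [annual_rent] * years          # rent paid years 1..N
    own_flows = [purchase_price] + [annual_upkeep] * years
    own_flows[-1] -= resale_value                      # resale in final year
    return (npv(lease_flows, discount_rate),
            npv(own_flows, discount_rate))

lease_cost, own_cost = lease_vs_purchase(
    annual_rent=500_000,       # hypothetical rent at the major-lease threshold
    purchase_price=6_000_000,  # hypothetical acquisition cost
    resale_value=5_000_000,    # hypothetical resale proceeds after 20 years
    annual_upkeep=100_000,
    years=20,
    discount_rate=0.03)
print(f"Leasing NPV: ${lease_cost:,.0f}")
print(f"Owning NPV:  ${own_cost:,.0f}")
```

Under these invented figures owning comes out cheaper, which mirrors the lease-cost-avoidance rationale officials gave for the Opportunity Purchase Program; with a shorter holding period or a weaker resale market, the same comparison can favor leasing.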
OBO officials told us they implemented a rental benchmark program in 2007 to help ensure the U.S. government pays the prevailing market rate and does not overpay for leased housing. Officials told us that 25 posts were involved with the program when it began in 2007 and that it covered 171 posts in 2013. OBO works with posts and contracts with real estate experts to provide rental ceilings for leased residential properties at each post. OBO uses these ceilings to set a cap on the amount a post can spend on leased residential property, and if a post exceeds that cap, OBO must approve a waiver. OBO officials told us that they conduct a quarterly review of the posts to see that they are in compliance and that the program gives posts an incentive to stay within their rental ceilings and secure cost-effective leases. Belgrade post officials spoke highly of the program because, by providing a more realistic ceiling, it has reduced the post’s administrative burden in seeking waivers and allowed the post to secure housing in a more timely manner. Also, OBO officials told us that the program has resulted in savings by slowing the growth of leasing costs. Post and OBO officials told us that they proactively renegotiate leases to reduce costs. Officials at all four posts we visited told us that their locally employed staff had established strong working relationships with property owners from years of real estate experience. Post officials told us that the locally employed staff were instrumental in negotiating reduced lease costs. For example, one post official told us that the post secured office space for 30 percent below market value, and officials from another post told us that they were in the process of securing a new leased warehouse space that would save $50,000 to $80,000 per year because of the expertise of the local staff working at the post. In addition, posts and OBO have successfully renegotiated leases since fiscal year 2011 in St. 
Petersburg, Russia; Paris, France; La Paz, Bolivia; Budapest, Hungary; and Tokyo, Japan, that have produced approximately $3.5 million in savings. Also, OBO officials told us that, in their estimation, the lease waiver program avoided $43 million in lease costs by working with overseas posts to locate less costly property, renegotiating lease terms, and rejecting approval of proposed rent increases or higher-cost replacement properties. OBO could not provide us all the real property files we requested for acquisitions and disposals for fiscal years 2008 through 2013, except for the files pertaining to leases. Specifically, we requested 202 files, which included property disposals, acquisitions, and leases, but OBO stated it was only able to provide 90 of the files because the remaining files were not centrally located and were too time-consuming to find and provide within the time frame of our review. OBO agreed to provide us “core” documents for acquisition and disposal files; however, some of the documentation was missing in the files we reviewed. In addition, although OBO was able to provide all the lease files requested, we found the lease files to be incomplete based on FAM and OMB guidance (see table 3). Without the missing files and documentation, it is unclear how efficiently and effectively State is managing its overseas real property. Acquisitions and Disposals: Under the FAM, OBO and posts should create and preserve records containing adequate and proper documentation of the decisions, procedures, and transactions or operations of the Department and posts. Further, Standards for Internal Control in the Federal Government states that an agency should establish control activities to ensure that the agency completely and accurately records all transactions. These standards explain that control activities include the creation and maintenance of related records that provide evidence of execution of these activities as well as appropriate documentation. 
OBO officials told us that they were unable to provide all of the information for acquisitions and disposals as requested because the files were not centrally located, were maintained by different groups within State, and were too time-consuming to find and provide within the time frame of our review. Thus, OBO officials agreed to provide what they considered “core” documents, which were a subset of the documentation we requested based on our analysis of FAM and OMB guidance. State was able to provide most of the “core” documents agreed to, although some of the documentation was missing. For example, we found instances of acquisition files missing deeds and disposal files missing deposit slips, both of which were core documents State agreed to provide. Furthermore, since we received only core documents, we could not determine whether the work to meet additional FAM and OMB guidance was conducted and the records were missing, or whether this work was not conducted at all. Without this information, it is unclear whether State is consistently following its internal FAM and external OMB guidance, and how State officials made real property decisions. These findings are similar to those of State’s IG, which found significant vulnerabilities due to inadequate file documentation that could potentially expose the Department to substantial financial losses. Leases: State was able to provide all 36 of the requested lease files, but some documentation listed in FAM and OMB guidance was not in 30 of the 36 files we reviewed. For example, State guidance directs OBO to complete documentation for leases such as a lease agreement and documentation of OBO’s approval. Additionally, OMB directs executive branch agencies, such as State, to conduct a lease-versus-purchase analysis when deciding whether to lease or acquire properties, to ensure all leases are justified as preferable to direct U.S. government purchase and ownership. All 36 files contained a lease agreement. 
However, only 6 of the 36 files contained all of the information that FAM directs State to retain and that State agreed to provide. These findings are similar to those of State’s IG, which found that the Department’s process to monitor lease information provided by posts was not always effective. The IG found numerous recorded lease terms that did not agree with supporting documentation. We found that 30 of 36 files lacked either documentation of OBO’s approval or a lease-versus-purchase analysis, or both. OBO officials told us they do not conduct a lease-versus-purchase analysis when purchasing is not an option, such as in cases where there is a lack of sufficient funding or the property is in a country that does not allow non-domestic ownership. According to OBO, 6 of the 36 leases in our review were for space in a country that did not allow non-domestic ownership; however, the files did not include documentation that this was the case. We have previously found that without a lease-versus-purchase analysis, decision makers lack financial information on long-term decisions to lease rather than own. Also, we have previously found that when this analysis has been conducted in the federal government, it has identified savings from owning versus leasing. State manages a multibillion-dollar portfolio of buildings, land, and structures at approximately 275 posts throughout the world and has $7.5 billion in projects currently under design and construction. The Department has taken a number of measures to improve management of these properties. These measures include actively identifying unneeded properties, providing posts with rental cost parameters, and other cost-saving initiatives. Despite these steps in managing the real property portfolio, State cannot identify the costs associated with properties identified for disposal because of unclear guidance, which may compromise State’s ability to make fully informed decisions. 
Furthermore, State could not provide some key documents we requested for our review pertaining to acquisitions, disposals, and leases of its properties worldwide. As a result, the Department may not be able to ensure that it is making cost-effective decisions about properties. Improvements in these areas will become more important as State constructs additional NECs and disposes of properties no longer needed when personnel relocate to new facilities. To improve State’s management of real property overseas and enhance State’s accountability and ability to track real-property management decisions, the Secretary of State should take the following four actions: 1. Clarify accounting-code guidance to the posts for tracking expenses related to disposal of unneeded properties. 2. Take steps to ensure that documents related to real property acquisitions are prepared and retained in accordance with FAM and OMB guidance. 3. Take steps to ensure that documents related to real property disposals are prepared and retained in accordance with FAM and OMB guidance. 4. Take steps to ensure that documents related to real property leases are prepared and retained in accordance with FAM and OMB guidance. We provided a draft of this product to the Department of State (State) for review and comment. In written comments, reproduced in appendix II, State concurred with the report’s recommendations. State provided technical clarifications that were incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of State. In addition, the report is available at no charge on the GAO website at www.gao.gov. If you or your staff have any questions about this report, please contact either of us at (202) 512-2834 or wised@gao.gov or (202) 512-8980 or courtsm@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix III. To determine what is known about the Department of State’s (State) real property inventory, we reviewed State’s Federal Real Property Profile (FRPP) data for fiscal years 2008 through 2013—the time period of our review. Additionally, we reviewed State’s real property reports to Congress and compared these with State’s annual FRPP reports to the General Services Administration. We determined that FRPP data were sufficiently reliable for the purpose of reporting approximate numbers of properties in State’s portfolio by interviewing knowledgeable Bureau of Overseas Buildings Operations (OBO) and post officials about data quality assurance procedures and reviewing related documentation, including previous GAO and State Inspector General (IG) reports, data dictionaries and user manuals, and data verification practices. We also reviewed State’s internal report on costs associated with properties identified for disposal to determine costs for unneeded properties that State is selling. To evaluate the reliability of State’s real property database we interviewed OBO and post officials and locally employed staff responsible for entering real property data at the four posts we visited. We also examined OBO’s policies and processes for entering information into its real property database and issues affecting quality control over this information. Although we identified data reliability issues for some facilities in State’s real property database, as those issues generally involved the classification or description of facilities, we determined that the data were sufficiently reliable to describe the approximate number of U.S. properties overseas. 
To determine what factors State considers in managing its real property portfolio and the extent to which it documents its decision-making process, we reviewed sections of the Foreign Affairs Manual (FAM) applicable to property management overseas and documents prepared by State officials in response to our questions. We reviewed State’s data on costs associated with unneeded properties identified for disposal for fiscal years 2008 through 2013. We found the data had limitations, which we discuss in the report. We reviewed documentation that State provided for its real property disposals, acquisitions, and leases from fiscal years 2008 through 2013. We requested files on all 94 property disposals and 72 property acquisitions reported during this period. State provided 20 of the 94 disposal files we requested and 34 of the 72 acquisition files, which included all of the 2013 files. We also requested, and were provided with, all 36 major leases with $500,000 or more in annual rent, as defined in the FAM, that were active from fiscal years 2008 through 2013 and were still listed as active in FRPP at the end of fiscal year 2013. To evaluate the completeness of these files, we compared State’s documentation of real property disposals, acquisitions, and leases to the documentation directives listed in FAM and relevant Office of Management and Budget (OMB) Circulars. We also obtained information on how State reinvested revenue generated from property disposals from fiscal year 2008 through 2013. While our review of these disposals, acquisitions, and leases provides key insights and illustrates recent products of State’s real property policies and guidance, the results of our review should not be used to make generalizations about all State disposals, acquisitions, and leases. 
We interviewed State Department officials at OBO and at four selected posts (Belgrade, Serbia; Helsinki, Finland; London, United Kingdom; and Sarajevo, Bosnia and Herzegovina) to gather information on unneeded properties, disposals, acquisitions, and leases. We selected these posts because they had (1) ongoing or recently completed embassy construction or renovation projects without disposing of properties, (2) properties reported as identified for disposal for multiple years without being disposed of, and (3) a mix of owned and leased properties. We based our site visit selection on these factors in order to observe posts with (1) higher numbers of property disposals than other posts due to recently completed or ongoing construction of new embassies, (2) persistent challenges in selling unneeded properties, and (3) experience managing both owned and leased properties. The results of the case studies provide insight into State’s management and decision-making practices but cannot be generalized for the purposes of this review. We conducted this performance audit from June 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contacts named above, Amelia Shachoy and Hynek Kalkus, Assistant Directors; Joshua Akery, George Depaoli, Colin Fallon, Hannah Laufe, Grace Lui, Josh Ormond, Nitin Rao, Kelly Rubin, Ozzy Trevino, and Crystal Wesco made key contributions to this report.
The Department of State (State) holds or leases about 70 million square feet of real estate at about 275 posts worldwide and has the authority to construct, acquire, manage, and dispose of real property abroad. GAO was asked to review State's management of overseas real property. This report examines (1) what is known about State's overseas real property inventory, and (2) what factors State considers in managing its overseas real property portfolio and to what extent it documents its decision-making process pertaining to real property. GAO requested 202 files for all acquisitions, disposals, and major leases pertaining to State's management of its real property abroad for fiscal years 2008 through 2013. In addition, GAO interviewed State officials in headquarters and at four posts abroad, selected because they had (1) ongoing or recently completed embassy construction or renovation projects without property disposals, (2) properties reported as identified for disposal for multiple years without being disposed of, and (3) both owned and leased properties. The results of the four case studies cannot be generalized for the purpose of this review. GAO's analysis of the overseas real property portfolio of the Department of State (State) indicates that the overall inventory has increased in recent years. State reported that its leased properties, which make up about 75 percent of its inventory, increased from approximately 12,000 to 14,000 between 2008 and 2013. State's number of federally owned properties also increased, but comparing the total number of owned properties from year to year can be misleading because State's method of counting these properties has been evolving over the past several years. Specifically, according to State officials, they have been revising their method for counting properties to produce more precise counts and to meet reporting guidance from the Office of Management and Budget (OMB), among others. 
For example, State began separately counting structural assets previously included as part of another building's assets, such as guard booths or perimeter walls, and consequently reported approximately 650 more structural assets in fiscal year 2012 than in 2011, and approximately 900 more structures in 2013. State officials told GAO that they consider many factors in managing real property; however, GAO found State's available data and documentation on management decisions were limited. State officials said that they work with overseas posts to identify and dispose of unneeded properties, primarily using factors in State's Foreign Affairs Manual (FAM) guidance. Such factors include identifying properties deemed obsolete or with excessive maintenance costs. State collects data on costs associated with unneeded properties identified for disposal, relying on posts to charge all such costs to a specific accounting code. The four posts GAO visited did not use this code consistently. For example, officials at one post charged some disposal costs to a routine maintenance account. Officials at the other posts with properties for sale used the code to charge all related disposal costs. GAO also found that other posts with unneeded properties identified for disposal in fiscal year 2013 had not charged expenses to this account. The guidance provided in the FAM for using this code does not detail the types of costs that can be charged. This omission raises questions about the extent to which posts use the code as State intends and the extent to which State receives accurate and comprehensive cost information about its unneeded properties. Without accurate data on unneeded property, State may not have the information it needs to make decisions about property offers when attempting to maximize revenue from property sales. 
Also, posts may not have sufficient funding for routine property maintenance if they use funds designated for this type of maintenance on unneeded property. GAO requested to review 202 files from fiscal years 2008 through 2013 on acquisitions (72), disposals (94), and leases (36), but was provided 90, as State told GAO that these files were not centrally located and were too time-consuming to find and provide during the time frame of GAO's review. State provided most of what it considers “core” documents for the acquisition and disposal files, but these documents do not constitute all of the documentation listed in the FAM and OMB guidance. In addition, although State provided all 36 of the requested lease files, some documentation that State agreed to provide was missing from 30 of the 36 files. Without the missing files and documentation, it is unclear how efficiently and effectively State is managing its overseas real property. GAO recommends that the Secretary of State (1) clarify accounting code guidance for tracking expenses related to disposal of unneeded properties, and (2) take steps to collect and retain documents related to real property purchases, disposals, and leases in accordance with the FAM and OMB's guidance. State concurred with GAO's recommendations.
Our financial audits have found that IRS’ financial statement amounts for revenue, in total and by type of tax, were not derived from its revenue general ledger accounting system or its master files of detailed individual taxpayer records. The revenue accounting system does not contain detailed information by type of tax, such as individual income tax or corporate tax, and the master file cannot summarize the taxpayer information needed to support the amounts identified in the system. As a result, IRS relied, without much success, on alternative sources, such as Treasury schedules, to obtain the summary totals by type of tax needed for its financial statement presentation. To substantiate the Treasury figures, our audits attempted to reconcile IRS’ master files—the only detailed records available of tax revenue collected—with Treasury records. For fiscal year 1994, for example, we found that IRS’ reported total of $1.3 trillion for revenue collections, taken from Treasury schedules, was $10.4 billion more than what was recorded in IRS’ master files. Because IRS was unable to satisfactorily explain this difference, and we could not determine the reasons for it, the full magnitude of the discrepancy remains uncertain. In addition to the difference in total revenues collected, we also found large discrepancies between information in IRS’ master files and the Treasury data used for the various types of taxes reported in IRS’ financial statements. For fiscal year 1994, for example, some of the larger reported amounts in IRS’ financial statements for which IRS had insufficient support were $615 billion in individual taxes collected—this amount was $10.8 billion more than what was recorded in IRS’ master files; $433 billion in social insurance taxes collected—this amount was $5 billion less than what was recorded in IRS’ master files; and $148 billion in corporate income taxes—this amount was $6.6 billion more than what was recorded in IRS’ master files. 
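The comparison of financial statement amounts against master-file totals is, at bottom, a reconciliation by type of tax. The sketch below illustrates that kind of check; the "reported" figures restate the fiscal year 1994 amounts cited in the text, while the "master file" figures are back-computed from the stated differences purely for illustration and are not actual IRS data:

```python
# Hypothetical reconciliation of reported revenue totals against totals
# derived from detailed records, by type of tax. Amounts are in $billions
# and loosely follow the fiscal year 1994 figures cited in the text.

treasury_schedules = {          # totals used in the financial statements
    "individual income": 615.0,
    "social insurance":  433.0,
    "corporate income":  148.0,
}
master_files = {                # totals from detailed taxpayer records
    "individual income": 604.2,  # 615.0 reported less the $10.8B difference
    "social insurance":  438.0,  # 433.0 reported plus the $5.0B difference
    "corporate income":  141.4,  # 148.0 reported less the $6.6B difference
}

def reconcile(reported, detailed, tolerance=0.1):
    """Return tax types whose reported and detailed totals differ by more
    than the tolerance, mapped to the signed size of each difference."""
    return {tax: round(reported[tax] - detailed[tax], 1)
            for tax in reported
            if abs(reported[tax] - detailed[tax]) > tolerance}

discrepancies = reconcile(treasury_schedules, master_files)
for tax, diff in discrepancies.items():
    print(f"{tax}: reported differs from detailed records by {diff:+.1f} billion")
```

The point of the sketch is the shape of the check, not the numbers: a reconciliation like this only works if both sides can produce totals by type of tax, which is exactly what IRS' systems could not reliably do.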
Thus, IRS did not know, and we could not determine, if the reported amounts were correct. These discrepancies further reduce our confidence in the accuracy of the amount of total revenues collected. Contributing to these discrepancies is a fundamental problem in the way tax payments are reported to IRS. IRS’ tax receipt, return, and refund processes are highlighted in figure 1. About 80 percent, or about $1.1 trillion, of total tax payments are made by businesses and typically include (1) taxes withheld from employees’ checks for income taxes, (2) Federal Insurance Contributions Act (FICA) collections, and (3) the employer’s matching share of FICA. IRS requires business taxpayers to make tax payments using federal tax deposit coupons, shown in figure 2. The payment coupons identify the type of tax return to which they relate, such as a Form 941, Employer’s Quarterly Federal Tax Return, but do not specifically identify either the type of taxes being paid or the individuals whose tax withholdings are being paid. For example, the payment coupon in figure 2 reports that the deposit relates to a Form 941 return, which can cover payments for employees’ tax withholding, FICA taxes, and employers’ FICA taxes. Since only the total dollars being deposited are indicated on the form, IRS knows that the entire amount relates to a Form 941 return but does not know how much of the deposit relates to the different kinds of taxes covered by that type of return. Consequently, at the time tax payments are made, IRS is not provided information on the ultimate recipient of the taxes collected. Furthermore, the type of tax being collected is not distinguished early in the collection stream. This creates a massive reconciliation process involving billions of transactions and subsequent tax return filings. For example, when an individual files a tax return, IRS initially accepts amounts reported as a legitimate record of a taxpayer’s income and taxes withheld. 
For IRS’ purposes, these amounts represent taxes paid because they cannot be readily verified against the taxes reported by an individual’s employer as having been paid. At the end of each year, IRS receives information on individual taxpayers’ earnings from the Social Security Administration. IRS compares the information from the Social Security Administration to the amounts reported by taxpayers on their tax returns. However, this matching process can take 2-1/2 years or more to complete, making IRS’ efforts to identify noncompliant taxpayers extremely slow and significantly hindering IRS’ ability to collect amounts subsequently identified as owed from false or incorrectly reported amounts. Consistent with this process, IRS’ system is designed to identify only total receipts by type of return and not the entity that is to receive the funds collected, such as the General Fund at Treasury for employee income tax withholdings or the Social Security Trust Fund for FICA. Ideally, the system should contain summarized information on detailed taxpayer accounts, and such amounts should be readily and routinely reconciled to the detailed taxpayer records in IRS’ master files. Also, IRS has not yet established an adequate procedure to reconcile the revenue data that the system does capture with data recorded and reported by Treasury. Further, documentation describing what IRS’ financial management system is programmed to do is neither comprehensive nor up to date, which means that IRS does not have a complete picture of the financial system’s operations—a prerequisite to fixing the problems. Beginning with our audit of IRS’ fiscal year 1992 financial statements, we have made recommendations to correct weaknesses involving IRS’ revenue accounting system and processes. 
They include addressing limitations in the information submitted to IRS with tax payments by requiring that payments identify the type of taxes being paid; implementing procedures to complete reconciliations of revenue and refund amounts with amounts reported by Treasury; and documenting IRS’ financial management system to identify and correct the limitations and weaknesses that hamper its ability to substantiate the revenue and refund amounts reported on its financial statements. With a contractor’s assistance, an IRS task force attempted to document IRS’ financial management system transaction flows. Because the contractor is not expected to complete this work until July 1996, it was not done in time to be useful in our fiscal year 1995 audit. Federal accounting standards provide new criteria for determining revenue, effective for fiscal year 1998. These standards will require IRS to account for the source and disposition of all taxes in a manner that enables accurate reporting of cash collections and accounts receivable and appropriate transfers of revenue to the various trust funds and the general fund. To achieve this, IRS’ accounting system will need to capture the flow of all revenue-related transactions from assessment to ultimate collection and disposition. We could not verify the validity of either the $113 billion of accounts receivable or the $46 billion of collectible accounts receivable that IRS reported on its fiscal year 1995 financial statements. Consequently, these financial statements cannot be relied on to accurately disclose the amount of taxes owed to the government or the portion of that amount which is collectible. This is not a new problem; we first identified IRS’ accounts receivable accounting and reporting problems in fiscal year 1992 and again in each subsequent fiscal year’s financial audit. 
In our audit of IRS’ fiscal year 1992 financial statements, after performing a detailed analysis of IRS’ receivables as of June 30, 1991, we estimated that only $65 billion of about $105 billion in gross reported receivables that we reviewed was valid and that only $19 billion of the valid receivables was collectible. At the time, IRS had reported that $66 billion of the $105 billion was collectible. Subsequently, we helped IRS develop a statistical sampling method that, if properly applied, would allow it to reliably estimate and report valid and collectible accounts receivable on its financial statements. We evaluated and tested IRS’ use of the method as part of our succeeding financial audits and found that IRS made errors in carrying out the statistical sampling procedures, which rendered the sampling results unreliable. This year, for the first time, IRS tried, also without success, to specifically identify its accounts receivable. Reliable financial information on these amounts is important to IRS and the Congress for assessing the results of enforcement and collection efforts, measuring performance in meeting IRS’ mission and objectives, and allocating resources and staffing; reviewing the collectibility of accounts, determining trends in accounts receivable balances, and deliberating on the potential for increased collections and related budgetary needs; and assessing the effect of potential collections of accounts receivable in reducing the deficit. The importance of having credible financial information for these purposes is underscored by the magnitude of IRS’ inventory of uncollected assessments and by IRS’ problems in collecting tax receivables, which we have monitored since 1990 as part of our high-risk program. 
IRS’ reported inventory of uncollected assessments, which at September 30, 1995, was $200 billion, is composed of both compliance assessments, which are not yet but may become accounts receivable, and financial receivables, which are valid accounts receivable. In the case of compliance assessments, IRS records an assessment to a taxpayer’s account, but neither the taxpayer nor a court has agreed that the assessment is appropriate. Normally, IRS makes these assessments to encourage compliance with the tax laws. For example, when a taxpayer is identified by an IRS matching program as being delinquent in filing a return, IRS creates an assessment using the single filing status and standard deduction. This action is to encourage the taxpayer to file a tax return in the right amount. The taxpayer has an opportunity to refute an estimated assessment, and often does, because the amount may be overstated or may not apply. On the other hand, financial receivables arise when taxpayers agree to assessments or a court determines that an amount is owed. These receivables may also include cases in which IRS and a taxpayer agree, or a court determines, that the amount of a compliance assessment is due. Financial receivables can include other situations as well, such as when taxpayers file returns but do not pay the full amounts due or they are making payments against amounts due. Figure 3 shows IRS’ reported inventory of uncollected assessments for June 30, 1991, and each fiscal year from 1992 through 1995.
GAO discussed its financial audits of the Internal Revenue Service for fiscal years 1992 through 1995. GAO noted that: (1) IRS relied on alternative sources to obtain revenue totals by type of tax for its financial statements; (2) IRS financial statements include various discrepancies that cannot be explained because of weaknesses in IRS information and collection systems; (3) the validity of IRS accounts receivable and collectible accounts receivable can not be verified; (4) many uncollected compliance assessments and financial receivables are uncollectible; (5) IRS has been unable to accurately account and report its total inventory of accounts receivable; (6) while IRS has made some improvements in accounting and reporting on its operating costs, significant problems remain; (7) IRS can not confirm when and if goods and services were received; and (8) the accuracy of the IRS Fund Balance with Treasury accounts cannot be verified.
The Great Lakes Basin covers approximately 300,000 square miles, encompassing Michigan and parts of Illinois, Indiana, Minnesota, New York, Ohio, Pennsylvania, Wisconsin, and the Canadian province of Ontario (see fig. 1), as well as lands that are home to more than 40 Native American tribes. It includes the five Great Lakes and a large land area that extends beyond the Great Lakes, including their watersheds, tributaries, and connecting channels. The Great Lakes contain nearly 90 percent of the surface freshwater in North America and 20 percent of the surface freshwater in the world. The Great Lakes provide drinking water; recreation opportunities, such as swimming, fishing, and boating; and economic benefits, including tourism, agriculture, and shipping, for an estimated 40 million people. In addition, nearly 7 percent of U.S. agricultural production comes from the basin, according to EPA. Numerous environmental stresses threaten the health of the Great Lakes and adjacent land within the Great Lakes Basin. The Great Lakes has long been an area that attracted development, population, industry, and commerce, starting with the canals that joined the lakes to the eastern seaboard and allowed goods to be trafficked and traded between the Midwest and eastern states. Various environmental quality issues, particularly water quality pollution and contaminated sediments, have resulted from mining, timber harvest, steel production, chemical production, and other industrial activities that developed around the Great Lakes. Currently, all of the Great Lakes and the majority of the water bodies in the region are under fish consumption advisories, issued by state and provincial health agencies, due to mercury pollution primarily from coal-fired power plants. 
In addition, the fertile soil in the surrounding states makes them highly productive agricultural areas, and this has resulted in large amounts of nutrients such as phosphorus and nitrogen— as well as sediment, pesticides, and other chemicals—running off into the Great Lakes. Moreover, large population centers on both sides of the U.S. and Canadian border use the Great Lakes to discharge wastewater from treatment plants, which also introduces nutrients into the Great Lakes. Even with progress in reducing the amount of phosphorus in the lakes through mitigation techniques used in the 1970s, harmful algal blooms are once again threatening the Great Lakes Basin. These are a result of increases in phosphorus and nitrogen entering the lakes from nonpoint sources of runoff from urban and rural areas. The United States has long recognized the threats facing the Great Lakes and has developed agreements and programs to fund and support restoration actions, including the following: In 1972, the United States and Canada agreed to take action by signing the Great Lakes Water Quality Agreement to restore, protect, and enhance the water quality of the Great Lakes to promote the ecological health of the Great Lakes Basin. The countries signed another Great Lakes Water Quality Agreement in 1978, which was amended several times. For example, most recently, in 2012, the nations added provisions to the agreement to address the effects of climate change, among other things. In 1987, an amendment to the Great Lakes Water Quality Agreement resulted in the United States and Canada formally identifying a total of 43 severely degraded locations in the Great Lakes Basin as specific Areas of Concern, 31 of which are located entirely or partially in the United States. 
These areas are defined as “geographic areas where a change in the chemical, physical, or biological integrity of the area is sufficient to cause restrictions on fish and wildlife or drinking water consumption, or the loss of fish and wildlife habitat, among other conditions, or impair the area’s ability to support aquatic life.” The 1987 amendment also required the nations to develop and implement remedial action plans for the Areas of Concern. In 2002, the Great Lakes Legacy Act authorized EPA to carry out sediment remediation projects in the 31 Areas of Concern located entirely or partially in the United States, among other things. For fiscal years 2004 through 2009, EPA’s budget authority totaled $162 million for work under this act, according to an OMB report. Of the 12 Areas of Concern located entirely in Canada, 3 have been delisted. Areas of Concern had been completed, as of October 2014, but formal delisting had not yet occurred, according to EPA. The United States also recognized the growing pressures on the fish and wildlife resources of the Great Lakes Basin and developed plans to address these. For example, federal and state agencies became aware of the growing threat of invasive species, such as the sea lamprey, which is a parasite that can each kill up to 40 pounds of fish in its lifetime and was a major cause of the collapse of lake trout, whitefish, and chub populations in the Great Lakes during the 1940s and 1950s. Again, the United States took a series of actions as follows: The Great Lakes Fish and Wildlife Restoration Act of 1990 directed the Fish and Wildlife Service to conduct a comprehensive study of the status of, and the assessment, management, and restoration needs of, the Great Lakes Basin’s fishery resources and to develop proposals for implementing the study’s recommendations. 
The Nonindigenous Aquatic Nuisance Prevention and Control Act of 1990 established the Aquatic Nuisance Species Task Force and required it to develop and implement a program for waters of the United States to prevent introduction and dispersal of aquatic nuisance species; to monitor, control, and study such species; and to disseminate related information. The act also directed the Great Lakes Commission to establish the Great Lakes Panel on Aquatic Nuisance Species and directed the panel to identify Great Lakes aquatic nuisance species priorities and coordinate, where possible, aquatic invasive species program activities in the region that are not conducted under the act, among other things. Members of the panel, which meets twice a year, include U.S. and Canadian federal agencies, the eight Great Lakes states and the provinces of Ontario and Québec, local communities, and tribal authorities. In 2009, the President created the Asian Carp Regional Coordinating Committee to coordinate efforts, including local, state, federal, and international efforts, to prevent Asian carp from spreading and becoming established. The term Asian carp refers collectively to four species of carp—including bighead and silver carp—that are native to Asia and were first introduced into the United States in 1963. Their rapid expansion and population increase can decrease populations of native aquatic species, in part by consuming vast areas of aquatic plants that are important as food and spawning and nursery habitats. Efforts to prevent Asian carp from entering the Great Lakes include the capture and removal of these fish from nearby waterways (see fig. 2). Since 2010, the committee has issued an annual Asian Carp Control Strategy Framework that outlines efforts to support activities that will directly prevent the introduction and establishment of Asian carp populations in the Great Lakes.most recent framework, for 2014, in June 2014. 
GLRI is implemented through a number of projects, large and small, carried out by the Task Force agencies or recipients of GLRI funds. One way that the Task Force agencies conduct GLRI work is to use financial agreements with nonfederal entities, such as grants and cooperative agreements, that provide funds to conduct specific projects. Grants and cooperative agreements are to be used when the principal purpose of a transaction is to accomplish a public purpose or action authorized by federal statute.using agency employees to carry out projects—which we refer to as Another way that the agencies conduct GLRI work is by agency-conducted work—or contracting with nonfederal entities to carry out projects. Contracts are to be used when the principal purpose is to purchase property or services for the direct benefit or use of the federal government. OMB is responsible for developing governmentwide guidance for the management of grants and cooperative agreements. Until December 2013, OMB provided guidance in the form of circulars for specific grants management areas to different types of grantees. In December 2013, OMB consolidated its grants management circulars into a single uniform guidance document. Requirements for contracts are found in the Federal Acquisition Regulation (FAR). Among other things, OMB’s circulars direct federal agencies to require progress and financial reports from academic institutions, nonprofit organizations, and state, local, and tribal entities that receive grants or are parties to cooperative agreements. For contracts, agencies can require such reports from contractors. We provided a draft of this report to EPA in May 2015. In response, EPA officials informed us that the agency had replaced GLAS with the Environmental Accomplishments in the Great Lakes (EAGL) information system. Because EPA did not alert us to this new system until June 2015, we could not include a review of EAGL in this report. 
fields and instructions on how to enter project data into GLAS. GLAS was not a financial management system, and the Task Force agencies used their own financial management systems to track funding. In our September 2013 report, we conducted a survey of nonfederal recipients of GLRI funding and found that several factors outside the scope of the Action Plan can limit GLRI progress. These factors include inadequate infrastructure for wastewater or storm water treatment and the effects of climate change. We also found that EPA and the Task Force agencies had not fully established a plan to guide an adaptive management process for the GLRI that could allow them to assess the We effectiveness of GLRI actions and, if needed, adjust their efforts.recommended, among other things, that the EPA Administrator, in coordination with the Task Force, address how factors outside the scope of the Action Plan that may limit progress, such as the effects of climate change, may affect GLRI efforts to restore the Great Lakes, and establish an adaptive management plan. EPA generally agreed with our conclusions and recommendations. In September 2014, EPA and the Task Force issued the 2015-2019 Action Plan, which includes ensuring climate resiliency of GLRI-funded projects as an objective in one of its focus areas. As of March 2015, EPA and the Task Force were in the process of revising a draft of an adaptive management framework for the 2015-2019 Action Plan. In fiscal years 2010 through 2014, $1.68 billion of federal funds was made available for the GLRI, and as of January 2015, EPA had allocated nearly all of the $1.68 billion, and the Task Force agencies had expended $1.15 billion on 2,123 GLRI projects. 
The five agencies we reviewed in greater detail had expended $993 million of the $1.43 billion allocated to them in fiscal years 2010 through 2014 on 1,696 GLRI projects, as of January 2015, and conducted those projects through a combination of work done by agency staff and a variety of GLRI funding recipients. Of the $1.68 billion made available for the GLRI in fiscal years 2010 through 2014, EPA had allocated $1.66 billion as of January 2015. EPA conducts and funds GLRI work itself and allocates GLRI funds to the other Task Force agencies responsible for carrying out GLRI work. As of January 2015, the Task Force agencies had obligated $1.61 billion and expended $1.15 billion, or about 68 percent of the funds made available for the GLRI in fiscal years 2010 through 2014, on 2,123 projects. Figure 3 shows the funds made available for the GLRI in fiscal years 2010 through 2014 and the extent to which they had been allocated, obligated, and expended by all Task Force agencies as of January 2015. The Task Force agencies have not expended all of the funds made available for the GLRI for several reasons, chief among them being that many projects take several years to complete. Also, GLRI funds are available for obligation for the fiscal year the appropriation was made, and the successive fiscal year. After these 2 fiscal years of availability, GLRI funds can be used for 7 additional years to liquidate and adjust those obligations. In addition, final payments are made from the agencies to recipients after projects are completed. Furthermore, as we found in September 2013, weather events, among other things, caused some GLRI projects to be completed later than planned. In addition to the GLRI, federal agencies have expended other funds on Great Lakes restoration activities, such as reducing atmospheric deposition and controlling the generation, transportation, storage, and disposal of hazardous wastes. 
GLRI funds allocated, obligated, and expended, data on other funds received, obligated, and expended by federal agencies for Great Lakes restoration activities are not easily available for comparison. Specifically, OMB’s budget crosscut reports have not identified federal agencies’ obligations and expenditures for Great Lakes restoration activities, as required by several appropriations laws since fiscal year 2008. Most recently, the Consolidated Appropriations Act for Fiscal Year 2014 required OMB to identify, among other things, (1) all funds received and obligated by all federal agencies for Great Lakes restoration activities during the current and previous fiscal years and (2) all federal government expenditures in each of the 5 prior fiscal years for these activities. Instead, the reports presented information on each agency’s budget authority for these activities. According to OMB staff, the budget crosscut reports did not report these obligations and expenditures because providing that information is labor-intensive and time-consuming. These staff also said that the information would be outdated and of little value by the time it would be released. Atmospheric deposition is a process that transfers pollutants from the air to the earth’s surface and can significantly impair water quality in the nation’s rivers, lakes, bays, and estuaries, and harm human health and aquatic ecosystems. Hazardous waste is most often a by-product of manufacturing and can threaten human and ecosystem health when released into the air, water, or land. congressional decision makers. Without this information in OMB’s budget crosscut reports, which is required to be included by law, it is not possible for decision makers to view GLRI funding in the context of the funding of overall Great Lakes restoration activities, because information on such activities would only be available from each agency, making less information readily available for congressional oversight. 
Of the $1.66 billion EPA allocated to all Task Force agencies, as of January 2015, the five Task Force agencies we reviewed were allocated $1.43 billion. These agencies had obligated $1.38 billion and expended $993 million, or about 69 percent of their allocations (see fig. 4), on 1,696 GLRI projects. Using information from EPA’s GLAS database as of July 2014 for GLRI funds made available in fiscal years 2010 through 2013, we found that the five Task Force agencies we reviewed funded a total of 1,558 GLRI projects using GLRI funds as of July 2014. As shown in table 2, EPA and the Fish and Wildlife Service funded the most projects as of July 2014. To use GLRI funds on restoration activities, the Task Force agencies conduct the work themselves or enter into financial agreements with other entities to conduct the work, primarily through grants, cooperative agreements, or contracts. The different types of financial agreements have different purposes. For example, EPA officials noted that the distinguishing factor between a grant and a cooperative agreement is the degree of federal involvement in project activities. A single GLRI project in GLAS can involve agency-conducted work, one or more of the types of financial agreements, or a combination of these. Using data we obtained from the five agencies reviewed, we found that the extent to which the agencies used each type of financial agreement in obligating their GLRI funds made available in fiscal years 2010 through 2013 varies by agency (see fig. 5). For example, the Corps primarily used contracts, and NOAA primarily used grants and cooperative agreements. NRCS used financial assistance contracts with agricultural producers to carry out conservation practices on their land. GLRI projects in GLAS can have multiple recipients that received GLRI funds directly from the Task Force agencies. 
These recipients include federal entities; state, local, or tribal entities; nongovernmental organizations; academic institutions; and others, such as agricultural producers and private landowners. In addition, a recipient may award a portion of its funds to subrecipients, such as universities, to help carry out the work, which means that a single GLRI project may also have multiple subrecipients. Figure 6 shows an example of the distribution of funds for a 2011 GLRI project with multiple funding recipients and subrecipients. Table 3 shows the number of GLRI projects funded with GLRI funds made available in fiscal years 2010 through 2013 by the five agencies by type of recipient as of July 2014. The type of GLRI recipients vary depending on the agency and financial agreements involved. For example, NOAA has entered into agreements with all of these recipient types, with the exception of private landowners and agricultural producers, and the Corps has conducted all of its work itself or through contracts. The Task Force process for identifying GLRI work and funding generally includes four steps and has evolved from an agency-by-agency process to one that emphasizes interagency discussion. This evolution began in fiscal year 2012 when the Task Force created subgroups to identify and fund work to address three priority issues: (1) cleaning up and delisting Areas of Concern, (2) preventing and controlling invasive species, and (3) reducing phosphorus runoff that contributes to harmful algal blooms. For fiscal year 2015, the Task Force created additional subgroups to discuss and agree on work for other areas. EPA officials told us that funding work for the three priority issues has led to some accelerated restoration results. EPA officials described four steps that Task Force agencies generally followed to identify GLRI work and funding, and the five agencies we reviewed followed these steps. 
The steps are: (1) agency identification of GLRI work; (2) Task Force agreement on scope and funding for agencies’ work; (3) solicitation of proposals for projects designed to carry out agencies’ GLRI work, if the work was to be conducted by entities other than the agencies; and (4) selection of projects. EPA officials told us that the first step generally occurred 2 years before the fiscal year in which the work was to be carried out, in order to coincide with the federal budget cycle. During that step, the officials told us that the agencies each did an internal analysis to identify GLRI work that they wanted to conduct in that fiscal year. For example, FWS officials told us that the agency’s regional officials coordinated to identify new work that the agency planned to do in order to achieve its goals and then compared this work with 2010-2014 Action Plan goals to identify those projects that The Corps’ approach to this step was different; also met the goals.according to Corps officials, they selected projects that were already planned and ready to be conducted, and that were compatible with the 2010-2014 Action Plan. At this point, agency officials also identified the type of financial agreements they were likely to use to conduct the work or whether the agency would conduct the work itself. For the second step, the five agencies we reviewed held discussions with the Task Force and agreed on the work that would be done in a given fiscal year, as well as the amount of GLRI funds that would be needed to conduct that work. In general, once the agencies made a final determination of the work they would do in a fiscal year, and the GLRI funds that would be made available, each agency entered into an interagency agreement with EPA to transfer GLRI funds from EPA to the appropriate agency. 
The interagency agreements we reviewed included the following two parts: a form that identified the amount to be transferred from EPA to the agency that was responsible for the work, signed by both agencies;and a scope-of-work organized into discrete topics called templates that typically included a description of the work, the GLRI Action Plan goals, objectives, or measures of progress that the work would achieve, and the amount of GLRI funds to be used. EPA officials told us that the Task Force agencies were expected to spend their funds as detailed in their interagency agreement, but they could amend it with EPA approval to, for example, increase the amount of funds to be transferred to an agency or revise the scope of work. GLRI Templates Great Lakes Restoration Initiative (GLRI) templates address Action Plan focus areas, and can describe work that would be conducted through multiple projects, or through a specific, individual project. An example of a template that describes work that would be conducted through multiple projects is a Natural Resources Conservation Service (NRCS) template that addresses the nearshore health and nonpoint source pollution focus area. According to the template, NRCS would provide agricultural producers with GLRI funds and technical assistance to implement conservation practices to contribute to the 2010-2014 Action Plan goal of significantly reducing soil erosion and sediment, nutrients, and pollutants flowing into tributaries. An example of a project-specific template is a U.S. Army Corps of Engineers template to complete the design, and initiate construction, of a facility to manage dredged sediments in Green Bay Harbor, Wisconsin. The project is intended to hold 2.35 million cubic yards of sediments, and restore a chain of islands and more than 1,200 acres of coastal wetland habitat. applications would use to rank applications and select projects.officials told us that applicants may be asked to provide funds to the project. 
The fourth step in identifying GLRI work and funding was the selection of specific projects. Generally, officials from the selected agencies described similar processes for evaluating project proposals that were submitted in response to requests for applications. Specifically, they said that agency officials with the appropriate expertise reviewed and ranked the submitted proposals against information in the request for applications and selected the best scoring projects for funding. At the Corps and NOAA, officials said they evaluated contract bids or proposals, and awarded the contract to the vendor with a bid or proposal representing the best value to the government. Of the 19 projects we reviewed for which funds were made available for the GLRI in fiscal years 2010 through 2012 and that addressed each of the five focus areas in the 2010-2014 Action Plan, 11 were executed through grants, 2 were executed through cooperative agreements, 3 were executed by a Task Force agency, 2 were conducted through contracts, and 1 was executed through a financial assistance contract. One project addressed the toxic substances and Areas of Concern focus area; 5 addressed the invasive species focus area; 3 addressed the nearshore health and nonpoint source pollution focus area; 5 addressed the habitat and wildlife protection and restoration focus area; and 5 addressed the accountability, education, monitoring, evaluation, communication, and partnerships focus area. In addition, the recipients conducting the 19 projects included 8 federal entities; 4 state, local, or tribal entities; 4 academic institutions; and 3 nongovernmental organizations. We found that the solicitations for 11 of the 19 projects reflected the descriptions of work in the related templates. 
The 8 remaining projects were not solicited because 4 were conducted by the agency, 2 were not competitively awarded, 1 project had been ongoing since before the GLRI, and the recipient was identified in the interagency agreement, and 1 project was conducted by a recipient that had been selected prior to the GLRI as one of a few with the specific skills required for the project. Appendix II shows the relevant templates and solicitations for each of the 19 projects, as well as information from agency officials about why each of the projects was selected. The process for identifying each agency’s GLRI work and share of GLRI funding has evolved over the life of the GLRI. According to EPA officials, for fiscal years 2010 and 2011, the Task Force determined the work an agency would do on an agency-by-agency basis. Beginning with fiscal year 2012, the process began emphasizing interagency discussion as the Task Force created three subgroups with federal agency members, one for each of three priority issues. The three priority issues, which aligned with three of the five focus areas in the 2010-2014 Action Plan, were (1) cleaning up and delisting Areas of Concern located entirely or partially in the United States, (2) preventing and controlling invasive species, and (3) For reducing phosphorus runoff that contributes to harmful algal blooms.fiscal year 2015, EPA officials said that the Task Force agencies had begun creating additional subgroups to discuss and agree on scope and funding for agencies’ GLRI work. For fiscal years 2010 and 2011, the Task Force and the five agencies agreed on work that each agency would do on an agency-by-agency basis. Officials from the agencies said that they identified work from their existing plans and interacted with the Task Force to determine the work the agencies would do and the funds the agencies’ should receive. 
Because the program began in fiscal year 2010, this process did not take place 2 years in advance, as it would in subsequent years. EPA officials told us that in 2010 the agencies also began agreeing on work for fiscal year 2011. After Congress made funds available for the GLRI for fiscal year 2010, and again after fiscal year 2011, the Task Force revisited the initial agreements made with each agency to finalize the funding amounts. In agreeing on GLRI work and funding for fiscal years 2012 through 2014, the Task Force created a subgroup for each of the three priority issues and set aside a total of about $180 million to pay for work to address these issues. The Task Force created subgroups staffed by officials from relevant Task Force agencies to discuss and agree on the scope and funding for agencies’ work to address the three priority issues. Specifically, officials from EPA, FWS, NOAA, the Corps, and the U.S. Geological Survey participated in the cleaning up and delisting of Areas of Concern and the invasive species prevention subgroups. Officials from EPA, NRCS, NOAA, the Corps, and the U.S. Geological Survey participated in the phosphorous reduction priority issue subgroup. On Concern to be targeted for accelerated cleanup in fiscal year 2012: the Ashtabula River Area of Concern in Ohio, the River Raisin Area of Concern in Michigan, the Sheboygan River Area of Concern in Wisconsin, and the White Lake Area of Concern in Michigan. At the same time, the subgroup identified additional Areas of Concern to be The subgroup addressed in future years using the same approach.determined that nearly $22 million should be set aside for this priority issue in fiscal year 2012 and increased that amount to about $31 million for fiscal years 2013 and 2014. 
Invasive species prevention subgroup: Building on work done by the Asian Carp Regional Coordinating Committee that began around the same time as the GLRI, the subgroup originally focused most of its efforts on identifying projects to prevent Asian carp from getting into and becoming established in the Great Lakes. These projects included developing early detection and monitoring, and tools and technology to discover whether Asian carp were already present in the Great Lakes Basin. The subgroup agreed to adopt the amount of funds, $19.5 million, in fiscal year 2012, based on estimates made by the Asian Carp Regional Coordinating Committee. In fiscal year 2013, the Coordinating Committee reduced the amount it estimated was needed for invasive species work in the Great Lakes Basin to $16 million. The subgroup agreed to continue funding this priority issue at $19.5 million in fiscal years 2013 and 2014, but it divided the funds into $16 million for Asian carp work and $3.5 million for other invasive species, such as phragmites and feral hogs. The subgroup used the Asian Carp Control Strategy Framework to guide the amount of GLRI funds that should be provided to each of the Task Force agencies with responsibility for conducting work to address this priority issue. Phragmites australis, or common reed, is a perennial grass now common in North American wetlands. Invasive phragmites create tall, dense stands that degrade wetlands and coastal areas by crowding out native plants and animals, blocking shoreline views, and reducing access for swimming, fishing, and hunting. Feral hogs are domestic hogs that have either escaped or been released, and they can be found in 39 states including the Great Lakes region. They cause damage to crops and habitat and can cause erosion by digging for food. They also carry diseases that threaten humans and animals. In 2014, the U.S. Department of Agriculture estimated that feral hogs caused $1.5 billion in annual damage and control costs. 
Phosphorus reduction subgroup: Using available models and data to identify geographic areas that were contributing more nutrients to the Great Lakes than others, the subgroup determined that priority work should be focused on three watersheds where algal blooms had occurred. The three watersheds were the Lower Fox River in Wisconsin; the Maumee River watershed in Ohio, Michigan, and Indiana; and the Saginaw River in Michigan. The subgroup agreed to set aside $11 million for this priority issue for fiscal year 2012 and to increase that amount to $13.1 million for fiscal year 2013 and to $14.4 million for fiscal year 2014. EPA provided the majority of funds for this priority issue to NRCS because it is the federal agency that works with agricultural producers to implement conservation practices to reduce nutrients in runoff, and Task Force agency officials determined NRCS was best suited to address nutrient reduction. EPA provided the remaining funds to the U.S. Geological Survey for monitoring projects because of its experience in monitoring water supply and water quality. To agree on GLRI work to be conducted in fiscal year 2015 and future fiscal years, EPA officials told us that the Task Force began creating additional subgroups through which Task Force agency officials would work together to identify each agency's GLRI work and share of GLRI funding in all five of the focus areas in the 2015-2019 Action Plan, not just the three priority issues. According to EPA officials, the use of subgroups to meet and agree on work and funding created a process for conducting GLRI work that all Task Force agencies agreed needed to be done, rather than each agency identifying its own GLRI work. According to EPA officials, for fiscal year 2015, the new subgroups developed strategies for dealing with issues and then identified the work proposed by agencies that helped to achieve the overall strategies.
For future fiscal years, EPA officials said that the subgroups would use the 2015-2019 Action Plan. According to EPA officials, the focus on priority issues for fiscal years 2012 through 2014 has accelerated restoration results for one of three issues. Specifically, two of the Areas of Concern targeted for accelerated cleanup by the relevant subgroup were delisted in 2014. EPA announced in October 2014 that the White Lake and Deer Lake Areas of Concern had been delisted—both had been identified by the Areas of Concern subgroup for accelerated cleanup with priority issue funds—and EPA officials told us that they expect cleanup work to be completed at four other Areas of Concern in fiscal year 2015 as a result of receiving priority issues funds. Cleanup work included removing contaminated sediment and diverting water from an underground mine. In the 25 years before the three priority issues were identified, only one Area of Concern located entirely in the United States had been delisted. EPA officials said that identifying and funding the three priority issues for fiscal years 2012 through 2014 has also allowed for continued success in invasive species prevention and resulted in some progress in reducing phosphorus runoff that contributes to harmful algal blooms. However, restoration results in those priority issues are less clear than in the Areas of Concern priority issue, in large part because the factors contributing to those priority issues persist and are likely to continue into the future. For example, dams, canals, and other structures that were created to support navigation and power production in the Great Lakes Basin also created channels that connect the Great Lakes and Mississippi River Basins. These channels are of serious concern as a potential means for Asian carp or other invasive species to enter the Great Lakes. 
EPA funded work on priority issues from the amounts made available for the GLRI in fiscal years 2012 through 2014, shifting funds from other GLRI work to the priority issues. EPA officials described the funds set aside for the priority issues as a realignment of GLRI funds; that is, the funds used for the priority issues were taken from the existing funds that had been made available for the GLRI. Overall, the Task Force set aside a total of $180 million for the priority issues for this period: $52.2 million of the available GLRI amounts for all priority issues in fiscal year 2012, $63.4 million in fiscal year 2013, and $64.7 million in fiscal year 2014. EPA officials told us that money designated for one priority issue would not be spent on a different priority issue or on other GLRI projects. EPA officials told us that the Task Force did not set aside all of the funds made available for the GLRI in fiscal years 2012 through 2014 for the priority issues for two key reasons. First, they said there is a limit to the amount of work that can be conducted for some restoration efforts. For example, GLRI funds for reducing agricultural runoff can only be given to recipients in the Great Lakes Basin. These recipients are typically landowners, and there is a finite number of landowners in the Great Lakes Basin interested in conducting GLRI work who also have suitable land and ready projects. In addition, EPA officials told us that NRCS is the only Task Force agency equipped to oversee phosphorus reduction work targeted in agricultural areas, and the agency has a fixed number of personnel that it can use to oversee GLRI work. Second, according to these officials, Great Lakes restoration needs to involve topics addressed by the 2010-2014 Action Plan that are not part of the three priority issues, as well as addressing the overall health of the Great Lakes ecosystem.
The Task Force has made some information about GLRI projects, including project activities and results, available to Congress and the public in three accomplishment reports and on the GLRI website. Specifically, the GLRI accomplishment reports contain information on activities and results for some projects. In addition, the individual Task Force agencies collect information on activities and results from recipients, although this information is not collected and reported by EPA. We obtained information on activities and results for the sample of 19 projects we reviewed. While EPA collected project information in GLAS from 2010 through May 2015, some GLAS data were inaccurate, in part because recipients entered information inconsistently due to issues such as inconsistent interpretation of guidance, unclear guidance, or data entry errors. As part of oversight of the GLRI, the Task Force makes some information on projects available for Congress and the public in two ways: annual accomplishment reports and the GLRI website. EPA and the Task Force published two accomplishment reports in 2013 and one in 2014 that provided overviews of progress under the GLRI for fiscal years 2010 through 2012. These reports included summary accomplishment statements for each of the five focus areas from the 2010-2014 Action Plan, as well as specific performance information for many of the 28 measures of progress in the 2010-2014 Action Plan. The accomplishment reports included some information about project activities and results. Specifically, our analysis found that the GLRI accomplishment report for progress in fiscal year 2011 identified 10 GLRI projects, 2 for each of the five focus areas in the 2010-2014 Action Plan, and it included some information about project activities and results for each project.
For example, it noted that the “Milwaukee River (Wisconsin)—restoring fish passage” project removed a dam, opening 14 miles of the river and 13.5 miles of tributaries to allow fish to move more freely, and reconnected the lower reach of the river with 8,300 acres of wetlands, improving water quality. The accomplishment report provided similar information about nine additional projects. The accomplishment reports about GLRI progress in fiscal years 2010 and 2012 also included information about project activities and results, although most were not associated with individual projects. EPA also made some of the GLRI project information that recipients reported in GLAS available on the GLRI website, including a project’s funding agency, title, funding amount and year, recipient identification, focus area, and description. Project information available on the website does not include GLRI project activities and results, although it is not designed to do so. EPA updated the GLRI project information on the website twice a year by asking the other Task Force agencies to update and verify GLAS information about their projects. To compile project information for the website, EPA provided each Task Force agency with a spreadsheet containing certain GLAS data for each of that agency’s projects so that the agency could update and verify that information before it was posted on the website. The information on the website about projects is limited to basic information for the public, according to an EPA official, and does not contain certain information on projects such as activities and results. Each of the five Task Force agencies we reviewed collected information on its projects, including project activities and results, and we reviewed the sample of 19 GLRI projects from the five Task Force agencies to identify information on project activities and results for each of the projects. 
We found that each of the five Task Force agencies collected this and other project information by establishing reporting requirements in grants, cooperative agreements, and contracts for recipients. Specifically, in most cases, EPA, FWS, NOAA, and NRCS required their grant recipients to submit quarterly, semiannual, or annual progress reports, and quarterly or annual financial reports, consistent with the OMB circulars in effect at the time of the agreements. In addition, the Task Force agencies that used contracts—the Corps and NOAA—required their contractors to submit progress reports. The Corps required the contractor to submit daily activity reports, and NOAA required the contractor to provide monthly progress reports. EPA officials told us that this information on project activities and results was not required to be reported in GLAS. In addition, the officials said that GLAS was not designed to collect specific information on project activities and results and was adapted from a system they used to collect information on a different restoration program. Appendix III contains a summary of the detailed information we collected on activities and results for the 19 projects. Overall, recipients reported a variety of project activities, including applying herbicide, conducting training and workshops, and collecting data. In addition, we found that recipients reported a range of results. For example, recipients from eight projects reported results that can be directly linked to restoration, such as increasing lake trout production, removing acres of invasive plant species, and protecting acres of marshland. For one of these projects, the Buffalo Audubon Society reported results of work to restore critical bird habitat, such as planting 3,204 plants and removing invasive species, among other results. For another project, the Great Lakes Fishery Commission reported results in the form of improved methods for capturing sea lamprey.
According to a Great Lakes Fishery Commission official, the results from this project will help to further suppress sea lamprey production in the Great Lakes, thereby reducing the damage they cause to native and desirable species. For example, a single lamprey can kill up to about 40 pounds of fish in its lifetime. Recipients for the 11 remaining projects reported results that can be indirectly linked to restoration; that is, the results may contribute to restoration over time. These included results such as simulations and data for helping decision makers make better restoration decisions in light of climate change, and education and outreach tools to increase awareness of invasive species. In addition, a University of Wisconsin-Madison representative told us that the University's project to improve applied environmental literacy, outreach, and action in Great Lakes schools and communities has already contributed to restoration. Some of the University's progress reports noted that the project resulted in more than 110 school teams that guided students in restoration, service-learning, inquiry, and citizen science monitoring during the 2013-2014 school year, among other things. The representative told us that this contributed to restoration because participating students have built rain gardens and implemented other conservation practices. Similarly, the Corps used GLRI funds to complete a feasibility study in Highland Park, Illinois, and the study led to a restoration project that is expected to restore and enhance 4 acres of coastal habitat along the Lake Michigan shoreline, among other things. Figure 7 is a photograph of the Corps restoration project to restore and enhance coastal habitat that began with the feasibility study. See appendix III for examples of activities and results from each of the 19 projects we reviewed.
EPA collected some project information in GLAS, which the agency created to collect information to monitor and report on GLRI progress in response to the conference report accompanying the fiscal year 2010 appropriation act that made funds available for the GLRI. However, our review found that some of the data collected in GLAS were inaccurate and therefore may not be sufficiently reliable to monitor and report project progress. For example, GLAS collected project information in more than 20 data fields, including the project’s title, funding amount, funding year, funding agency, recipient, focus area, state, end date, status, and related Area of Concern and watershed. We selected six data fields that could contribute to our understanding of projects and assessed their reliability. Specifically, we reviewed the GLAS data fields for funding year, funding agency, recipient, status, end date, and funding amount. For each of the six fields, we reviewed field definitions and data entry procedures, and we manually checked data entries. We found that the funding year and funding agency data fields were sufficiently reliable, that is, accurate and complete, for the purposes of monitoring and reporting on the progress of GLRI projects. However, we found that the other four data fields were not sufficiently reliable for that purpose. The results of our analysis are as follows: Recipients. GLAS data on project funding recipients, which EPA’s GLAS User Guide defined as the organizations that actually conducted the work, were inconsistent. For the 1,558 projects funded by the five agencies we reviewed, we compared the recipients that were identified in GLAS with data obtained from the agencies on recipients that had received GLRI funds for these projects directly from the agencies. We found that GLAS users did not identify recipients in GLAS consistently. 
Specifically, three of the agencies sometimes or always identified only the agency as the recipient in GLAS, even if the agency awarded the funds for that project to other entities that conducted the work. For example, one agency identified itself as the funding recipient for 118 projects in GLAS, but data we obtained from the agency identified other entities as the recipients for most, or 95, of those projects. Similarly, another agency identified itself as the funding recipient for 311 projects in GLAS, but data we obtained from the agency identified other entities as the recipients for almost half, or 151, of those projects. In addition, a third agency identified itself as the recipient for all 26 of the agency’s GLRI projects in GLAS. While it is the case that some of the agency’s recipients are private citizens, whose identities the agency does not want to release, the agency awarded funds to recipients other than private citizens for 18 of its projects. Project status. GLAS users did not define status the same way and therefore may have entered the status of their projects inconsistently. To report a project’s status, GLAS users selected from a drop-down list of options, including started, percentage completed, and completed. We asked officials at four of the five agencies we reviewed how they defined “completed” and found that the agencies did not mean the same thing when selecting completed. For example, one agency official told us that for projects involving construction, completed means that the bulk of the contractor’s effort was completed and that the ecological benefits of the project were at least partially realized, even if additional project activities and final payments may have not been completed. Officials from another agency told us that completed means that all of the funds for the project were obligated and expended, or all contracts were completed, cancelled, or terminated. 
EPA officials told us that many recipients did not report projects as completed until the grant itself was closed out, which can take as long as a year from the completion of fieldwork. With agencies using different definitions, it is not clear what the GLAS data represented for those projects identified as completed. For example, GLAS users could have selected completed for their projects when the project work was finished, when all the funds had been expended, or when the financial agreement was closed out. As a result, GLAS data cannot be used to reliably determine how many GLRI projects have been completed. Project end date. Although not a required data field in GLAS, most projects (more than 75 percent) in GLAS had an end date listed. However, some GLAS data on the project end dates were inconsistent with project status reported in GLAS. We analyzed the end dates in GLAS for 1,890 projects as of July 2014 by checking for errors and by comparing the end dates with the projects' status. Through this analysis, we found that of the 799 projects identified in GLAS as completed, 14 percent (112) had end dates that had not yet been reached. In addition, 698 projects had end dates that had already passed, but 28 percent of those (194) had not been identified in GLAS as completed. As a result, GLAS data on the end dates of projects are unreliable and cannot be used to determine the number of projects that were completed or are expected to be completed by a certain date. GLRI funding amounts. Some GLAS data on the GLRI funding amounts for projects were inaccurate. Specifically, after reviewing the GLAS data we provided on funding amounts for 1,558 projects, four of the five agencies identified inaccuracies in the GLRI funding amounts that the agencies or their recipients had reported in GLAS. For example, the funding amount for one project in GLAS was $8.3 million less than the actual funding amount, which agency officials attributed to a data entry error.
Similarly, officials from a second agency identified a project for which the funding amount in GLAS was about $219,000 more than the actual funding amount and told us that the reason for the error was unknown. Officials from a third agency also identified projects for which they said the agency had entered incorrect funding amounts, including 11 projects for which the GLAS data overreported the funding by $523,000. And, officials from a fourth agency identified 19 projects for which the funding amounts the agency had reported in GLAS were incorrect in part because of data entry errors, but they did not identify the dollar amount of the errors. Although we cannot extrapolate these examples of errors in GLAS on project funding to the 11 other Task Force agencies, the amount of these errors raises concerns about the accuracy of GLAS data on GLRI funds. Some of the errors we found in GLAS data may have been the result of agencies’ different interpretations of guidance or unclear guidance. Specifically, EPA’s GLAS User Guide was the formal guidance document that defined GLAS data fields, such as recipients, project status, and end dates, but EPA left it up to the Task Force agencies to decide how to enter the data. For example, according to an EPA official, the GLAS data identifying recipients used the lead organizations entered by GLAS users. The GLAS User Guide defined lead organization as the organization that actually conducted the project. However, in practice, the Task Force agencies varied regarding which entity they identified as the recipient, the funding agency or the organization conducting the project. In addition, the GLAS User Guide did not provide clear guidance. For example, EPA required that GLAS users report project status in GLAS, but the GLAS User Guide did not specify how users should choose a project’s status from the drop-down menu and did not define available options. 
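The status and end-date inconsistencies described above lend themselves to a simple automated cross-check. The sketch below is illustrative only: the record layout and field names are hypothetical, not the actual GLAS schema.

```python
from datetime import date

# Hypothetical project records mirroring the two fields cross-checked in the
# analysis above; layout and field names are illustrative, not the GLAS schema.
projects = [
    {"id": 1, "status": "completed", "end_date": date(2016, 6, 30)},
    {"id": 2, "status": "completed", "end_date": date(2013, 9, 30)},
    {"id": 3, "status": "started", "end_date": date(2013, 1, 15)},
    {"id": 4, "status": "started", "end_date": None},  # end date not entered
]

def cross_check(projects, as_of):
    """Flag records whose end date and status disagree as of a given date."""
    completed_future = [
        p["id"] for p in projects
        if p["status"] == "completed"
        and p["end_date"] is not None
        and p["end_date"] > as_of
    ]
    past_not_completed = [
        p["id"] for p in projects
        if p["end_date"] is not None
        and p["end_date"] < as_of
        and p["status"] != "completed"
    ]
    return completed_future, past_not_completed

future_ids, overdue_ids = cross_check(projects, as_of=date(2014, 7, 1))
# Project 1 is marked completed but its end date has not been reached;
# project 3's end date has passed but it is not marked completed.
```

Run periodically, a check of this kind would surface the 112 "completed but not yet ended" and 194 "ended but not completed" records described above for review rather than leaving them in the system unexamined.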
Under the federal standards for internal control, agencies are to clearly document internal controls, and the documentation is to appear in management directives, administrative policies, or operating manuals. Similarly, although it was not required, the guide did not specify how users should determine what the end date is when they did enter it. Without specifying this, GLAS users may have entered information in the end date field inconsistently. For example, we found that some projects had a completed status but had not reached their reported end dates, and others had end dates that had already passed but did not have a completed status. Specifying in the guide how to determine the end date would have been consistent with federal standards for internal control that call for clearly documenting internal controls. According to EPA officials, the GLAS User Guide did not specify how GLAS users should determine a project's end date because the officials thought this data field was intuitive. Because the GLAS User Guide did not require GLAS users to enter end dates for all projects, however, EPA may not have complete information on GLRI projects in GLAS. According to our February 2009 guide on assessing the reliability of computer-processed data, data are reliable when they are accurate and complete. In May 2015, when EPA stopped using GLAS and began using the Environmental Accomplishments in the Great Lakes (EAGL) information system to collect GLRI project information, the agency issued initial guidance that included definitions of the data fields in the system. For example, the guidance defines recipient name as the organization actually doing the work, and project end date as the date that the project ended or is planned to end; the data field lead organization is no longer included. We reviewed the guidance and determined that the definitions provided were clear and could be used to enter data consistently.
In addition, we found that the guidance clearly identifies those data fields that are required, including project end date. However, while the guidance specifies that users should select one project status option from the drop-down list in the system, it does not identify or define the available options. Other errors that agencies identified in their GLAS data, such as in the GLRI funding amounts data, arose from data entry errors or lags in data updates, according to officials from some of the Task Force agencies we reviewed. Some of these inaccuracies could have been caught through data quality controls or other edit checks, but our analysis found that EPA did not have controls for GLAS to prevent such errors. Under the federal standards for internal control, agencies are to implement control activities, such as verifications and reconciliations, which can be computerized or manual, and document internal controls, such as documenting procedures on how such verifications are to be implemented (e.g., who is to conduct periodic reviews of the completeness and accuracy—that is, reliability—of data). Of the five agencies we reviewed, EPA officials told us that they reviewed their own agency data and relied on the four other Task Force agencies to use their own processes to ensure that the data they or their recipients entered in GLAS are reliable. Of the four other agencies, three did not identify processes they used to ensure the reliability of data that they or their recipients entered in GLAS. Officials from the fourth agency told us that their agency reviewed its GLAS entries annually by comparing a spreadsheet of GLAS data provided by EPA with its own programmatic reports and reports from its financial system. Even with its review process, in January 2015, that agency identified errors in its GLAS data for nearly 20 percent of its fiscal year 2010 through fiscal year 2012 GLRI projects.
Most were errors in the funding amounts entered by the agency, which agency officials attributed to data entry errors and changes that had not been updated in GLAS. Similarly, officials from one of the other agencies noted that, even when they found errors, certain data fields, including GLRI funding amounts, could not be edited by the agencies and that the agencies had to contact EPA to make corrections. Without control activities, such as some form of verification, data errors are likely to continue, making the data collected in the system insufficiently reliable for monitoring and reporting on GLRI progress as directed in the conference report. In commenting on a draft of this report, EPA stated that it plans to establish data control activities, such as verifications and documented procedures, for ensuring the reliability of the EAGL information system. In discussing these comments, EPA officials told us that the most important difference between GLAS and EAGL is that EAGL limits data entry to Task Force agency officials. The officials did not have a time frame for establishing data control activities and told us that they wanted the Task Force agencies to become comfortable using the new system first. Until EPA and the Task Force agencies make a decision about the data system and the agency fully implements the actions needed to address the reliability of GLRI project data, EPA and the Task Force agencies cannot have confidence that EAGL can provide consistent, accurate, and complete information. Thus, we urge EPA to implement these actions as quickly as possible. EPA officials told us that, in 2012, they began to review GLAS and to consider whether to upgrade GLAS to improve it or develop a new system. This review included identifying potential improvements and considering whether GLAS is the right tool for monitoring and reporting on the GLRI.
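A computerized verification of the kind the federal standards describe could be as simple as an automated reconciliation of funding amounts between the project-tracking system and an agency's financial records, similar to the manual spreadsheet comparison the fourth agency performed. The sketch below is a minimal illustration; the project identifiers and dollar amounts are hypothetical.

```python
# Hypothetical edit check: reconcile funding amounts recorded in a
# project-tracking system against an agency's financial-system figures.
# Project identifiers and amounts are illustrative, not actual GLRI data.
tracking_amounts = {"proj-A": 1_200_000, "proj-B": 450_000, "proj-C": 8_300_000}
financial_amounts = {"proj-A": 1_200_000, "proj-B": 669_000, "proj-D": 75_000}

def reconcile(tracking, financial):
    """Return amount mismatches and records present in only one source."""
    mismatches = {
        pid: (tracking[pid], financial[pid])
        for pid in tracking.keys() & financial.keys()
        if tracking[pid] != financial[pid]
    }
    only_in_tracking = sorted(tracking.keys() - financial.keys())
    only_in_financial = sorted(financial.keys() - tracking.keys())
    return mismatches, only_in_tracking, only_in_financial

mismatches, extra_tracking, extra_financial = reconcile(
    tracking_amounts, financial_amounts
)
# proj-B's amounts disagree; proj-C appears only in the tracking system and
# proj-D only in the financial system -- each would be flagged for review.
```

Flagged records would still need manual review, but a documented, periodic run of such a check is the kind of control activity that could have caught the $8.3 million and $219,000 discrepancies described above before the data were used for reporting.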
The Task Force also convened a subgroup of Task Force agency officials to determine what the next version of GLAS should be. One concern EPA officials expressed about this decision was the cost to create a new system to collect detailed data, and they noted that they are hesitant to make that investment in the face of uncertainty over whether the GLRI will continue to be funded from year to year. EPA officials told us that the agency created EAGL in February 2015 and, after consulting with the Task Force agencies, conducted pilot tests of the system for a few months, while we were completing our work. After this testing, in May 2015, EPA officials decided to use EAGL to collect information to monitor and report on GLRI progress, and they made the system available to Task Force agencies for an initial period of data entry. Specifically, EPA officials transferred key project information from GLAS into EAGL and asked the Task Force agencies to enter new project information and update existing information. According to EPA officials, EAGL will improve the consistency and completeness of information about GLRI projects. EPA officials told us that the agency plans to use this initial period of data entry to get feedback from the Task Force agencies and to make changes to EAGL and the draft data entry guidance to address any problems and refine definitions. The EPA officials said their goal is to have EAGL ready for data entry at the beginning of fiscal year 2016. The United States has committed enormous resources to help restore the health of the Great Lakes ecosystem, a region that is vital to the United States both economically and socially, and some progress has been made. Nonetheless, Great Lakes restoration remains an ongoing, long-term effort. To gauge progress toward restoration, EPA and the Task Force agencies have established measures of progress for the GLRI and collected information in GLAS to report on progress.
EPA and the Task Force agencies have proceeded carefully over the last 2 years as they have evaluated how best to collect and report GLRI data. In May 2015, while we were completing our work, EPA replaced GLAS with a new system to collect GLRI project information and issued guidance that included definitions of data fields and identified which data fields are required. This is a good first step toward resolving the data inconsistencies that we identified in GLAS, which resulted, in part, from unclear or undocumented definitions, data requirements, and guidance about entering important data. However, EPA has not yet established data control activities or other edit checks, although in commenting on a draft of this report, EPA stated that it plans to establish data control activities, such as verifications and documented procedures, for ensuring the reliability of the EAGL information system. Fully implementing the actions needed to address the reliability of GLRI project data should ensure that EPA and the Task Force agencies can have confidence that EAGL can provide complete and accurate information. Federal agencies have expended funds for Great Lakes restoration activities other than what has been made available for the GLRI. However, OMB has not reported on all federal obligations and expenditures for these activities as required by law. Without this information, the information available for congressional oversight and decisions on future funding levels has been limited to funds made available. To better ensure that complete information is available to Congress and the public about federal funding and spending for Great Lakes restoration over time, we recommend that the Director of OMB ensure that OMB includes all federal expenditures for Great Lakes restoration activities for each of the 5 prior fiscal years and obligations during the current and previous fiscal years in its budget crosscut reports, as required by Pub. L. No. 113-76 (2014).
We provided a draft of this report to EPA, the Departments of Agriculture, Commerce, Defense, and the Interior, and OMB for review and comment. In written comments from the EPA Region 5 Administrator, which are reproduced in appendix VI, EPA generally agreed with the recommendations in our draft report and noted that the agency had already taken action consistent with the recommendations. In particular, for a recommendation in our draft report that EPA determine whether the agency should continue using GLAS or acquire a different system to collect information to monitor and report on GLRI progress, EPA stated in its written comments that GLAS is no longer in use and has been replaced by EAGL. We interviewed EPA officials about EAGL and its status, as well as plans for implementing it, and determined that the agency has made a final decision and taken appropriate actions to adopt it. As a result, we removed the recommendation from the report. We also added information about EAGL in the report. In addition to replacing GLAS with EAGL, EPA noted that the agency has taken action to address three recommendations we made about ensuring data reliability in our draft report. First, for a recommendation that EPA should ensure that GLAS or another system requires important data to be entered, according to EPA, EAGL will require important information, including project end date, to be entered by the Task Force agencies. Second, for a recommendation that GLAS or another system documents definitions and guidance for entering data into the system, the agency in its written comments stated that it has developed an initial guidance document for data entry that it is revising based on the initial round of data entry into EAGL. We reviewed the initial guidance and determined that it clearly identifies those data fields that are required and that the definitions provided were clear and could be used to enter data consistently. 
As a result, we removed these recommendations from our report. Third, for a recommendation that EPA should ensure that GLAS or another system establishes data quality control activities, such as verifications and documented procedures for ensuring system reliability, EPA stated that it will establish data quality control activities such as verifications and documented procedures for ensuring the reliability of the EAGL information system. Although EPA officials did not have a timeframe for establishing data quality control activities, the agency has limited data entry to Task Force agency officials, and we believe the actions already taken constitute important steps toward enhancing GLRI oversight. As a result, we removed the recommendation from the report. We look forward to seeing the agency take this final action. However, until it is fully implemented, the agency cannot have confidence that the data produced by EAGL will address the inconsistencies that we identified in GLAS or that they are complete and accurate. Thus, we urge EPA to finish implementing these actions as quickly as possible. In oral comments, OMB staff disagreed with the recommendation that OMB include all federal expenditures for Great Lakes restoration activities for each of the 5 prior fiscal years and obligations during the current and previous fiscal years in its budget crosscut reports, as required by Pub. L. No. 113-76 (2014). OMB staff restated the position that including the required expenditures and obligations information in the budget crosscut reports would not yield sufficient information to justify the cost of including that information. They added that there is no evidence that this information would be used for congressional oversight. 
Nevertheless, the law requires OMB to identify, among other things, all funds received and obligated by all federal agencies for Great Lakes restoration activities during the current and previous fiscal years and all federal government expenditures in each of the 5 prior fiscal years for these activities, and OMB should comply with the law. The Departments of Defense and the Interior responded that they did not have comments on the draft report. In addition to these written and oral comments, EPA, NOAA, and NRCS provided technical comments that we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 9 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Director of OMB; the Administrator of EPA; the Secretaries of Agriculture, Commerce, Defense, and the Interior; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This appendix provides information on the objectives, scope, and methodology for the report. We examined the (1) amount of federal funds made available for the Great Lakes Restoration Initiative (GLRI) and expended for projects; (2) process the Great Lakes Interagency Task Force (Task Force) used to identify GLRI work and funding; and (3) information available about GLRI project activities and results. 
To examine the amount of federal funds made available and expended for GLRI projects, we analyzed the Environmental Protection Agency’s (EPA) January 2015 GLRI financial management update reports for GLRI funds made available in fiscal years 2010 through 2014. We reviewed relevant EPA documents and interviewed EPA officials about the data input and review for the GLRI financial management update and, based on this work, determined that it was reliable for our purposes. In addition, to provide context for how funds for GLRI projects compared with funds made available for other federal Great Lakes restoration activities, we analyzed the Office of Management and Budget’s (OMB) Great Lakes Restoration Crosscut Reports to Congress for 2008 through 2012 and 2014 and the applicable appropriations laws requiring OMB to produce these reports. We also interviewed OMB staff to obtain information about the crosscut reports. We then selected five Task Force agencies to review in greater detail because they had received the majority (about 85 percent) of GLRI funds made available in fiscal years 2010 through 2014. The five agencies we selected were EPA, the U.S. Army Corps of Engineers (Corps), the Fish and Wildlife Service (FWS), the Natural Resources Conservation Service (NRCS), and the National Oceanic and Atmospheric Administration (NOAA). We obtained data from EPA’s Great Lakes Accountability System (GLAS) as of July 2014 to identify the projects funded by the five Task Force agencies with amounts made available for the GLRI in fiscal years 2010 through 2013. We did not include fiscal year 2014 projects because most of the amount made available in that year had not been obligated as of July 2014. We assessed the reliability of the GLAS data on funding agency and funding year by asking the agencies to verify their projects in the system, and we believe that the data are sufficiently reliable for identifying a list and total number of projects funded by the five agencies. 
GLAS data included recipient information but, as described below under objective 3, we did not find this or certain other GLAS data fields reliable for other reporting purposes. Therefore, to identify the recipients of GLRI funding, we obtained a list of the recipients from each of the five agencies for each of the projects in the GLAS data we obtained. We used information we obtained from the recipients, their websites, or the funding agencies to categorize each of the recipients by recipient type, using the definitions in table 4, and summarized that information. In addition, we obtained data from each of the five agencies about the types of financial agreements they used—grants, cooperative agreements, and contracts—to determine the percentage of obligations per financial agreement of amounts made available for the GLRI in fiscal years 2010 through 2013. We obtained an updated version of GLAS data, from January 2015, to identify the total number of projects reported by all Task Force agencies in GLAS. To examine the process the Task Force used to identify GLRI work and funding, we first interviewed officials from the five Task Force agencies. We used this information, in addition to our previous work on grants management, to describe the four steps that the Task Force and agencies generally use to identify GLRI work and funding. We then analyzed relevant documents to corroborate and obtain information about each of these steps. Specifically, we analyzed interagency agreements between EPA and the other Task Force agencies, including the associated scopes of work; requests for applications; project selection summaries; and agencies’ policies and guidance on managing grants, cooperative agreements, and contracts. We also reviewed EPA data on the amount of GLRI funds in fiscal years 2012 through 2014 that the agency set aside for issues identified by the Task Force as GLRI priorities to understand how the Task Force process has evolved. 
We then interviewed EPA officials about the process for identifying priority issue work and funding for fiscal year 2012 through fiscal year 2015. We reviewed a sample of 19 GLRI projects to understand how the process was applied to specific cases. For each project, we analyzed documents from the funding agencies and funding recipients to determine the origin of each project and why it was selected. The documents we reviewed included project solicitations, such as announcements of funding opportunities, requests for applications, or other solicitations; project proposals and applications; agency documents on why projects were selected for funding; and project financial agreements such as grant and cooperative agreement documents. We took the following steps to select the sample of 19 GLRI projects. First, we identified all projects funded by the five Task Force agencies we reviewed. To do this, we used data from GLAS to create a list of GLRI projects funded by each of the five agencies we reviewed with amounts made available for the GLRI in fiscal years 2010 through 2012. We did not review projects funded with funds made available for the GLRI in fiscal year 2013 or 2014 because those projects were likely to be in the early stages of implementation, or not yet started, at the time we began our review. Second, we categorized these projects by recipient type, using the process described above. Third, we ranked projects by agency, recipient type, and funding amount. Fourth, we selected the median project for each agency and recipient type (see table 5 for those projects selected). We did this to ensure that we included projects that illustrate typical GLRI funding amounts. We selected at least one project from each of the following recipient types: federal entities; state, local, or tribal entities; nongovernmental organizations; and academic institutions. Finally, we also selected the project with the largest amount of GLRI funds for each agency (see table 6). 
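For illustration, the ranking and median/largest selection steps could be sketched as follows. The project records, identifiers, recipient types, and dollar amounts below are hypothetical and do not come from GLAS; this is only a sketch of the selection logic described above.

```python
from collections import defaultdict

# Hypothetical project records; field names and values are illustrative only.
projects = [
    {"id": "P1", "agency": "EPA", "recipient_type": "state/local/tribal", "funding": 250_000},
    {"id": "P2", "agency": "EPA", "recipient_type": "state/local/tribal", "funding": 400_000},
    {"id": "P3", "agency": "EPA", "recipient_type": "state/local/tribal", "funding": 900_000},
    {"id": "P4", "agency": "FWS", "recipient_type": "nongovernmental",    "funding": 150_000},
    {"id": "P5", "agency": "FWS", "recipient_type": "nongovernmental",    "funding": 600_000},
]

# Rank projects within each (agency, recipient type) group and take the median.
groups = defaultdict(list)
for p in projects:
    groups[(p["agency"], p["recipient_type"])].append(p)

sample = [sorted(g, key=lambda p: p["funding"])[len(g) // 2] for g in groups.values()]

# Add each agency's largest-funded project not already in the sample.
by_agency = defaultdict(list)
for p in projects:
    by_agency[p["agency"]].append(p)

selected = {p["id"] for p in sample}
for g in by_agency.values():
    for p in sorted(g, key=lambda p: p["funding"], reverse=True):
        if p["id"] not in selected:
            sample.append(p)
            selected.add(p["id"])
            break
```

With these hypothetical records, the sample contains each group's median-funded project plus each agency's largest-funded project that was not already chosen.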
In the instances where the project with the largest funding amount was associated with a recipient that we had already selected, we moved to the project with the next largest funding amount with a recipient that had not already been selected. This sample of 19 projects is not representative of all GLRI projects; however, it captures projects with both typical and large funding amounts from a range of recipients. To examine the information available about GLRI project activities and results, we first analyzed the three accomplishment reports the Task Force issued to provide an overview of progress under the GLRI in each of fiscal years 2010 through 2012. We also reviewed information on projects available at the GLRI website, http://glri.us, and discussed its purpose and design with EPA officials. In addition, we obtained information on the 19 projects we selected for review to identify information available on project activities and results. We used agency documents to identify the purpose of the projects and project activities and results. Specifically, we analyzed project progress reports, and interviewed, or obtained written responses from, relevant agency officials and recipient representatives. We also interviewed recipient representatives about how the projects will contribute to the restoration of the health of the Great Lakes ecosystem, and we visited the recipients or locations for 3 of the 19 projects. We visited (1) the “Sheboygan River Area of Concern: pathway to delisting beneficial use impairments” project; (2) the “Great Lakes earth partnership” project; and (3) the “Rosewood Park, IL” project and interviewed the relevant funding agency officials and funding recipient representatives. We selected these three projects in order to observe work conducted by different recipient types within driving distance of the EPA Region 5 office in Chicago, where the EPA officials who oversee the GLRI are located. 
In addition, we examined project information available for projects identified in EPA’s database, GLAS, as of July 2014. We selected 6 data fields that we could use to describe projects and that we wanted to summarize and include in our report: funding year, funding agency, status, end date, recipient, and GLRI funding amount. We selected these 6 fields out of the more than 20 data fields in GLAS because they provided basic information about how GLRI funds have been used for projects (funding agency, year, GLRI funding amount, and recipient) and information on the progress of those projects (status and end date). For example, these data fields can be used to determine first how much funding an agency provided to a recipient in a fiscal year for a project, and then the extent to which the project was completed (status) and when the project would be completed (end date). We assessed the reliability of these data using three sources of information: EPA’s GLAS User Guide to identify data field definitions and guidance for entering data; information we obtained from the five agencies to identify inaccuracies in the data, such as funding amounts, for their projects in GLAS; and the agencies’ responses to our questions about GLAS data, including their procedures for ensuring the reliability of the data and the known or potential reasons for data errors they identified. In addition, we conducted electronic testing of the GLAS data to identify missing end dates and obvious end date errors, such as a date of 1900; compared projects’ end dates to their status; and compared the recipients identified in GLAS with the recipient data we obtained from the agencies. On the basis of this work we determined that the GLAS data on status, end date, recipient, and GLRI funding amounts were not sufficiently reliable for reporting on the progress of GLRI projects. 
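The electronic tests described here amount to a set of record-level checks. A minimal sketch follows, using hypothetical records and field names rather than the actual GLAS schema; the records, recipient names, and the 2010 cutoff for implausible dates are assumptions for illustration only.

```python
from datetime import date

# Hypothetical GLAS-style records; field names and values are illustrative only.
glas_records = [
    {"project": "A", "status": "Completed", "end_date": date(2013, 9, 30), "recipient": "City of X"},
    {"project": "B", "status": "Ongoing",   "end_date": None,              "recipient": "University Y"},
    {"project": "C", "status": "Completed", "end_date": date(1900, 1, 1),  "recipient": "NGO Z"},
]

# Recipient names as reported separately by the funding agencies (hypothetical).
agency_recipients = {"A": "City of X", "B": "University Y", "C": "Nonprofit Z"}

issues = []
for rec in glas_records:
    # Flag missing end dates and obvious errors, such as a date of 1900.
    if rec["end_date"] is None:
        issues.append((rec["project"], "missing end date"))
    elif rec["end_date"].year < 2010:
        issues.append((rec["project"], "implausible end date"))
    # Compare end date to status: a completed project should have an end date.
    if rec["status"] == "Completed" and rec["end_date"] is None:
        issues.append((rec["project"], "completed but no end date"))
    # Compare the GLAS recipient with the recipient data obtained from the agencies.
    if agency_recipients.get(rec["project"]) != rec["recipient"]:
        issues.append((rec["project"], "recipient mismatch"))
```

Each flagged tuple identifies a record and the test it failed, which mirrors how missing end dates, obvious date errors, status inconsistencies, and recipient mismatches were identified.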
In response to EPA’s written comments on a draft of this report, we interviewed EPA officials about the Environmental Accomplishments in the Great Lakes (EAGL) information system and reviewed EAGL guidance. As part of our review of GLRI projects, we assessed how the five agencies we reviewed oversaw projects and ensured accountability for GLRI funds. First, we identified key internal controls by reviewing the Standards for Internal Control in the Federal Government (the federal standards for internal control), relevant OMB circulars in effect during the first 4 years of the GLRI, and the Federal Acquisition Regulation (FAR). We then used the following controls to analyze the agencies’ management of GLRI projects: (1) methods to assess the risks of entities applying for GLRI funds; (2) training required of officials responsible for managing financial agreements such as grants, cooperative agreements, and contracts; (3) policies governing site visits; and (4) requirements for GLRI recipients to submit financial and progress reports. Specifically, we analyzed the agencies’ policies and guidance for managing grants, cooperative agreements, and contracts, and project progress and financial reports. We also interviewed, or obtained written responses from, relevant officials for the 19 selected projects, such as agency officials or recipient representatives. In addition, we analyzed the financial reports or other information for the 19 selected projects to determine how much in GLRI funds the recipients received to pay for indirect costs. We conducted this performance audit from January 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We analyzed the interagency agreements and project solicitations (such as requests for applications or proposals) for each of the 19 Great Lakes Restoration Initiative (GLRI) projects we reviewed, and we interviewed relevant Great Lakes Interagency Task Force (Task Force) agency officials to determine the origin of each project and why it was selected. The following tables reflect this analysis for the 19 projects we reviewed that were funded by five Task Force agencies: the Environmental Protection Agency (EPA; see table 7), Fish and Wildlife Service (FWS; see table 8), National Oceanic and Atmospheric Administration (NOAA; see table 9), Natural Resources Conservation Service (NRCS; see table 10), and U.S. Army Corps of Engineers (Corps; see table 11). EPA officials told us that reviewers consider and score the applicants’ approach on the basis of how they will achieve the desired outputs and outcomes identified in the request for application. Reviewers evaluate reasonableness, necessity, and allowability of costs when they score the budget for each application. Table 7 shows information on EPA’s selection of five GLRI projects. FWS officials told us that they assess project proposals against the request for application, which is tied to specific GLRI priorities and objectives. Table 8 shows information on FWS’s selection of five GLRI projects. For grants and cooperative agreements, NOAA assesses proposed projects through the agency’s standard merit review process. The agency’s technical and scientific merit criteria assess whether the proposed approach is technically sound or innovative, among other things. NOAA conducts a review by panel, and NOAA officials said that the agency may also conduct a secondary review through an interagency panel. 
Officials from the Grants Management Division told us that they work with the program offices to ensure that proposed costs are allowable, reasonable, and necessary. For contracts, NOAA uses a team of evaluators that are to assign proposals one of five ratings that consider the combined technical merits and risk of the proposal, according to the agency’s acquisition guidance. The team also evaluates the proposal’s cost or price to the government to determine if it is fair and reasonable but does not assign a rating. Table 9 shows information on NOAA’s selection of five GLRI projects. For cooperative agreements, NRCS officials said that the agency does not issue requests for applications for GLRI funding. Instead, the cooperative agreements the agency funds are typically joint efforts between NRCS and the recipient, and the technical aspects of the agreement are worked out between NRCS and the applicant prior to awarding funds. Engineers in the agency’s state offices review the technical and financial aspects of applications for funding, according to NRCS officials. For financial assistance contracts, NRCS assesses projects through its conservation planning process. Once a producer is determined eligible, a conservation planner works with the producer to identify resource concerns and develop a conservation plan. Applications from producers for GLRI funding are then scored and ranked using what agency officials said is the same process that NRCS uses for all programs. GLRI has specific ranking questions, which the officials said are used by each state in the GLRI. According to NRCS officials, only GLRI-approved core conservation practices and supporting practices can be funded by GLRI. Table 10 shows information on NRCS’s selection of two GLRI projects. The technical features of the projects were planned and designed by the Corps. The contract for construction was awarded using plans and specifications developed by the Corps. 
The Rosewood Park project is under a program to develop projects meeting the objectives of existing strategic plans within the GLRI Action Plan. We examined 19 projects paid for with Great Lakes Restoration Initiative (GLRI) funds and carried out by government agencies, nongovernmental organizations, and academic institutions to identify the activities GLRI funds were spent on and the results that were achieved. To do this, we analyzed project agreements and proposals to identify the purpose of the project, progress reports to determine the activities conducted and results achieved, and financial reports and interviews to determine the amount expended for each project. We also interviewed representatives of the recipient organizations to obtain their views on how the projects will contribute to the restoration of the Great Lakes ecosystem. Table 12 reflects these topics, along with whether the project is completed or ongoing. We also included the amount of funding expended on the project, as well as the funding year, to identify the specific fiscal year in which the project’s funding was made available because some projects received GLRI funding in multiple years. We examined key internal controls used by five Great Lakes Interagency Task Force (Task Force) agencies to oversee 19 projects that were conducted using Great Lakes Restoration Initiative (GLRI) funds to better understand how the agencies ensure accountability for the funds. Specifically, we reviewed relevant documents and interviewed agency officials to determine the methods the agencies used to assess the risks of organizations applying to receive GLRI funds; the training the agencies required of officials responsible for managing financial agreements such as grants, cooperative agreements, or contracts; the policies governing agency site visits and the number of site visits for the 19 projects; and the types of reports each agency required the funding recipients to submit. 
In addition, we collected at least one of each type of the required reports, when possible, to confirm that recipients had submitted these documents. The Task Force agencies we reviewed are the Environmental Protection Agency (EPA; see table 13), the Fish and Wildlife Service (FWS; see table 14), the National Oceanic and Atmospheric Administration (NOAA; see table 15), the Natural Resources Conservation Service (NRCS; see table 16), and the U.S. Army Corps of Engineers (Corps; see table 17). Based on our analysis of agency documents and interviews with agency officials, we found that, to assess applicant risk, EPA required each applicant to certify it has the legal authority to apply for federal assistance and the institutional, managerial, and financial capability (including funds to pay the nonfederal share of the project cost) to ensure proper planning, management, and completion of the project described in the relevant application. EPA officials also told us that the agency searched the names of applicants in the System for Award Management to identify any applicant debarments or suspensions, performed a credit check on all applicants applying for funds, and checked for Single Audit Act findings. Single audits focus on recipients’ internal controls over financial reporting and compliance with laws and regulations governing U.S. federal awardees. They also provide key information about the federal grantee’s financial management and reporting. EPA required project officers to complete grant training to be eligible to manage an EPA grant and to take a refresher course every 3 years. For its site visits, EPA targeted a minimum of 10 percent of GLRI funding recipients for advanced monitoring—an in-depth review of the recipient’s project—which officials told us is the same percentage for all EPA grants, not just GLRI grants. EPA required each of its recipients to submit very similar types of reports (see table 13). 
Based on our analysis of agency documents and interviews with agency officials, we found that, to assess applicant risk, FWS officials interviewed organizations with which they were less familiar to understand their financial viability and management processes. FWS officials also searched the names of all applicants in the System for Award Management to identify any applicant debarments or suspensions. FWS required 24 hours of training for those staff with authority to approve awards, but it required no training for project officers overseeing awards or reviewing and ranking applications, according to FWS officials. FWS does not have a requirement for a certain number of site visits. However, agency officials told us that site visits are conducted more often for complex and expensive projects. FWS officials also told us that the agency has an on-the-ground presence through 34 field offices that is more extensive than that of any other Task Force agency. FWS reporting requirements varied by project (see table 14). Based on our analysis of agency documents and interviews with agency officials, we found that NOAA used different oversight processes depending on the type of financial agreement involved, i.e., grants, cooperative agreements, or contracts. To assess applicant risk for grants and cooperative agreements, NOAA officials said that they perform a credit check on organizations applying for funds, check the System for Award Management for exclusions from procurement or nonprocurement activities, check the agency’s “do not pay” list for delinquent debts, and check for Single Audit Act findings. In addition, NOAA reviews applicants’ past performance. If an organization is deemed high risk, NOAA will impose a special award condition, such as requiring the recipient to submit financial or progress reports more frequently, according to agency officials. 
The imposed special award condition remains on the award until the recipient demonstrates compliance. For awards that are made competitively, NOAA evaluates applications using criteria set forth in the applicable program regulations and announcement of federal funding opportunity. According to NOAA officials, training for officials who managed grants and cooperative agreements was specific to each of NOAA’s program offices. Within the National Ocean Service, which has responsibility for the five NOAA GLRI projects we reviewed, program officers and grant coordinators were required to complete a certification program, which required completion of a 3-day course on grants and cooperative agreements and annual training on grants. The National Ocean Service also required training on NOAA’s Grants Online system. NOAA did not require site visits for all projects funded through grants and cooperative agreements. According to NOAA officials, the decision to conduct a site visit is based on need and the availability of funds, and high-risk recipients are a priority. Officials noted that, as a matter of standard practice, agency staff conduct site visits and work closely with cooperative agreement recipients for all habitat restoration projects in Areas of Concern. To assess contractor risk, a NOAA team evaluates proposals and assigns a rating, using criteria outlined in the request for proposals for the relevant project. The team considers the past performance of the entities offering proposals and assigns them each one of five possible ratings for past performance. NOAA’s contract management staff are to be certified through the Federal Acquisition Certification Contracting Officer Representative Certification Program, which requires a minimum of 40 hours of training and includes additional training requirements for staff managing contracts valued at more than $150,000. Site visits are not required for NOAA contracts, according to NOAA officials. 
NOAA program offices may determine the need for site visits based on the type of work funded. NOAA reporting requirements varied by project (see table 15). Based on our analysis of agency documents and interviews with agency officials, we found that NRCS provided most of its GLRI funds through financial assistance contracts to agricultural producers who carry out different conservation practices on their land. According to NRCS officials, the agency does not assess applicants’ risk because it cannot deny program funds to a producer based on perceived financial or performance capabilities. Instead, the agency informally assesses applicants’ performance capabilities as part of the conservation planning process and provides technical assistance to producers. NRCS officials told us that the agency conducts training in contract management, usually annually, but did not provide us with documentation of this training. Agency officials said that NRCS conducts site visits several times a year for financial assistance contracts. NRCS also provided GLRI funding through cooperative agreements. According to agency officials, the majority of the agreements are with entities that have previously partnered with the agency, such as state programs or local conservation districts. For new applicants, NRCS officials said that they conduct assessments using Single Audit Act findings, among other things. The officials told us that there is no formal process for reviewing applicants that have worked with the agency before. An NRCS official told us that the agency required annual program management training of its program managers but did not provide us with documentation of this training. NRCS officials also told us that the agency did not have specific requirements for conducting site visits to projects funded through cooperative agreements, which they said NRCS generally used for capacity building and not for site-specific projects. 
NRCS reporting requirements varied by project (see table 16). Based on our analysis of agency documents and interviews with agency officials, we found that the Corps primarily used contracts to accomplish its GLRI work. In addition, Corps officials told us that the technical features of their projects were planned and designed by Corps staff, and contracts for projects were awarded using plans and specifications developed by the agency. To assess contractor risk, according to Corps officials, the contractor must provide proof of financial capability to do the work prior to receiving the award. Corps officials told us that contracting officers must undergo training including, but not limited to, 40-hour blocks of quality assurance/quality control classes. The Corps did not perform site visits because Corps officials worked at each project site, and other Corps officials visited the sites on a regular basis (see table 17). We analyzed indirect cost information for the 19 Great Lakes Restoration Initiative (GLRI) projects that we reviewed and compared the amount of GLRI funds expended on indirect costs for each project with the overall amount of GLRI funds that had been expended on the project. To do this, we reviewed the Federal Financial Reports or other information provided by the recipients of GLRI funds that conducted the 19 projects we reviewed. Indirect costs are those that cannot be identified with a particular program objective. That is, they represent the expenses of doing business that are not readily identified with a particular grant or contract but are necessary for the general operation of the organization. These include, for example, building utilities and administrative staff salaries. In comparison, direct costs can include salaries, equipment, and travel, among other things, that can be specifically identified with the objective of a particular grant or contract. 
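The comparison itself is simple arithmetic: the indirect-cost share is indirect expenditures divided by total expenditures. A sketch with hypothetical figures, not actual report data:

```python
# Hypothetical expenditure figures for a single project; not actual report data.
total_expended = 500_000      # total GLRI funds expended on the project
indirect_expended = 60_000    # portion of those funds reported as indirect costs

indirect_share = indirect_expended / total_expended * 100
print(f"Indirect costs were {indirect_share:.1f}% of GLRI funds expended")
```

With these assumed figures, indirect costs account for 12 percent of the project's GLRI expenditures.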
IRS designed NRP to obtain new information about taxpayers’ compliance with the tax laws. While IRS is using NRP to measure voluntary filing, reporting, and payment compliance, the majority of NRP efforts are devoted to obtaining accurate voluntary reporting compliance data. In measuring reporting compliance, IRS’s two primary goals are to obtain accurate information while minimizing the burden on the approximately 47,000 taxpayers with returns in the NRP sample. IRS plans to use NRP data to update return selection formulas, allow IRS to design prefiling programs that will help taxpayers comply with the tax law, and permit IRS to focus its limited resources on the most significant areas of noncompliance. NRP’s reporting compliance study consists of three major processes: (1) casebuilding—creating information files on returns selected for the NRP sample, (2) classification—using that information to classify the returns according to what, if any, items on the returns cannot be verified without additional information from the taxpayers, and (3) taxpayer audits limited to those items that cannot be independently verified. We reported in June 2002 that NRP’s design, if implemented as planned, was likely to yield the sort of detailed information that IRS needs to measure overall compliance, develop formulas to select likely noncompliant returns for audit, and identify compliance problems for the agency to address. Figure 1 shows NRP’s main elements. IRS designed the casebuilding process to bring together available data to allow the agency to establish the accuracy of information reported by taxpayers on their returns. For each taxpayer with a return in the NRP sample, IRS is compiling internal information, such as past years’ returns; information reported to IRS by third parties, such as employers and banks; and information from outside databases, such as property listings, address listings, and stock sale price data.
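The casebuilding step described above amounts to assembling, for each sampled return, one file of internal, third-party, and external data. A minimal sketch of that idea in Python (the `CaseFile` fields and the dictionary lookups are illustrative assumptions, not IRS's actual systems or schema):

```python
from dataclasses import dataclass, field

@dataclass
class CaseFile:
    """One illustrative casebuilding file; all field names are hypothetical."""
    return_id: str
    prior_year_returns: list = field(default_factory=list)   # past years' returns held by IRS
    third_party_reports: list = field(default_factory=list)  # e.g., wage and interest reports from employers and banks
    external_records: list = field(default_factory=list)     # e.g., property, address, and stock sale price data

def build_case(return_id, internal_db, third_party_db, external_db):
    """Assemble all available data for one sampled return without contacting the taxpayer."""
    return CaseFile(
        return_id=return_id,
        prior_year_returns=internal_db.get(return_id, []),
        third_party_reports=third_party_db.get(return_id, []),
        external_records=external_db.get(return_id, []),
    )
```

A classifier would then work from such a file to decide which line items, if any, still need verification.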
Classification is the step in which IRS uses the casebuilding information to determine whether an NRP audit is necessary and which items need to be verified through an audit. Classifiers place NRP returns into one of four categories: (1) accepted as filed, (2) accepted with adjustments, (3) correspondence audit, and (4) face-to-face audit. If the casebuilding material allows IRS to verify all of the information that a taxpayer reported on his or her tax return, then the taxpayer will not be contacted and the return will be classified as accepted as filed. On returns where minor adjustments are necessary, the adjustments will be recorded for research purposes, but the taxpayers will not be contacted. These returns will be classified as accepted with adjustments. NRP returns that have one or two items from a specified list requiring examination will be classified for correspondence audits. All other NRP returns for which the casebuilding material does not enable IRS to independently verify the information reported on the returns will be classified for face-to-face audits. NRP audits will take place either through correspondence with the taxpayers or through face-to-face audits. When classifiers determine that an NRP return will be sent for a correspondence audit, IRS will request that the taxpayer send documentation verifying the line items in question. To ensure accurate and consistent data collection, NRP audits will address all issues identified by classifiers and will not be focused only on substantial issues or cases for which there is a reasonable likelihood of collecting unpaid taxes, according to IRS officials. NRP auditors also may expand the scope of the audits to cover items that were not classified initially. IRS plans to conduct detailed, line-by-line audits on 1,683 of the approximately 47,000 returns in the NRP sample in order to assess the accuracy of NRP classification and, if necessary, to adjust NRP results—a process called calibration.
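The four-way classification decision just described can be sketched as a simple rule. The category names and the one-or-two-item limit for correspondence audits follow the report; the contents of the "specified list" and the function itself are illustrative assumptions:

```python
# Hypothetical stand-in for the "specified list" of items eligible for correspondence audits.
CORRESPONDENCE_ELIGIBLE = {"interest income", "charitable contributions"}

def classify(unverified_items, minor_adjustments_only=False):
    """Illustrative NRP classification rule.

    unverified_items: line items the casebuilding file could not verify."""
    if not unverified_items:
        # Fully verified returns require no contact with the taxpayer.
        return "accepted with adjustments" if minor_adjustments_only else "accepted as filed"
    if len(unverified_items) <= 2 and set(unverified_items) <= CORRESPONDENCE_ELIGIBLE:
        return "correspondence audit"
    return "face-to-face audit"
```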
One-third of the returns in the calibration sample will be returns that were classified accepted as filed (either with or without adjustments), one-third from those classified for correspondence audits, and one-third from those classified for face-to-face audits. None of the taxpayers with returns in the calibration sample will have been audited or otherwise contacted by IRS prior to the start of these line-by-line audits. To describe IRS’s implementation of NRP, we met frequently with officials in IRS’s NRP Office and with other IRS officials as they implemented the program. We reviewed NRP training materials and observed NRP classifier, correspondence examination, and field examination training sessions. We also observed NRP process tests and conducted site visits to IRS area offices in Baltimore, Maryland; Brooklyn, New York; Oakland, California; Philadelphia, Pennsylvania; and St. Paul, Minnesota, in order to observe and review NRP classification in field offices. We considered whether NRP is being implemented in accordance with its design. In our report issued on June 27, 2002, we found that NRP’s design, if implemented as planned, was likely to provide IRS with the type of information it needs to measure overall compliance, update workload selection formulas, and discover other compliance problems that the agency needs to address. For this review, we also considered whether IRS was maintaining a focus on meeting NRP’s objectives of obtaining quality research results while, at the same time, minimizing taxpayer burden. This assessment was also based on IRS’s NRP implementation plans. As of the completion of our work, IRS had a significant amount of NRP implementation to carry out. Our evaluation of IRS’s efforts to implement NRP, therefore, provides an assessment only of efforts that had taken place by the time of our work.
Additionally, we did not attempt to assess IRS’s efforts to measure filing compliance and payment compliance through NRP. Our evaluation focuses only on IRS’s efforts to obtain voluntary reporting compliance information. A more detailed description of NRP can also be found in our 2002 report. We conducted our work from September 2002 through April 2003 in accordance with generally accepted government auditing standards. In addition to the two tests described in our prior report on NRP, IRS conducted two more tests of NRP processes prior to implementing the program. IRS tested the casebuilding and classification processes in an NRP simulation in July 2002, and conducted another classification process test during the initial classification training session in September 2002. IRS used the preliminary results of both of these tests to estimate NRP classification outcomes and to evaluate the effectiveness of NRP training. As we recommended in our June 2002 report, IRS substantially completed this testing prior to full NRP implementation, though final reports from the tests were not completed until later. In July 2002, IRS used draft NRP training materials to train 16 auditors from IRS field offices in the use of NRP casebuilding materials to carry out the NRP classification process. The newly trained classifiers then classified 506 tax year 2000 returns. NRP staff members reviewed the classifiers’ results and found that, overall, the results of this NRP simulation were positive. They found that the classifiers understood the NRP approach to classification but that there were instances where the classifiers overlooked some of the issues indicated by the casebuilding materials or made other errors. In September 2002, IRS conducted another test of the NRP classification process immediately following the initial training session using final classification training materials. 
As we recommended in our June 2002 report, IRS had NRP classifiers classify previously audited tax returns in order to compare classifiers’ results with the results of actual audits. Twenty-two newly trained classifiers classified 44 previously audited returns, with each return classified by 5 different classifiers. All of the earlier audits resulted in some changes. NRP staff members then compared the classifiers’ results with those of the other classifiers and with the results of the earlier audits. NRP officials reported that the test showed that about three-fourths of the time the trained NRP classifiers were able to identify issues where noncompliance was found through an audit. IRS used preliminary results of these tests to identify and implement improvements to NRP. For example, NRP staff members noticed early in the course of the second test that NRP classifiers were failing to classify some line items in accordance with NRP guidelines. Trainers reiterated the importance of following the classification guidelines for these items. NRP staff members also saw that the format of the form that classifiers were to use to record their classification decisions made it easy to make mistakes. They revised the form to make decision recording less error-prone. IRS also used these tests to identify the need for more stringent classification review guidelines than initially planned in order to ensure that classifiers understand and follow the classification guidelines. IRS did not finish analyzing and documenting the NRP simulation and the classification process test until after classification had begun in IRS area offices. NRP classification began at IRS area offices during November 2002, but IRS did not finalize its reports on the July 2002 NRP simulation and the September 2002 NRP process test until December 2002.
According to NRP officials, this did not create problems because they made changes to NRP processes and training materials before the reports of these tests were final. Though the final reports were not completed until later, these tests and the NRP modifications they generated were complete before full implementation of NRP. IRS identified and trained staff to complete NRP classification and audits. IRS selected NRP classifiers and auditors from field offices across the country to handle NRP cases along with their non-NRP enforcement cases and carried out plans for special training of the staff members tasked with NRP responsibilities. IRS delayed the delivery of computer software training to managers and clerks involved in NRP audits due to technical problems with NRP software. This initially delayed the start of NRP audits, but the training is now complete. The timing of NRP staff selection and training was consistent with the conclusion and recommendation in our June 2002 report that IRS make sure these key steps were carried out in the appropriate sequence and not rushed to meet an earlier, self-imposed deadline. IRS selected over 3,000 auditors to handle NRP cases. Most of these auditors are assigned to the Small Business/Self Employed operating division. IRS selected 138 Small Business/Self Employed auditors to be NRP classifiers and about 3,500 to handle NRP face-to-face audits. According to NRP staff members, IRS offices across the country now have one or more auditors trained to handle the NRP cases that come to those offices. IRS area office managers determined how many auditors should receive NRP training based on the projected distribution of NRP returns to their areas. Unlike face-to-face audits, NRP correspondence audits are being handled out of a single office. IRS selected 26 correspondence auditors, in two groups, from the Wage and Investment operating division’s Kansas City office to handle NRP correspondence audits.
IRS originally planned to select a cadre of auditors to work only on NRP face-to-face audits. According to NRP officials, the geographic distribution of NRP returns would have made it difficult to have a cadre of auditors dedicated entirely to NRP examinations because they would have had to travel extensively to carry out NRP audits. IRS officials said that even though they did not implement the plan for a dedicated cadre of NRP auditors, the number of full-time equivalent employees needed for NRP—about 1,000 in fiscal year 2003—has not changed. In September 2002, IRS trained 138 auditors to perform NRP classification. The classifiers learned how to apply the guidelines for NRP classification and were shown how to use NRP casebuilding materials. Instructors stressed the concept of “when in doubt, classify the item,” meaning that, unless the casebuilding materials explicitly verify the line item in question, the classifier should classify the item as needing to be verified through an audit. Instructors explained that with a random sample such as NRP’s, every return represents many others, so even small oversights on the part of classifiers or auditors can have a substantial impact on data quality. After the classification training, the classifiers remained at the training location and began classifying NRP returns. Specially trained classification reviewers reviewed most of the classified cases and provided rapid feedback to the newly trained NRP classifiers. The intent was to ensure that NRP classifiers understood and consistently applied the NRP classification guidelines and received any needed retraining before returning to their respective field offices and participating in future NRP classification sessions. IRS delivered NRP correspondence and face-to-face auditor training during late 2002 and early 2003.
Instructors provided an overview of NRP goals and objectives, reviewed the casebuilding materials that auditors would have at their disposal, and explained the guidelines for NRP audits. IRS trained about 3,500 auditors to conduct NRP face-to-face audits. This training took place in IRS field offices across the country from October 2002 through February 2003. Each face-to-face NRP audit training session lasted 3 days. The training consisted of an overview of NRP goals and objectives, an explanation of how NRP audits differ from traditional enforcement audits, and a description of how to apply NRP guidelines during NRP audits. Trainers stressed that, for the purposes of consistent and accurate data collection, NRP auditors should not focus solely on significant issues or take into consideration the likelihood of collecting unpaid taxes when conducting NRP audits, but should make sure that every item identified by the classifier is carefully verified in the course of the audit. Correspondence auditor training was similarly focused, and the 1-day training took place in September 2002. Staff members were trained before they began to carry out NRP tasks. IRS also needed to train NRP auditors and the IRS managers and clerks with NRP responsibilities to use the computer program IRS developed to capture NRP information. Because of some problems IRS encountered in installing the NRP software in offices across the agency, IRS had to delay training some clerks and managers. This led to delays in starting some NRP audits because managers were unable to assign NRP cases to auditors and clerks were unable to assist in loading NRP cases on NRP auditors’ laptop computers. IRS resolved these problems and finished delivering the majority of this training by the end of January 2003. IRS is nearly finished creating NRP casebuilding files, has classified nearly three-fourths of the NRP returns, and has begun conducting NRP audits.
As of the end of March 2003, IRS had completed NRP casebuilding for about 94 percent of the approximately 47,000 returns in the NRP sample, and about 73 percent of NRP returns had been classified. Also, for 3,651 NRP cases, IRS had completed all necessary audit work. Some of these are cases where correspondence or face-to-face audits are finished, but most of the NRP cases closed so far—2,709—are those that did not require audits. Cases involving audits take longer to complete, so few have been closed thus far. IRS made substantial progress in casebuilding and classification starting in 2002, and the number of cases assigned to NRP auditors has been increasing quickly since January 2003. Figure 2 shows the progress IRS has made in casebuilding, classifying, and closing cases. The number of completed NRP casebuilding files began to grow during the second half of 2002, as shown in figure 3. As figure 3 also illustrates, NRP classification began in September 2002. These were the cases classified during sessions held immediately after classifier training. Over 9,000 NRP returns were classified by the end of October 2002. After these sessions, classification became an area office function, with some offices scheduling weeklong classification sessions on a somewhat regular basis and others classifying returns as they come into the office. IRS began conducting some NRP audits during November 2002, though these audits began in earnest during the first quarter of 2003. By the end of January 2003, IRS had assigned over 4,600 NRP cases to auditors to begin conducting face-to-face and correspondence audits. By the end of March 2003, about 18,000 taxpayers had been contacted regarding NRP audits. IRS recognizes the need for accurate NRP data and, as planned, has built into the program several measures to ensure the quality of NRP results.
IRS designed the NRP classification process to include quality assurance reviews and has added further quality assurance measures in response to suggestions we made in the course of this engagement. The NRP audit process also includes quality assurance measures, with both in-process and completed case reviews, and all NRP audits are reviewed before they are formally closed with the taxpayer. IRS also built accuracy checks into the data capture steps that take place throughout the NRP process. IRS designed NRP classification to include regular reviews of classifiers’ decisions. We found that these reviews are generally taking place according to NRP guidelines. We also found that additional measures could further improve NRP classification accuracy, and IRS implemented our suggestions. NRP guidelines specify that, to confirm their accuracy, NRP classification reviewers review all cases for which returns are classified as needing either no audit at all or only a correspondence audit. Additionally, reviewers must initially review 25 percent of each classifier’s cases that are selected for face-to-face audits, until they are satisfied that the classifier’s work is of acceptable quality and consistent with NRP guidelines. After that standard has been met, the guidelines specify that reviewers need only review approximately 10 percent of the cases that each classifier selects for face-to-face audit. We conducted site visits to five IRS area offices where NRP classification was taking place and found that the classifiers carrying out the classification steps of the program generally understood IRS’s plans for implementing them. Classifiers were knowledgeable about the differences between the NRP classification process and the classification process used in the enforcement audit environment and supported NRP goals in general. However, we also found instances where NRP classifiers were not consistently following NRP classification guidelines.
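The review rates above (every no-audit and correspondence case reviewed; 25 percent of each classifier's face-to-face cases until the quality standard is met, then about 10 percent) can be sketched as follows. The function and its inputs are illustrative, not IRS's actual procedure:

```python
import random

# Categories whose cases are always reviewed, per the NRP guidelines described above.
ALWAYS_REVIEWED = {"accepted as filed", "accepted with adjustments", "correspondence audit"}

def select_for_review(cases, classifier_meets_standard, rng=None):
    """Illustrative selection of one classifier's cases for quality review.

    cases: list of (case_id, category) pairs for a single classifier."""
    rng = rng or random.Random()
    rate = 0.10 if classifier_meets_standard else 0.25  # face-to-face sampling rate
    selected = []
    for case_id, category in cases:
        if category in ALWAYS_REVIEWED:
            selected.append(case_id)   # reviewed in every case
        elif rng.random() < rate:      # sampled share of face-to-face cases
            selected.append(case_id)
    return selected
```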
Another issue we identified involved the use of the classification review sheets that reviewers fill out when they find problems with classifiers’ decisions. We learned that there was no provision for further review of these forms, and we found that reviewers were not always documenting classification errors on them. We discussed with NRP officials the potential benefits of using NRP classification review sheets for more than identifying issues at the area office level. Specifically, we suggested that classification review sheets be forwarded from the area offices to a central location in order to identify problems that may be occurring in different locations around the country or other trends that the NRP Office may need to address during the course of NRP classification. The NRP Office agreed with our suggestion and added centralized review of classification review sheets to its other classification quality assurance measures. The NRP Office also adopted our suggestion that it conduct site visits to area offices to identify NRP classification implementation issues. Similar to the visits we conducted, NRP staff members visited area offices and met with classifiers, reviewers, and managers to identify issues encountered in carrying out NRP classification and possible areas where NRP guidelines may have been misinterpreted. Among the issues they are asking about is the usefulness of the various materials included in the casebuilding files, information that may prove useful in the design of the casebuilding portion of future iterations of NRP. NRP staff members are also conducting separate reviews of completed classification cases. IRS has designed NRP to include several steps to identify NRP audit quality problems at both the individual auditor level and across the program. Reviews include quality checks while cases are in progress and after work is complete, and reviews by managers at different levels.
Importantly, IRS’s plans call for every NRP audit to be reviewed at least once at a point where it is still possible to return to the taxpayer and complete additional audit steps, if necessary. These quality assurance measures will serve to mitigate the risk of IRS including erroneous or incomplete data in the NRP database. NRP guidelines direct group managers to review one open NRP audit for each auditor in the first 90 days of that auditor’s NRP activity and another in the first 180 days. NRP officials intend for these in-process reviews to be extensive and timed early enough in the program to identify individual auditors’ misunderstandings of the program, correct them on the audits under review, and prevent them on future NRP audits. IRS has also created Quality Review Teams both to oversee individual audit cases and to identify problems at the area office level and systemically across NRP. These teams are made up of IRS managers and are tasked with checking for compliance with NRP-specific and overall IRS standards on 40 open cases and 20 closed cases for each of IRS’s 15 area offices. These reviews will be repeated in each area about once every 3 months throughout the planned 18-month NRP audit period. The IRS standards applied by the teams to the audits they review are the same standards employed by IRS’s Examination Quality Measurement System (EQMS). Similar to the visits NRP officials made to area offices to review classification activities, NRP officials are also visiting area offices to review NRP audit activities. NRP officials said that any systemic issues identified through Quality Review Team reviews will then be addressed across NRP. Another NRP audit quality assurance element calls for all face-to-face audits to be checked by group managers after work is completed but before the cases are formally closed with the taxpayers.
This review will include assessing technical correctness, mathematical accuracy, completeness, and adherence to procedural requirements. IRS officials said that these requirements include adherence to the NRP-specific requirement that audits include verification of all items identified through the NRP classification process. These reviews also include assessing adherence to IRS standards in areas such as audit depth and reviewing large, unusual, or questionable items on the audited return. We were initially concerned that IRS planned for these reviews to take place after NRP audits were completely closed, precluding IRS from reopening the cases or otherwise obtaining additional information from the taxpayers even if the reviewers found that the original NRP audits were incomplete. However, senior IRS officials informed us in March 2003 that these reviews will take place after NRP auditors consider their audit work to be complete but before the taxpayers are notified that the audits are over. The officials explained that these reviews of all NRP cases will be timed to provide an important means of ensuring that complete and accurate audit results are entered into the NRP database. They also explained that the importance of NRP audit reviews has been stressed throughout NRP implementation and will be the subject of ongoing communication with managers in the field. It is very important that IRS conduct reviews of NRP audits before they are closed because IRS data show that auditors do not always meet enforcement audit quality standards. In fiscal year 2002, IRS’s EQMS found that field auditors did not meet the audit depth standard about 15 percent of the time, did not meet the standard for auditing taxpayer income about 25 percent of the time, and did not meet the standard concerning audits of large, unusual, or questionable items 40 percent of the time.
IRS officials said that accurate audit results in these areas are critical to NRP’s overall accuracy. IRS officials pointed out that the error rate for NRP audits should be lower than in the enforcement audit environment because NRP auditors received special training and because the NRP classification process will enhance NRP audit quality. For example, NRP guidelines call for classifiers to identify large, unusual, or questionable items on returns (the largest EQMS error category), and NRP auditors must address all classified items. However, IRS did not implement its earlier plan of having a selected cadre of auditors work only on NRP cases. While NRP-specific training will serve to prevent many audit errors, NRP audits are now being conducted by a cross section of auditors from IRS field offices across the country, auditors more typical of those who generated the 2002 EQMS error rates. Because every return in the NRP sample represents many returns in the whole population of 1040 filers, even a small number of cases closed with incomplete information could affect the accuracy of NRP data. IRS officials also noted that their plan to conduct early reviews of NRP cases will identify problems with auditors’ understanding of NRP and help to keep them from recurring on subsequent NRP audits. At least two of each NRP auditor’s early cases will have extensive manager involvement while the cases are still in progress, and other managers will be looking at a sample of both completed and open cases to identify problems. IRS officials believe that these measures are sufficient to ensure NRP audit quality. IRS is including a series of data consistency checks in the NRP database to verify that the information NRP auditors record in IRS’s NRP reporting system agrees with the information that IRS recorded from the tax returns earlier in processing. NRP auditors must first record the results of NRP audits in the report-generating software that was modified for NRP purposes.
Once auditors have recorded audit results, NRP coordinators must use a data conversion program to transfer the data into a format that the NRP database will accept. Following data conversion, IRS coordinators transfer the audit data to the NRP database. Once the data are transferred to the NRP database, a series of data consistency checks take place to confirm that the data IRS originally transcribed from the tax return are consistent, within specified tolerances, with the data that NRP auditors recorded in the NRP reporting software. If any of the consistency checks fail for a return in the NRP sample, the NRP area coordinator will be notified and the mistake will need to be corrected. According to IRS officials, they will impress upon NRP auditors the importance of entering data into the NRP software correctly the first time because it will be time-consuming to correct errors. NRP officials have developed a case tracking system in order to monitor which cases still need to pass all of the consistency tests and which tests they need to pass. IRS officials reported that, as of early April 2003, the NRP database and related programs were running and that completed NRP cases were being entered into the database. They said that they were still making some enhancements, but that the programs were fully functional. As IRS planned, NRP casebuilding and classification processes are helping minimize the burden on taxpayers with returns in the NRP sample. In addition, the size of the NRP sample is now smaller than IRS expected it to be. However, the number of taxpayers who will be subject to NRP audits has increased. IRS plans to survey taxpayers who receive NRP audits to assess their perceptions of the burden posed by those audits. IRS also used input from tax practitioners to identify ways to improve interactions with taxpayers subject to NRP audits. 
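The data consistency checks described earlier in this section amount to a line-by-line comparison of originally transcribed return data against auditor-recorded results, within specified tolerances. A minimal sketch (line names, amounts, and tolerances are all hypothetical):

```python
def consistency_check(transcribed, audited, tolerances):
    """Return the line items whose audited amounts disagree with the
    originally transcribed amounts by more than the allowed tolerance."""
    failures = []
    for line, original in transcribed.items():
        recorded = audited.get(line)
        allowed = tolerances.get(line, 0)
        if recorded is None or abs(recorded - original) > allowed:
            failures.append(line)  # these would trigger notification and correction
    return failures
```

In this sketch, a nonempty result plays the role of the notification to the NRP area coordinator that a case must be corrected.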
IRS is following its plans to reduce burden on taxpayers selected as part of the NRP sample by (1) compiling NRP casebuilding materials that allow IRS to verify certain items on tax returns without requesting the information from the taxpayer, (2) classifying returns according to items that need to be verified through an audit, and (3) limiting most NRP audits to items that cannot be verified without an audit. IRS officials also intend to compare classification decisions with the results of NRP audits to identify ways of improving the classification process for future rounds of NRP. Moreover, IRS’s intent in carrying out NRP is to reduce the burden on taxpayers in general by developing better audit selection formulas and reducing the number of audits of fully compliant taxpayers. The NRP casebuilding and classification processes described on page 4 are having their intended effect of reducing the burden NRP creates for taxpayers with returns in the NRP sample. IRS has assembled IRS and third-party data on most of the returns in the NRP sample and classifiers have used these data to verify information on the returns, where possible, without contacting taxpayers. The remaining casebuilding and classification work was under way as of the end of March 2003. The material in the casebuilding files has allowed IRS to fully verify about 10 percent of NRP returns without any audit. Classifiers were able to use the casebuilding material to verify all but one or two items on another 5 percent of NRP returns, and these were sent for correspondence audits. Classifiers identified line items needing verification through a face-to-face audit on about 85 percent of NRP returns classified as of the end of March 2003. Because of the casebuilding and classification processes IRS developed for NRP, these audits will generally be limited to line items that cannot be verified using the information in the casebuilding files. 
This is a substantial change from earlier compliance research efforts, in which all returns were subject to audits of every line on the return. Only the 1,683 taxpayers with returns selected for NRP calibration audits will be subject to complete audits of their returns. IRS plans to use NRP results to improve future iterations of NRP. For example, NRP officials plan to compare classification outcomes with NRP audit results to help them to identify possible changes needed in casebuilding materials and the NRP classification process. They have told us that it may be possible to further reduce the number of accurately reported line items that are subject to compliance research audits. On the other hand, IRS may also find through NRP calibration audits that classification missed many items that should have been audited, so more line items should receive some form of audit in future rounds of NRP in order for the research results to be useful. IRS also intends to apply lessons learned in NRP classification to classification in the enforcement audit environment. As we noted in our prior report, NRP should also lead to reductions in taxpayer burden in general. IRS plans to use NRP results to help identify and reduce causes of noncompliance and to better target enforcement audits to noncompliant taxpayers, reducing the number of audits of fully compliant taxpayers. IRS projects that, without improved audit selection formulas based on NRP results, the percentage of enforcement audits that result in no tax change will be about 35 percent higher in 2005 than it was in 1993, the first year that selection formulas from the 1988 compliance study were available. Taxpayer burden will decrease if successful execution of NRP enables IRS to reduce the number of these audits of compliant taxpayers.

The NRP sample consists of 46,860 tax returns. We reported in June 2002 that the NRP sample would consist of 49,251 returns.
The current number is smaller than the initial estimate because IRS originally estimated the NRP sample size based on the characteristics of the filing population that existed during the 1988 reporting compliance study. According to IRS officials, when they applied the NRP sampling plan to the 2001 filing population, the number of returns necessary to satisfy the requirements for some of the NRP strata declined because filing rates for those strata were smaller than IRS officials had projected. The final NRP sample consists of about 2,400 fewer returns than initially planned. IRS officials are finding that the NRP classification results differ from what was initially projected. IRS now estimates that more face-to-face audits will take place than initially projected because (1) as the NRP plan recognized, IRS’s initial estimates were uncertain and based on aging data and (2) the final form of NRP classification guidelines meant more face-to-face and fewer correspondence audits. IRS initially estimated that out of an NRP sample of over 49,000 tax returns, classification would result in about 30,000 face-to-face audits of selected line items, about 9,000 correspondence audits covering no more than two line items, and about 8,000 taxpayers who would not undergo any audit because classifiers were able to either verify all of the items on their returns or could correct some line items without contacting the taxpayers. The final NRP sample is 46,860 returns, and IRS now estimates that NRP classification will result in face-to-face audits of about 39,000 taxpayers, with approximately an additional 2,300 receiving correspondence audits and 3,800 subject to no audit at all. IRS also plans to conduct 1,683 line-by-line calibration audits, drawing 561 returns from each of the three classification categories—these numbers have not changed. Figure 4 shows IRS’s current estimate of how the three NRP classification categories will be distributed.
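The approximate shares implied by these classification estimates can be computed directly from the figures above. The counts are IRS's rounded estimates, so they do not sum exactly to the sample total; the percentages below are therefore approximations.

```python
# Approximate shares of the 46,860-return NRP sample implied by IRS's
# current classification estimates. The category counts are IRS's
# rounded estimates and do not sum exactly to the sample total.

SAMPLE_SIZE = 46860
estimates = {
    "face-to-face audits": 39000,
    "correspondence audits": 2300,
    "no audit": 3800,
}

for category, count in estimates.items():
    print(f"{category}: about {count / SAMPLE_SIZE:.0%} of the sample")

# Calibration audits draw equally from each of the three categories.
calibration_per_category = 561
print("calibration audits:", calibration_per_category * 3)  # 1683
```

This works out to roughly 83 percent of sampled returns receiving face-to-face audits, about 5 percent receiving correspondence audits, and about 8 percent receiving no audit.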
NRP officials explained that the number of face-to-face NRP audits is higher than expected because they were relying on aging data and preliminary classification guidelines. Our 2002 report on NRP also noted the preliminary nature of these estimates. Initial classification breakdown estimates were made using 14-year-old data from the 1988 Taxpayer Compliance Measurement Program study. NRP staff members said that changes in the tax code and in the economic makeup of the filing population since the 1988 study make the returns from that study an unreliable tool for predicting NRP classification results, though that was all they had to work with. They also said that some of the change can be attributed to changes they made in the final form of NRP classification guidelines. NRP staff members said that they modified the NRP classification guidelines as a result of discussions that took place between NRP staff members and representatives from IRS’s business operating divisions. They instituted the changes to the classification guidelines in order to better match the training and skills of the examiners selected to conduct NRP correspondence and face-to-face audits with the types of issues to be covered by those audits. One change is that discrepancies between the casebuilding files and the tax returns for issues such as Individual Retirement Account contributions and Social Security income were removed from the list of issues that could be verified through a correspondence audit. Another change is that the final guidelines call for virtually all business returns to receive face-to-face audits—initial assumptions about the classification process allowed for some business returns to be accepted as filed or receive only correspondence audits.

IRS will survey taxpayers who are subject to NRP audits to assess overall customer satisfaction and their perceptions of the burden audits created for them.
IRS will ask taxpayers to fill out the same survey it uses to assess customer satisfaction in the enforcement audit environment and compare the results for the two populations. The surveys include issues related to taxpayer burden in the form of questions about the amount of time taxpayers spent preparing for the audits and the amount of time that they spent on the audits themselves. The surveys also ask whether taxpayers receiving NRP audits believe the information that they were asked to provide seemed reasonable and whether they feel they received fair treatment from IRS. After collecting the survey results, IRS will then develop a “score” for each question on the survey that relates to burden. IRS will compare the results from the NRP customer satisfaction survey to the results from surveys completed after enforcement audits. IRS consulted with outside stakeholders to enhance its efforts to minimize the burden NRP created for taxpayers with returns in the sample. IRS consulted with members of organizations that provide feedback to IRS on matters concerning taxpayers, including the National Public Liaison, the Information Reporting Program Advisory Committee, and the Internal Revenue Service Advisory Council. According to IRS, practitioner input led to wording changes on taxpayer notification letters and improvements to training materials, which strengthened the emphasis on maintaining good relations with NRP-selected taxpayers. Representatives of the National Public Liaison also participated in the training for the staff members who were selected to conduct NRP auditor training. IRS continues to be on track for meeting its NRP goal of obtaining meaningful compliance data while minimizing the burden on taxpayers with returns in the NRP sample. IRS has followed the key elements of the plans it laid out last year and has responded to identified needs to modify the program that have come from its own testing as well as from outside stakeholders. 
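The per-question "score" comparison described above can be sketched as follows. The question labels, the 1-to-5 response scale, and the use of a simple mean are illustrative assumptions, not IRS's actual survey methodology.

```python
# Hypothetical comparison of mean per-question burden scores between
# NRP-audit and enforcement-audit survey respondents. Question labels,
# the 1-5 response scale, and the sample responses are assumptions.

from statistics import mean

nrp_responses = {
    "time_preparing": [4, 3, 5, 4],
    "time_in_audit": [3, 3, 4, 2],
    "requests_reasonable": [4, 5, 4, 4],
}
enforcement_responses = {
    "time_preparing": [3, 2, 4, 3],
    "time_in_audit": [2, 3, 3, 2],
    "requests_reasonable": [4, 3, 4, 5],
}

def question_scores(responses):
    """Mean score per survey question for one respondent population."""
    return {question: mean(values) for question, values in responses.items()}

nrp = question_scores(nrp_responses)
enforcement = question_scores(enforcement_responses)
for question in nrp:
    print(f"{question}: NRP {nrp[question]:.2f} vs enforcement {enforcement[question]:.2f}")
```

Comparing per-question scores across the two populations, as the report describes, would let IRS see whether NRP audits impose more or less burden than enforcement audits on each dimension the survey measures.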
Because of this, we are not making any recommendations in this report. We recognize that IRS efforts to gather information about NRP implementation while the program is under way are very important to IRS’s continued success in carrying out NRP. Classification review results, audit review results, and customer satisfaction surveys all provide the means for IRS to make immediate adjustments to NRP now and to enhance the design of future iterations of the program. Provisions for 100 percent review of NRP audits before they are closed are particularly important because even a small number of erroneous or incomplete cases will negatively affect the quality of NRP data. On May 22, 2003, we received written comments on a draft of this report from the Commissioner of Internal Revenue (see app. I). The commissioner noted the importance of NRP and IRS’s continued emphasis on minimizing taxpayer burden and delivering quality results. We also received technical comments from NRP staff members, which we have incorporated into this report where appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions, please contact Ralph Block at (415) 904-2150, David Lewis at (202) 512-7176, or me at (202) 512-9110. Thomas Gilbert was also a key contributor to this assignment.
The Internal Revenue Service (IRS) needs up-to-date information on voluntary compliance in order to assess and improve its programs. IRS's last detailed study of voluntary compliance was done in the late 1980s, so the compliance information IRS is using today is not current. IRS is now carrying out the National Research Program (NRP), through which IRS auditors are reviewing about 47,000 randomly selected tax year 2001 individual tax returns. In June 2002, GAO reported that NRP was necessary, that its design was sound, and that it appeared to meet IRS's goals of acquiring useful compliance data while minimizing burden on taxpayers with returns in the sample. GAO was asked to review IRS's implementation of NRP. GAO reviewed IRS's method of gathering internal and third-party data (casebuilding) and IRS's process of reviewing casebuilding materials to determine if audits are necessary (classification) and assessed IRS's plans to ensure consistent data collection while minimizing burden on taxpayers. IRS's NRP is being implemented as planned and consequently is on track to meet the agency's objectives of obtaining quality research results while minimizing the burden on the approximately 47,000 taxpayers with returns in the NRP sample. IRS officials have completed the development and testing of NRP processes and have selected and trained staff members to carry out the program. Additionally, as the graphic illustrates, IRS is currently nearing the completion of casebuilding and has made progress in classifying NRP returns. Audits, when required, began in November 2002. As of the end of March 2003, IRS had closed 3,651 NRP cases. In accordance with IRS's plans to minimize burden on taxpayers with returns in the NRP sample, some cases have been closed without any taxpayer contact or with only limited audits. 
The NRP plan recognized that the initial estimates for the overall NRP sample size and the number of returns to be audited were uncertain because they were based on aging data. The overall NRP sample size will be smaller and IRS officials expect to conduct more face-to-face audits than initially estimated. As IRS completes NRP casebuilding, classification, and audits, it is implementing quality assurance steps, including efforts to ensure that key audit steps are completed on all NRP audits before they are formally closed with taxpayers. This is important since the data collected from each NRP audit represent information from thousands of similar taxpayers.
OCS planning, integration, and policy roles and responsibilities, along with the associated contractor-management functions, are spread across several levels of DOD command and staff, including the following entities:

The Under Secretary of Defense for Acquisition, Technology and Logistics has overall responsibility for establishing and publishing policies and procedures governing administrative oversight of defense contracts and for developing and overseeing the implementation of DOD-level OCS policy. Within this office, the Deputy Assistant Secretary of Defense for Program Support is responsible for monitoring and managing the implementation of OCS policy.

The OCS Functional Capabilities Integration Board was created in March 2010 and serves as the main forum for the combatant commands, military departments, and defense agencies to address OCS capability issues for support to the joint warfighter, to include assessing and adopting appropriate lessons learned, and solutions affecting future contingency operations.

The Joint Staff’s Logistics Directorate (J-4) is the primary staff directorate on the Joint Staff for OCS matters and is responsible for developing OCS planning policy, related procedures, and templates, as well as ensuring that OCS policies and procedures are incorporated in relevant policy documents and doctrinal publications. J-4 created the Operational Contract Support & Services Division to reflect the increased Joint Staff workload related to institutionalizing OCS.

The Defense Logistics Agency is responsible for providing worldwide logistics support to the military departments and the combatant commands as well as to other DOD components and federal agencies. It also provides OCS planning, integration, and exercises support through its Joint Contingency Acquisition Support Office (JCASO).
The Army, Navy, Marine Corps, and Air Force service component commands plan and execute OCS for their respective forces in accordance with guidance from their respective military departments and combatant commanders. The six geographic combatant commands, which are supported by multiple service component commands, play a key role in determining and synchronizing contracted support requirements and contracting planning, as well as executing OCS oversight. According to Joint Publication 4-10, proper joint-force guidance on common contract support–related matters is imperative for facilitating effective and efficient use of contractors in joint operations. In figure 1, we illustrate the six geographic combatant commands’ areas of responsibility and show the locations of the service component commands that provide support in each of those areas. DOD established the JLLP in 2000 to enhance joint capabilities through knowledge management in peacetime and wartime. The combatant commands and the military services are to use the JLLP to develop lessons learned related to joint capabilities by collecting issues from operations and exercises in order to make improvements to areas such as doctrine, policy, training, and education. For collected issues, according to CJCS Instruction 3150.25E, the combatant commands and the military services are to resolve and integrate them at the lowest organizational level possible, with corrective action taken as close to the issue occurrence as possible. An issue becomes a lesson learned once a DOD entity has implemented corrective action that has contributed to improved performance or that has increased capability at the strategic, operational, or tactical level. According to Chairman of the Joint Chiefs of Staff Instruction 3150.25E, JLLP knowledge management is enabled by the Joint Lessons Learned Information System (JLLIS). 
As the JLLP’s system of record, JLLIS is to facilitate the collection, management, and sharing of issues and lessons learned to improve the development and readiness of the joint force. An electronic database, JLLIS is supposed to be used to track progress by DOD stakeholders and other organizations involved in the collection of issues. Additionally, if an issue is resolved and determined to be a lesson learned, then it is to be published and shared using JLLIS for proper institutionalization and learning to improve the operational effectiveness of the joint force. According to Chairman of the Joint Chiefs of Staff Instruction 3150.25E, organizations participating in the JLLP shall collaboratively exchange information (including issues and lessons learned) to the maximum extent possible. The services have also established service-specific lessons-learned programs and processes that include the collection, integration, and sharing of lessons learned in support of the JLLP. For example, the Air Force Lessons Learned Program allows Airmen from all functional areas to share their observations to help shape how the Air Force prepares for and executes future operations. The Air Force Lessons Learned Program is a “push-pull” process where members of the lessons-learned offices coordinate with functional subject-matter experts to “pull” data and information by conducting interviews and after-action reviews, issuing flash bulletins, and generating formal issues identified. Reports are loaded into JLLIS so that Airmen can track progress and share knowledge. DOD’s geographic combatant commands have used large-scale operations as sources for collecting OCS issues. For example, in 2012, the U.S. Central Command, with support from JCASO, conducted interviews to collect issues experienced in OCS activities during Operation Iraqi Freedom / Operation New Dawn and made 24 recommendations within three general areas: contractor management, contract closeout, and transition planning.
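The issue-to-lesson-learned lifecycle described above, in which a collected issue becomes a lesson learned only after corrective action is implemented and performance improves, can be modeled in a simple way. The attribute names and status logic here are illustrative assumptions, not JLLIS's actual data schema.

```python
# Illustrative model of the JLLP issue lifecycle: a collected issue
# becomes a "lesson learned" only after a corrective action has been
# implemented and has contributed to improved performance. Attribute
# names are assumptions for illustration, not JLLIS's actual schema.

from dataclasses import dataclass

@dataclass
class Issue:
    description: str
    source: str                   # e.g., an operation or exercise
    corrective_action: str = ""
    improved_performance: bool = False

    @property
    def is_lesson_learned(self):
        return bool(self.corrective_action) and self.improved_performance

issue = Issue("Contract support not integrated into transition planning",
              source="large-scale operation")
print(issue.is_lesson_learned)  # False: collected, but not yet resolved

issue.corrective_action = "Embedded OCS planners with operations directorate"
issue.improved_performance = True
print(issue.is_lesson_learned)  # True: now qualifies as a lesson learned
```

This mirrors the instruction's distinction between a collected issue and a lesson learned: tracking an issue is not enough; corrective action and demonstrated improvement are what change its status.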
For instance, in the area of transition planning, the command found that OCS planning in Iraq was not fully integrated into the overall joint task force drawdown or transition plans. Therefore, the command recommended that OCS planners be sourced and embedded with the operations directorate 2 years prior to a transition so that contract support requirements between DOD and the Department of State could be properly identified. In Afghanistan, senior U.S. Central Command and U.S. Forces Afghanistan officials provided observations and insights about the OCS Integration Cell (formerly the OCS Drawdown Cell) and reported having several issues that could be used to inform contractor management and OCS planning. For example, the officials reported the need to change the ad hoc organization of the cell, reduce overlap and confusion of duties and responsibilities, and better integrate contracting support into the planning process. In Afghanistan, officials recommended changing the OCS Integration Cell’s organization and physical location, codifying in doctrine the OCS Integration Cell authorities prior to a contingency, improving sharing of OCS-related information with stakeholders, and combining OCS and operational-contracting functions to improve OCS planning and execution, among other things. Smaller-scale operations have also provided the geographic combatant commands with opportunities to collect OCS issues that affect their command. For example, U.S. Pacific Command officials observed OCS issues during Operation Tomodachi in 2011, following the earthquake and tsunami near Japan. Specifically, an observation at U.S. Pacific Command recommended that the command establish a Joint Theater Support Contracting Command to coordinate contracting during the disaster, which subsequently led the command to develop an instruction that includes considerations and procedures for establishing a Joint Theater Support Contracting Command and to hold a rehearsal-of-concept drill.
In another smaller-scale operation, Operation Unified Response in Haiti, U.S. Southern Command identified several OCS issues. Specifically, the command identified the need to improve its Synchronized Predeployment and Operational Tracker policy, develop more OCS capabilities at the military service component commands, and establish operational frameworks to enable cross-service OCS collaboration within the context of theater security cooperation efforts. Additionally, the geographic combatant commands are improving efforts to collect OCS issues during exercises. The geographic combatant commands are to use DOD’s Joint Training System in planning, executing, and assessing joint training, like exercises. The Joint Training System provides an integrated, requirements-based method for aligning joint training programs with assigned missions consistent with command priorities, capabilities, and available resources. We have previously reported that evaluating lessons learned and identifying issues for corrective actions are fundamental components of DOD’s training and exercise process. We recommended that DOD develop guidance with specific criteria for postexercise documentation, particularly to allow the results and lessons learned from exercises to be easily reviewed and compared. DOD agreed that such information should be provided in a standardized format that can be easily accessed and understood by authorized organizations that might benefit from such knowledge. However, DOD cautioned that any actions taken in response to this recommendation must accommodate constraints regarding classified information. As of December 2014, four of the six geographic combatant commands—U.S. Africa Command, U.S. Central Command, U.S. Northern Command, and U.S. Southern Command—have identified OCS as a critical capability in their joint training plans and have integrated it into the planning, execution, and assessment of training events. For example, U.S. 
Southern Command has identified conducting OCS as a critical capability and developed an associated supporting task, which it integrates into its exercises like PANAMAX and Integrated Advance. In the past year, U.S. European Command has identified OCS as a critical capability in its joint training plans, but it has not yet completed a full cycle of planning, executing, and assessing training events that include OCS as a critical capability. The command expects to complete the other phases of the cycle following its forthcoming exercises. Prior to the inclusion of OCS as a critical capability in its joint training plans, the command included prescripted OCS-related events or master-scenario events intended to guide exercises toward specific outcomes. According to U.S. European Command officials, they have included OCS-related master-scenario events as part of their exercises since 2008. While a training proficiency assessment of these events is not typically performed by the geographic combatant commands, master-scenario events can provide command staff the opportunity to perform some OCS-related tasks and familiarize themselves with OCS processes, among other things. U.S. Pacific Command has not identified OCS as a critical capability in the earliest phase of the Joint Training System, which informs later phases like planning, execution, and assessment of OCS. However, the command plans to progressively increase OCS play through training objectives and master-scenario events in forthcoming exercises—such as the OCS Joint Exercise-15 and Talisman Saber in 2015—to improve OCS issue-collection efforts. With the exception of the Army, the military services and their component commands are not generally collecting OCS issues needed to develop lessons learned. Chairman of the Joint Chiefs of Staff Instruction 3150.25E requires the services to conduct a service lessons-learned program that includes active and passive collection. 
Furthermore, guidance from the military departments and services, such as Army Regulation 11-33, Air Force Instruction 90-1601, Office of the Chief of Naval Operations Instruction 3500.37C, and Marine Corps Order 3504.1, establish lessons-learned programs, procedures, and responsibilities, including for the collection of lessons learned. The Army collects OCS issues through its dedicated OCS organizations, active collection tools, training, and comprehensive service-wide OCS guidance. For example, the Army established the Acquisition, Logistics and Technology-Integration Office, which is dedicated to leading the development and integration of OCS across the Army and the Army’s OCS Lessons Learned Program. Additionally, the Army’s Acquisition, Logistics and Technology-Integration Office has fully integrated with the Combined Arms Support Command’s Reverse Collection and Analysis Team Forum, which collects OCS issues from senior unit leaders returning from a deployed operation. The program includes live after-action reviews, commander interviews, and an OCS roundtable discussion with the commander and staff, all of which work as issue-collection tools. In response to an OCS issue identified through its lessons-learned program, the Army also developed and instituted a 10-day optional OCS course to prepare military and civilian personnel to develop acquisition-ready requirements and manage a unit’s overall contract support responsibilities. Graduates receive the Army’s 3C additional skill identifier. According to Army officials, the Army’s training has also improved its service members’ overall understanding of the importance of OCS to mission success. In addition to training, the Army has developed comprehensive service-wide OCS guidance—such as the OCS Tactics, Techniques, and Procedures manual and several OCS-related handbooks—to provide tactical, service-specific details to its staff.
According to the Army, the intent of this OCS Tactics, Techniques, and Procedures manual is to assist commanders in correctly implementing OCS in the areas of planning, integrating, managing, and synchronizing OCS. The manual outlines key OCS terms, the Army’s OCS structure, organizational initiatives, planning, execution, and contractor management. While the Army has organization, tools, training, and guidance for collecting OCS issues, its service component commands collect OCS issues to varying degrees. For example, Army Northern Command officials stated that they collected several OCS issues after participating in the Joint Staff’s OCS exercise hosted by U.S. Northern Command in January 2014. At U.S. Army South, the command provides its OCS issues through U.S. Southern Command’s after-action review process, but does not enter them into the lessons-learned system of record. However, other Army component commands such as U.S. Army Europe and U.S. Army Pacific have not collected OCS issues of their own. For example, U.S. Army Europe relies on the 409th Contracting Support Brigade to gather OCS issues, but these lessons have been primarily contracting-related. As discussed later in the report, Army service component commands are collecting OCS issues to varying degrees because of a lack of awareness of OCS roles and responsibilities. In contrast to the Army, the Navy, Marine Corps, and Air Force—and their component commands—are generally not collecting OCS issues. For example, officials from the Navy and all the component commands we interviewed stated that they are not collecting OCS issues or are doing so to a limited degree. Furthermore, the Marine Corps does not systematically collect OCS issues. 
For example, at Marine Corps headquarters, the official responsible for OCS told us that the Marine Corps does not systematically collect OCS issues, but that sometimes he receives e-mails with OCS issues that he tries to resolve based on experiences he gathered while deployed in Afghanistan. Additionally, three of the six Marine Corps component commands, including Marine Corps Forces Central Command, have identified some OCS-related issues from exercises like Eager Lion in Jordan. (Exercise Eager Lion is a multilateral exercise held annually since 2011 in which coalition forces conduct a live-fire, counterattack operation at a range near Jebel Petra, Jordan. In 2014, coalition forces included ground forces from the U.S. Marine Corps, Jordanian Armed Forces, and the United Kingdom and aviation units from the Kingdom of Jordan, the Republic of Turkey, the Kingdom of Saudi Arabia, and the United States.) Officials from the six Air Force component commands we interviewed provided us with examples of contracting issues; however, according to officials, few if any noncontracting OCS issues had been collected, in part because some issues were not considered part of OCS. According to these officials, this decision was made because of a lack of awareness of OCS issues. Furthermore, U.S. Army South officials noted that part of the challenge of collecting OCS issues from exercises comes from a lack of understanding of OCS. In June 2014, a DOD task force on contractor logistics in support of contingency operations found that strategic leadership across the department did not recognize OCS as a critical component of combat readiness. One reason for the general lack of awareness of OCS issues stems from not having DOD-level guidance that establishes military service and component command roles and responsibilities regarding collection of OCS issues.
While existing lessons-learned guidance—like Chairman of the Joint Chiefs of Staff Instruction 3150.25E—identifies the importance of enhancing capabilities by collecting issues in broad terms, it does not list any specific capabilities such as OCS. Additionally, while DOD has issued guidance through part 158 of Title 32 of the Code of Federal Regulations and DOD Instruction 3020.41, which identify the roles and responsibilities of various OCS stakeholders—including lessons-learned responsibilities in the case of JCASO and the Director of Defense Procurement and Acquisition Policy—they do not clearly identify roles and responsibilities for the military services and service component commands to collect OCS issues. Furthermore, according to Joint Publication 4-10, the military departments, among other things, are responsible for integrating OCS into training, exercise, and lessons-learned programs; however, the publication does not specifically identify collection of OCS issues in its discussion of military service or service component command roles and responsibilities, as it did in its previous version. According to Standards for Internal Control in the Federal Government, a good internal-control environment requires that the agency’s organizational structure clearly define key areas of authority and responsibility and establish appropriate lines of reporting. Until DOD revises its existing guidance to specifically establish and detail the roles and responsibilities of the services in collecting OCS issues, it will lack reasonable assurance that the services and their component commands recognize the importance of OCS. Additionally, the Navy, Marine Corps, and Air Force do not have service-specific OCS guidance that establishes and outlines their roles and responsibilities for the collection of OCS issues, which according to officials from these services contributes to the general lack of OCS awareness.
As previously discussed, in addition to the JLLP, the services also have established service-specific lessons-learned programs and processes in support of the JLLP. However, according to officials from Air Force and Marine Corps component commands in Europe and Africa, the lack of service-specific guidance on OCS affects the commands’ understanding of OCS roles and responsibilities, to include their collection of OCS issues as part of lessons-learned processes. In February 2013, based on our finding that the Navy, Marine Corps, and Air Force lacked comprehensive OCS guidance, we recommended that they develop guidance, which would include the requirement to plan for OCS. DOD concurred with our recommendation and has tasked the military services with issuing OCS guidance by the second quarter of fiscal year 2015. The Army developed OCS guidance in 2010 and 2011. Additionally, the Marine Corps has developed draft OCS guidance that is in the review process and is expected to be issued in spring 2015. According to Navy officials, the Navy expects to issue its OCS guidance by the first quarter of fiscal year 2016. However, Air Force officials have not indicated whether the Air Force will meet DOD’s deadline as it continues to work to identify a lead to integrate and synchronize OCS issues. While we continue to believe that comprehensive service-wide guidance for these services is needed to further the integration of OCS into all of the services’ planning, our prior recommendation did not address the issue of the services’ roles and responsibilities for the collection of OCS issues as part of a lessons-learned process. Furthermore, according to DOD officials, it is unclear whether future OCS service-wide guidance from the Navy, Marine Corps, and Air Force will include roles and responsibilities for the collection of OCS issues.
By not including the services' roles and responsibilities to collect OCS issues in comprehensive service-specific guidance, the services and the service component commands may not fully understand the importance of their roles in collecting OCS issues as part of their specific service's lessons-learned processes. As a result, commanders may be unable to build on efficiencies that their services have identified by collecting OCS issues and may be unable to adequately plan for the use of contractor support. According to DOD officials, another reason that the military services and their component commands lack awareness of OCS, and therefore of the importance of collecting OCS issues, is that senior service members—that is, commanders and senior leaders—do not have an OCS training requirement. According to DOD's Joint Concept for OCS, developing a skilled cadre of multidisciplinary military and civilian personnel with specialized OCS training and experience is one part of a holistic solution required to achieve a cultural change to integrate OCS throughout institutional and operational processes. However, according to senior service officials, there is a lack of awareness of OCS at the leadership level within their services or component commands, which can be attributed to inadequate OCS training. According to these officials, OCS training can help improve commanders' and senior leaders' awareness of OCS issues. For example, service members who attended the Joint Staff's Joint OCS Planning and Execution Course generally praised it, noting that prior to attending the course they had a limited understanding of OCS issues. 
However, while the Joint Staff offers the Joint OCS Planning and Execution Course as an opportunity to educate senior service members from the geographic combatant commands, military services, and service component commands on OCS, according to senior Joint Staff officials, course attendance by senior service members outside of the logistics functional area has been limited. According to a senior Joint Staff official, the initial approach for the Joint OCS Planning and Execution Course was to reach a broad audience as well as to provide OCS training to those that needed it the most. However, the official added that DOD needs to find a more-permanent training solution for OCS. The department also offers several online courses about OCS, but they are also electives, and none of the services has an OCS training requirement to take any of these existing courses. Several officials we interviewed from across the services cited the need for OCS training to improve awareness of OCS throughout their services. Additionally, senior officials from the Joint Staff and the Office of the Deputy Assistant Secretary of Defense (Program Support) stated that an OCS training requirement would help the services address their lack of awareness of OCS issues. The National Defense Authorization Act for Fiscal Year 2013 recently added OCS to the list of subject matter to be covered by joint professional military education, which consists of the instruction and examination of officers in an environment designed to promote a theoretical and practical in-depth understanding of joint matters and, specifically, the subject matter covered. Without an OCS training requirement, commanders and senior leaders at the military services and component commands may not be fully aware of OCS and its importance to the success of the warfighting mission. Furthermore, without this awareness of OCS's importance, senior service members may not properly prioritize the collection of OCS issues. 
DOD has made progress integrating some changes resulting from lessons learned in OCS into doctrine, policy, and training, but these changes have largely come as a result of OCS issues raised outside of the JLLP. For example, in July 2014, DOD published a new version of Joint Publication 4-10, Operational Contract Support, that provides updated doctrine for planning, conducting, and managing OCS in joint operations. It also provides guidance on matters such as OCS organization command and control. For example, the new version of Joint Publication 4-10 recommends the establishment of a permanent OCS integration cell at each geographic combatant command to perform contract-support integration and to provide oversight of any subordinate joint force command contract-integration cell when formed. The development of this concept, according to Joint Publication 4-10, was a direct outgrowth from experiences in Afghanistan. DOD has also made progress in integrating changes in OCS through its revision of DOD Instruction 3020.41 and issuance of corresponding regulations in the Code of Federal Regulations. The instruction and regulations establish policy, assign responsibilities, and provide procedures for OCS, including OCS program management, contract-support integration, and integration of defense contractor personnel into contingency operations outside the United States. According to DOD documentation, initial lessons from Operation Iraqi Freedom and Operation Enduring Freedom provided impetus for developing a DOD policy for managing contractor personnel in support of contingency operations. Additionally, the Joint Staff (J-4) developed a 10-day Joint OCS Planning and Execution Course for officers with support from the Joint Staff (J-7), the Army's Acquisition, Logistics, Technology and Integration Office, and the Defense Acquisition University. 
According to DOD officials, the course was developed to fill the training gap in joint OCS planning and execution, preparing OCS planners at the geographic combatant command, sub–joint force command, and service component levels to plan and execute OCS across the range of military operations. The joint course, which is targeted at officers, senior noncommissioned officers, and government civilians, focuses primarily on operational-level OCS staff responsibilities and tasks during military operations. Moreover, based on information from a Reverse Collection and Analysis Team, the Army identified that it lacked personnel who could provide primarily tactical-level OCS capabilities for units. In response, the Army established an additional skill identifier for OCS and developed an OCS course to train and prepare designated soldiers on how to prepare acquisition-ready requirements and manage a unit's overall contract-support responsibilities. Sources of information outside of the JLLP have generally proved more significant in shaping changes in OCS. For example, according to a December 2013 DOD report on OCS lessons learned, the Secretary of the Army–directed Commission on Army Acquisition and Program Management in Expeditionary Operations (otherwise known as the Gansler Commission), the Commission on Wartime Contracting, and various GAO reports have proved more relevant than DOD's lessons-learned program in effecting changes in doctrine, policy, training, and education, among other areas. Further, according to the DOD report, legislation and congressional focus and oversight provided additional urgency and visibility to OCS lessons learned, garnering the attention and focus of senior DOD leaders to institute improvements. For example, according to the department's report on OCS lessons learned, the Joint OCS Planning and Execution Course addresses a provision in the National Defense Authorization Act for Fiscal Year 2008. 
DOD also stated in the report that many of the OCS lessons learned identified from the JLLP are too tactically focused to help shape needed changes. The extent to which DOD can integrate OCS issues from the JLLP is limited because the department does not have a focal point for OCS lessons learned. As noted in DOD's OCS Joint Concept, multiple organizations across the department are working on separate, and sometimes disjointed, OCS lessons-learned efforts. Without a lead for lessons learned, as stated in the document, the department will continue to develop OCS capabilities in a haphazard and inefficient manner. According to the Center for Army Lessons Learned handbook, which serves as a guide for establishing a lessons-learned capability, the successful resolution and integration of lessons learned requires executive-level support or involvement. If unit commanders have the capability to correct an issue internally, according to the handbook, they should do that. However, according to the handbook, there will be issues that rise to the next level of attention that an organization is unable to correct internally. Further, the Center for Army Lessons Learned handbook emphasizes that without senior-level leadership involvement, with the authority to task agencies to work issues and reallocate resources, the lessons-learned process will fail. Moreover, according to Standards for Internal Control in the Federal Government, a good internal-control environment requires that the agency's organizational structure clearly define key areas of authority and responsibility and establish appropriate lines of reporting. In December 2006, we found that there was no organization within DOD responsible for developing procedures to systematically collect and share its institutional knowledge regarding the use of contractors to support deployed forces. 
At that time, we recommended that DOD designate a focal point to, among other things, lead and coordinate the development of a department-wide lessons-learned program to capture the experiences of units deployed to locations with OCS (GAO-07-145). DOD concurred with our recommendation and stated that it would develop and implement a systematic strategy for capturing, retaining, and applying lessons learned on the use of contractor support to deployed forces. Additionally, DOD subsequently stated in response to this recommendation that JCASO would be deemed responsible for collecting OCS lessons learned. While the department, as of December 2014, has not developed a systematic strategy for capturing, retaining, and applying OCS lessons learned, it has assigned JCASO responsibility for collecting joint operations–focused OCS lessons learned and best practices from contingency operations and exercises in order to inform OCS policy and recommend solutions in doctrine and training, among other areas, in cooperation with the services and other DOD components. According to senior DOD officials, however, JCASO does not serve as a focal point for integrating OCS issues from the JLLP, but rather informs policy and recommends solutions on joint OCS issues. JCASO was tasked in the DOD OCS Action Plan to provide an assessment of OCS lessons learned to the OCS Functional Capabilities Integration Board and to lead efforts to identify enterprise-wide solutions to incorporate them, such as in doctrine, policy, and procedures. However, according to officials, the assessment will not be completed until the end of 2016. In addition, DOD is reviewing JCASO to determine how the organization can best meet emerging OCS requirements; JCASO's responsibilities regarding lessons learned will be included in this assessment. According to senior DOD officials, they expect to complete this review by the end of fiscal year 2015. In the past few years, there has been significant support within the department for an OCS joint proponent with lessons-learned responsibilities. 
Joint Publication 1-02 defines a joint proponent as a service, combatant command, or Joint Staff directorate assigned coordinating authority to lead the collaborative development and integration of joint capability, with specific responsibilities designated by the Secretary of Defense. The October 2013 OCS Joint Concept outlined a plan to designate a proponent that would, among other things, manage the OCS lessons-learned process so that the latest lessons and best practices from the field are recorded and capability requirements and content across DOD's institutional processes remain consistent. According to the plan, this proponent would establish and maintain the OCS joint lessons-learned process to collect, catalog, and validate observations, insights, and lessons from operations and exercises. Furthermore, the services, geographic combatant commands, and the combat support agencies would work collaboratively with the proponent to ensure that issues and lessons learned are entered into the process. Additionally, a June 2014 report from the Defense Science Board recommended that the department establish a 3-star-equivalent, director-level proponent that would coordinate OCS efforts across the Office of the Secretary of Defense, the Joint Staff, the military departments, and the defense agencies, and support efforts to resource critical OCS-related requirements across these organizations. The report recommended that this proponent oversee the creation of a visible and transparent knowledge-management system for OCS that links planning, requirements, contracting, and audit functions. Several officials we spoke with expressed support for an OCS joint proponent with lessons-learned responsibilities. 
As Joint Staff (J-7) officials explained, having a joint proponent is essential to integrating issues in cross-capabilities such as OCS because it allows OCS stakeholders, for example, to better advocate for additional resources during high-level DOD processes such as the Joint Capabilities Integration and Development System. Additionally, Joint Staff (J-4) officials noted that the OCS lessons-learned process has many owners and lacks a singular point of focus. They added that OCS in general can become compartmentalized among the defense agencies, military services, and combatant commands. However, according to officials, since the community is relatively small currently, they prefer to talk and share relevant information informally rather than through the JLLP process. Army lessons-learned officials stated that the OCS lessons- learned community is disjointed and lacks synchronization, and stated that a joint proponent with lessons-learned responsibilities is the next logical step in institutionalizing OCS in the department. However, Army officials cautioned that DOD should be careful in selecting a joint proponent, as it must be properly situated in the department and staffed with personnel with diverse and relevant expertise. DOD has undertaken initial efforts to identify and assign an OCS joint proponent that will include lessons-learned responsibilities. According to officials, as of December 2014, the Joint Staff (J-4) is leading a feasibility assessment with the Functional Capabilities Integration Board for an OCS Joint Proponent. The assessment team plans to issue its findings to the Functional Capabilities Integration Board in February 2015. According to officials, they have agreed to recommend a single OCS joint proponent to handle multiple areas such as training, personnel, materiel, as well as lessons learned. 
However, according to officials, they have not determined specific roles and responsibilities, such as whether the joint proponent would be responsible for providing formal oversight for integrating OCS issues from the JLLP. By establishing such roles and responsibilities as it develops its concept for an OCS joint proponent, DOD could help ensure that it has a systematic strategy for capturing, retaining, and applying lessons learned on the use of OCS, to include integrating issues from the JLLP. Including such roles and responsibilities in the concept for the OCS joint proponent would better position DOD to integrate all OCS issues identified from the JLLP, thereby addressing any key OCS gaps and shortfalls in its efforts. The geographic combatant commands and the Army use JLLIS to varying degrees to share OCS lessons learned department-wide. Chairman of the Joint Chiefs of Staff Instruction 3150.25E states that the Joint Staff Joint Directorates and combatant commands shall share joint issues in the JLLP, and the military services shall share information across the joint force in support of the JLLP. According to Chairman of the Joint Chiefs of Staff Instruction 3150.25E, JLLIS is the department's system of record for the JLLP and the primary means of dissemination of lessons learned, and it facilitates the collection, tracking, management, sharing, collaborative resolution, and dissemination of lessons learned to improve the development and readiness of the joint force. Furthermore, Standards for Internal Control in the Federal Government indicate that all transactions and other significant events should be clearly documented and the documentation readily available. We found that all of the geographic combatant commands enter OCS issues into JLLIS but do not use the system to track the progress of collection and resolution efforts. For example, U.S. 
Pacific Command officials entered issues from Operation Tomodachi into JLLIS, such as the need for a Joint Theater Support Contracting Command. In August 2012, U.S. Pacific Command officials used this issue and others to inform the development of U.S. Pacific Command Instruction 0601.7 on OCS, which included planning considerations and procedures for establishing a Joint Theater Support Contracting Command, and a month later, cohosted a rehearsal-of-concept drill with the Joint Staff (J-4). The objectives of the rehearsal-of-concept drill included testing and adjusting tactics, techniques, and procedures developed by the Joint Staff (J-4) and the methodology for establishing and manning a Joint Theater Support Contracting Command. However, U.S. Pacific Command officials did not use the system to track these resolution activities. As a result, U.S. Pacific Command’s resolution involving the development and issuance of U.S. Pacific Command Instruction 0601.7 was neither entered into JLLIS nor shared through JLLIS so that other geographic combatant commands encountering similar challenges could view it. Similarly, U.S. Southern Command entered issues collected from exercises and operations into JLLIS, but used processes outside of JLLIS to resolve OCS issues. As the JLLP’s system of record, JLLIS facilitates the collaborative resolution of lessons learned to improve the development and readiness of the joint force. For instance, the lessons-learned process provides DOD organizations with a joint lesson memorandum, a tool that may be used by organizations’ leadership to inform the Joint Staff of lessons requiring their analysis and resolution. However, U.S. Southern Command officials stated that they have used means such as the Program Budget Review process, or even phone calls to communicate OCS issues and shortfalls to the Joint Staff, but that no issues have been forwarded to the Joint Staff through the formal issue-resolution process. 
The geographic combatant commands also enter OCS issues into JLLIS at different rates. According to CJCS Instruction 3150.25E, combatant commands are to collect and share key, overarching, and crosscutting issues using JLLIS no later than 45 days after the end of an exercise, in order to facilitate the timely sharing of issues from combatant-command exercises. However, according to DOD officials, the rate at which geographic commands enter OCS issues into JLLIS varies. For instance, U.S. European Command officials stated that they use JLLIS as a repository to store OCS issues until they can be reviewed for possible resolution efforts. Other commands enter OCS issues into JLLIS after a resolution has been validated. Officials from U.S. Northern Command, U.S. European Command, and U.S. Africa Command stated that they have internal processes for collecting and resolving OCS issues prior to submission into JLLIS. According to an official with U.S. European Command, this process could take a year or more. On the other hand, the Army developed an OCS concept to synchronize efforts on OCS lessons learned that included utilizing the Army Lessons Learned Information System, the Army's portal to JLLIS, to share issues and lessons learned. The Army's Acquisition, Logistics, Technology, and Integration Office, which leads the development and integration of OCS across the Army and the Army's OCS lessons-learned program, recognized that OCS issues collected from after-action reports, reverse collection and analysis action teams, and the Center for Army Lessons Learned resided in multiple repositories and were not shared throughout the Army. As a result, the office developed and currently administers an OCS lessons-learned portal on the Army Lessons Learned Information System to create a primary system for inputting OCS issues and to ensure that OCS lessons learned are shared within the system. 
However, the Navy, Marine Corps, and Air Force are not generally collecting OCS issues and, therefore, are not generally sharing OCS issues in JLLIS. For example, while Air Force officials provided us with examples of contracting issues they collected, they reported that few, if any, noncontracting OCS issues had been collected. Those DOD organizations that collect and resolve OCS issues and lessons learned generally rely on forums and systems outside of DOD's lessons-learned program to share them. We found that five of the six geographic combatant commands rely on OCS-related boards and working groups to share OCS lessons learned within their geographic combatant commands and respective service component commands. For example, officials from U.S. Africa Command and U.S. Northern Command reported that they share OCS lessons learned during meetings of their respective OCS Working Groups and Commanders Logistics Procurement Support Boards; however, U.S. Northern Command officials clarified that meeting minutes were the only way to record lessons learned discussed during their meetings of the Commanders Logistics Procurement Support Board. U.S. Central Command officials stated that they rely exclusively on personal relationships, e-mails, and telephone calls to share OCS lessons learned. However, by using forums and methods outside of JLLIS to share OCS issues and lessons learned, such as meeting minutes and telephone calls, DOD runs the risk of not being able to systematically track, resolve, and share OCS issues department-wide, which could negatively affect joint force development and readiness. The geographic combatant commands and service component commands also store and share OCS lessons learned on local SharePoint portals, which limits access to that information by the other geographic combatant commands and service component commands. For example, U.S. 
European Command stores and shares OCS issues and lessons learned on its classified SharePoint portal. In another instance, U.S. Pacific Air Forces does not input any lessons learned into JLLIS; instead, it houses lessons learned on a classified community of practice on U.S. Pacific Air Forces' SharePoint portal, which, as of July 2014, was not active or available to users. Two of the six Army service component commands—U.S. Army Europe and U.S. Army Pacific—also store lessons learned on their local SharePoint portals. Officials with U.S. Army Europe reported that they occasionally share issues and lessons learned with the European Contracting Coordination Board. Meanwhile, officials with U.S. Army Pacific stated that sharing issues and lessons learned throughout U.S. Pacific Command or with other geographic combatant commands can prove difficult since they store their lessons on local SharePoint portals, which exist behind firewalls. DOD is generally not sharing OCS lessons learned in JLLIS because the system is not functional for users searching OCS issues. JLLIS's limited functionality for OCS issues is due to (1) its inadequate search features, (2) the absence of an OCS label in JLLIS, and (3) the lack of a central location for sharing information about OCS issues and lessons learned within JLLIS. According to the Joint Staff (J-7), which serves as the office of primary responsibility for JLLIS, the system's search features pose significant challenges to retrieving information for civilian and military users without expertise or experience with JLLIS. Officials with the Joint Staff (J-7) stated that users who regularly utilize the system, such as doctrine writers, know how to mine the system for pertinent information, but JLLIS's search features can be difficult to use for infrequent users of the system. Furthermore, officials at three of the six commands—U.S. Africa Command, U.S. European Command, and U.S. 
Southern Command—reported that it is difficult to research OCS issues and lessons learned due to JLLIS's poor search functionality. In addition to its limited search features, JLLIS does not have a label for OCS issues and lessons learned. When users enter issues and lessons learned into JLLIS, the system allows them to label information as pertaining to a certain topic, which improves their ability to later search for issues and lessons learned related to that topic. For example, JLLIS has a label for Sustainment issues and lessons learned. JLLIS users researching issues and lessons learned on Sustainment can search for that label, and the system will return information related to Sustainment. However, there is no label for OCS in JLLIS. Officials at three of the six commands—U.S. Africa Command, U.S. European Command, and U.S. Southern Command—reported that it is difficult to research OCS issues and lessons learned because JLLIS does not have a label for OCS. In the absence of an OCS label, officials at U.S. Southern Command noted that they use related functional areas and joint mission-essential tasks to label OCS issues and lessons learned to improve their ability to find relevant information; however, this process does not ensure that those issues and lessons learned will be properly categorized as OCS. Joint Staff (J-4) officials stated that there is little chance that OCS issues or lessons learned in JLLIS will be useful or communicated to a broader OCS audience if they are labeled incorrectly or do not specifically refer to OCS. In July 2014, we visited the Joint Staff (J-7) for a demonstration of JLLIS. During that demonstration, JLLIS allowed users to search only by a single word. For example, when tested, "operational contract support" was an invalid search term because it contained a phrase with multiple words. On the other hand, "OCS" was a valid search term because it contained only a single word. 
When tested during the demonstration, the search for "OCS" yielded 2,191 results. However, these results included information regarding "officer candidate school" and "joint operation command systems"—other topics that also include the letters OCS. Without an OCS label, we were unable to narrow the search results to information pertaining only to OCS. Finally, because JLLIS lacks a central location for OCS information, OCS issues and lessons learned reside in multiple repositories, which limits the sharing of OCS information department-wide (Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Report on Contingency Contracting and Operational Contract Support Lessons Learned (Dec. 20, 2013), p. 19). According to officials with the Joint Staff (J-7), they have received feedback from several JLLIS users reporting that the search features are not user friendly. The officials stated that JLLIS's search features can make the system difficult for the average user to utilize. As a result, they informed us that they have made improving the search features in JLLIS a priority. During our visit in July 2014, the Joint Staff (J-7) was in the process of acquiring software to upgrade the search feature in JLLIS, and according to the Joint Staff (J-7), as of November 2014 the upgrade had been approved and funded. Joint Staff (J-7) officials stated that JLLIS will be enhanced with the integration of IBM Content Analytics software. The software package will provide enhanced search capability that includes keyword and phrase search features, search suggestions, and spelling correction. The estimated timeline for full implementation of the upgrade within JLLIS ends in approximately May 2015. However, the software upgrade will not address the lack of an OCS label or a designated OCS community of practice. As a result, the upgrade will have a limited effect on improving JLLIS's functionality for searching and sharing OCS issues and lessons learned. 
According to Chairman of the Joint Chiefs of Staff Instruction 3150.25E, JLLIS is the JLLP's system of record and facilitates the collection, tracking, management, sharing, collaborative resolution, and dissemination of lessons learned to improve the development and readiness of the joint force. Due to JLLIS's limited functionality for searching OCS issues and lessons learned, DOD organizations rely instead on forums and systems outside of JLLIS to share OCS issues and lessons learned. When OCS issues and lessons learned are shared through limited distribution channels like e-mails, specific forums, or SharePoint portals, OCS information may not be clearly documented in a single location and readily available to a wider audience for examination, consistent with Standards for Internal Control in the Federal Government. As a result, until DOD improves the functionality of JLLIS, it will be difficult for users to search for OCS issues, and DOD runs the risk of working on duplicative efforts and repeating past mistakes. For example, officials from one geographic combatant command we interviewed reported having difficulty developing a policy for the Synchronized Predeployment and Operational Tracker in their area of responsibility. However, other geographic combatant commands, such as U.S. European Command and U.S. Africa Command, have already developed and implemented independent policies for this system throughout their areas of responsibility. Under the JLLP, the challenges associated with the policy development should have been entered in JLLIS so that other geographic combatant commands would not encounter the same difficulties. As we reported in 2006 and later testified in 2008, when OCS lessons learned are not systematically shared, DOD runs the risk of being unable to build on the efficiencies and effectiveness others have developed during past operations that involved OCS. 
DOD has spent billions of dollars on contract support since 2002, and while it has taken some positive steps in recent years to institutionalize OCS, the department has experienced challenges in collecting, integrating, and sharing OCS issues and lessons learned. The geographic combatant commands continue to improve efforts to collect OCS issues from operations and exercises, but the military services other than the Army are not generally collecting OCS issues, nor is there an OCS training requirement for commanders and senior leaders. Developing specific guidance and requiring OCS training for commanders and senior leaders could improve awareness of OCS capabilities and of the importance of collecting OCS issues for mission success. Additionally, DOD has made progress integrating some OCS issues, largely as a result of sources outside of the JLLP. With multiple organizations across the department working on separate and sometimes disjointed lessons-learned efforts, the department's ability to integrate issues from the JLLP remains limited. By not including specific roles and responsibilities related to lessons learned in its concept for the OCS joint proponent, DOD may not be positioned to integrate all OCS issues identified from the JLLP and may be unable to address any key OCS gaps and shortfalls in its efforts. Further, we found that while JLLIS remains the department's JLLP system of record, DOD organizations generally rely on systems outside of JLLIS to collect, resolve, and share OCS issues and lessons learned. Until DOD improves the functionality of JLLIS, it will be difficult for users to search for OCS issues, and DOD runs the risk of working on duplicative efforts and repeating past mistakes. In a resource-constrained environment, DOD will continue to depend on contractors to provide increased capacity, capabilities, and skills in the future. 
However, without more consistent and systematic OCS lessons learned efforts, the department lacks reasonable assurance that it has identified key gaps in OCS capabilities and that it will not repeat past mistakes in future contingencies. We are recommending that the department take five actions to improve efforts to collect, integrate, and share OCS lessons learned. To help improve collection of OCS issues by the military services and service component commands, we recommend that the Secretary of Defense revise existing DOD guidance, such as DOD Instruction 3020.41, to specifically detail the roles and responsibilities of the services in collecting OCS issues. To specifically identify and improve awareness of OCS roles and responsibilities and to collect OCS issues at the military services and the service component commands, we recommend that the Secretary of Defense direct the Secretaries of the Navy and Air Force to include the services’ roles and responsibilities to collect OCS issues in comprehensive service-specific guidance on how the Navy, Marine Corps, and Air Force should integrate OCS. To help improve awareness of OCS roles and responsibilities and to collect OCS issues at the military services and the service component commands, we recommend that the Secretary of Defense direct the Secretaries of the military departments, in coordination with the Chairman of the Joint Chiefs of Staff, to establish an OCS training requirement for commanders and senior leaders. To help improve DOD’s management of OCS lessons learned, we recommend that the Secretary of Defense ensure that, as the department develops a concept for an OCS joint proponent, it include specific roles and responsibilities for a focal point responsible for integrating OCS issues from the Joint Lessons Learned Program. 
To help improve the functionality of JLLIS for sharing OCS lessons learned, we recommend that, as DOD upgrades JLLIS, the Chairman of the Joint Chiefs of Staff direct the Joint Staff (J-7) and Joint Staff (J-4) to implement an OCS label in JLLIS and designate a single community of practice for OCS in JLLIS. In written comments on a draft of this report, DOD concurred with four of the five recommendations and partially concurred with one recommendation. DOD’s comments are summarized below and reprinted in appendix III. DOD also provided technical comments, which we incorporated where appropriate. DOD concurred with the first recommendation that the Secretary of Defense revise existing DOD guidance, such as DOD Instruction 3020.41, to specifically detail the roles and responsibilities of the services in collecting OCS issues. In its response, DOD stated that specific details regarding the roles and responsibilities of the services in collecting OCS issues will be incorporated in the revised Instruction. We believe that this action, if fully implemented, would meet the intent of the recommendation. DOD concurred with the second recommendation that the Secretary of Defense direct the Secretaries of the Navy and Air Force to include the services’ roles and responsibilities to collect OCS issues in comprehensive service-specific guidance on how the Navy, Marine Corps, and Air Force should integrate OCS. Although DOD stated in its response that the services should take steps to include such guidance, it did not identify any actions DOD would take to direct the services to do so. We believe such direction from the Secretary of Defense to the services, as we recommended, is necessary to ensure roles and responsibilities for collecting OCS issues are adequately and consistently identified in each of the services’ OCS guidance. 
DOD concurred with the third recommendation that the Secretary of Defense direct the Secretaries of the military departments, in coordination with the Chairman of the Joint Chiefs of Staff, to establish an OCS training requirement for commanders and senior leaders. In its response, DOD stated that the services are developing OCS training requirements for commanders and senior leaders in coordination with the Chairman of the Joint Chiefs of Staff. We believe that this action, if fully implemented, would meet the intent of the recommendation. DOD partially concurred with the fourth recommendation that the Secretary of Defense ensure that, as the department develops a concept for an OCS joint proponent, it include specific roles and responsibilities for a focal point responsible for integrating OCS issues from the JLLP. In its comments, DOD stated that efforts to review and evaluate potential courses of action to establish an OCS joint proponent are under way and upon completion, the department will determine the way ahead. We agree that this is a reasonable approach. However, as we noted in our report, DOD could help ensure that it has a systematic strategy for capturing, retaining, and applying OCS lessons learned by establishing specific OCS lessons-learned responsibilities for a future OCS joint proponent, such as whether it would be responsible for providing formal oversight for integrating OCS issues from the JLLP. Including such roles and responsibilities in the concept for the OCS joint proponent will help better position DOD to integrate all OCS issues identified from the JLLP, thereby addressing any key OCS gaps and shortfalls in its efforts. DOD concurred with the fifth recommendation that, as DOD upgrades JLLIS, the Chairman of the Joint Chiefs of Staff direct the Joint Staff (J-7) and Joint Staff (J-4) to implement an OCS label in JLLIS and designate a single community of practice for OCS in JLLIS. 
In its response, DOD stated that the Joint Staff is working to develop a single community of practice. However, DOD did not specifically address how it would implement an OCS label in JLLIS. As we noted in our report, establishing a specific OCS label in JLLIS would improve the search capabilities for OCS issues and better enable communication of lessons learned to a broader OCS audience. Accordingly, we believe that DOD also needs to establish such an OCS label in JLLIS to fully address the recommendation. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, and the Secretaries of the military departments. The report is also available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. We performed our work under the Comptroller General’s authority to conduct evaluations at his own initiative, in light of congressional interest in the Department of Defense’s (DOD) efforts to institutionalize lessons learned related to operational contract support (OCS). This report examines the extent to which (1) the geographic combatant commands and the services collect OCS issues to develop lessons learned; (2) DOD has a focal point for integrating OCS issues from the Joint Lessons Learned Program (JLLP); and (3) DOD organizations use the Joint Lessons Learned Information System (JLLIS) to share OCS issues and lessons learned. To address these objectives, we excluded OCS issues and lessons learned from the acquisition community—for example, contracting officers. 
According to Joint Publication 4-10, the Director of Defense Procurement and Acquisition Policy is responsible for developing and implementing a DOD-wide contingency contracting–related lessons-learned program and ensuring validated lessons from this program are disseminated and incorporated into relevant Defense Acquisition University instruction. To determine the extent to which the geographic combatant commands and the services collect OCS issues to develop lessons learned, we reviewed guidance to understand the roles and responsibilities of these DOD entities regarding the collection of OCS issues and compared them with the information we collected during our interviews to identify the extent to which the geographic combatant commands and the services collect OCS issues. Specifically, we reviewed the relevant provisions in part 158 of Title 32 of the Code of Federal Regulations and DOD Instruction 3020.41, which establish policy, assign responsibilities, and provide procedures for OCS, including OCS program management, contract support integration, and integration of defense contractor personnel into contingency operations outside the United States (32 C.F.R. pt. 158; Department of Defense Instruction 3020.41, Operational Contract Support (OCS) (Dec. 20, 2011)). We also reviewed Joint Publication 4-10, which provides joint doctrine for planning, executing, and managing OCS in all phases of joint operations. Additionally, we reviewed Chairman of the Joint Chiefs of Staff Instruction 3150.25E, which establishes policy, guidance, and responsibilities for the JLLP, to understand the established lessons-learned process. Furthermore, to understand how OCS should be integrated into the geographic combatant commands’ training systems and plans, we reviewed joint training guidance, such as Chairman of the Joint Chiefs of Staff Instruction 3500.01H and Chairman of the Joint Chiefs of Staff Notice 3500.01. 
Additionally, we reviewed the Joint Concept for OCS, which is intended to guide OCS capability development for the Joint Force 2020. In addition to joint guidance, we reviewed military department and service guidance, such as Army Regulation 11-33, Air Force Instruction 90-1601, Office of the Chief of Naval Operations Instruction 3500.37C, and Marine Corps Order 3504.1, to identify any military department- or service-specific policies, guidance, and responsibilities for the collection of issues. We also interviewed OCS and lessons-learned officials from all six geographic combatant commands, all of the associated military service component commands, and the Army, the Navy, the Air Force, and the Marine Corps to discuss their roles and responsibilities regarding the collection of OCS issues. We visited all six geographic combatant commands to conduct our interviews with them. To determine the extent to which DOD has a focal point for integrating OCS issues identified through the JLLP, we reviewed related GAO reports on OCS, as well as related reports issued by the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, the OCS Functional Capabilities Integration Board, and the Center for Army Lessons Learned. Furthermore, we reviewed the Joint Contingency Acquisition Support Office’s (JCASO) OCS issue collection documents—presentations and reports—to understand the scope of its efforts to integrate OCS lessons learned into doctrine, policy, training, and education. We compared guidance, such as relevant provisions in part 158 of Title 32 of the Code of Federal Regulations, Chairman of the Joint Chiefs of Staff Instruction 3150.25E, DOD Instruction 3020.41, Joint Publication 4-10, and charters for the OCS Functional Capabilities Integration Board, with DOD’s process for the integration of OCS lessons learned. 
Additionally, we interviewed officials from the Joint Staff and JCASO, which participate in the process of integrating OCS lessons learned in doctrine, policy, training, and education and informing OCS policy and recommending solutions, respectively. We also interviewed officials specifically focused on integrating OCS department-wide, such as officials from the Operational Contract Support Functional Capabilities Integration Board, to obtain their perspective on the progress the department has made in integrating OCS. In addition to these officials, we interviewed officials from each of the services—Army, Navy, Air Force, and Marine Corps—to gain an understanding of how each service has integrated OCS lessons learned from the JLLP. We compared this information to federal internal-control standards that state a good internal-control environment requires that the agency’s organizational structure clearly define key areas of authority and responsibility and establish appropriate lines of reporting. To determine the extent to which DOD organizations have used JLLIS to share OCS issues and lessons learned, we collected and analyzed documentation, such as guidance related to the dissemination of OCS issues and lessons learned. Specifically, we reviewed Chairman of the Joint Chiefs of Staff Instruction 3150.25E, which establishes policy, guidance, and responsibilities for the JLLP, to identify the roles and responsibilities of commanders to share OCS issues and lessons learned and identify the JLLP system of record for sharing those issues and lessons learned. Additionally, we participated in a demonstration of JLLIS led by the Joint Staff (J-7) to understand and observe JLLIS’s function as an information-sharing system, specifically its search and cataloging capabilities. 
Due to the OCS responsibilities identified in DOD guidance, we also interviewed officials from the Joint Staff (J-4), geographic combatant commands, each of the services, and their respective service component commands to obtain their perspective on JLLIS for sharing OCS issues and lessons learned. To determine the extent to which DOD organizations have used JLLIS to share OCS issues and lessons learned, we interviewed officials from the aforementioned organizations to gain an understanding of how each organization shares OCS issues and lessons learned. We also compared this information to federal internal-control standards that indicate that all significant events should be clearly documented and the documentation readily available. We visited or contacted officials from the following DOD organizations during our review:

Defense Logistics Agency, Fort Belvoir, Virginia
Joint Contingency Acquisition Support Office, Fort Belvoir, Virginia

Chairman of the Joint Chiefs of Staff
Joint Staff J-4 (Logistics) Directorate, Washington, D.C.
Joint Staff J-7 (Joint Force Development) Directorate, Washington, D.C.

Office of the Under Secretary of Defense for Acquisition, Technology and Logistics
Office of the Deputy Assistant Secretary of Defense (Program Support), Washington, D.C.
OCS Functional Capabilities Integration Board, Washington, D.C.

U.S. Africa Command, Stuttgart, Germany
U.S. Central Command, Tampa, Florida
U.S. European Command, Stuttgart, Germany
U.S. Northern Command, Peterson Air Force Base, Colorado
U.S. Pacific Command, Camp H.M. Smith, Hawaii
U.S. Southern Command, Doral, Florida

Department of the Army
Office of the Deputy Chief of Staff, G-4 (Logistics), Washington, D.C.
U.S. Army Acquisition, Logistics and Technology-Integration Office
U.S. Army Africa, Vicenza, Italy
U.S. Army Central, Kuwait City, Kuwait
U.S. Army Europe, Wiesbaden, Germany
U.S. Army North, San Antonio, Texas
U.S. Army Pacific, Fort Shafter, Hawaii
U.S. Army South, Fort Sam Houston, Texas

Department of the Navy
Deputy Assistant Secretary of the Navy–Acquisition and Procurement, Washington, D.C.
U.S. Fleet Forces Command, Norfolk, Virginia
U.S. Marine Corps Headquarters, Washington, D.C.
U.S. Marine Corps Forces Central Command, MacDill Air Force Base, Florida
U.S. Marine Corps Forces Europe and Africa, Stuttgart, Germany
U.S. Marine Corps Forces Pacific, Camp H.M. Smith, Hawaii
U.S. Marine Corps Forces Northern Command, New Orleans, Louisiana
U.S. Marine Corps Forces South, Doral, Florida
U.S. Naval Forces Central Command, Bahrain
U.S. Naval Forces Europe–Africa, Naples, Italy
U.S. Naval Forces Southern Command, Naval Station Mayport, Florida
U.S. Pacific Fleet, Makalapa, Hawaii

Department of the Air Force
Office of the Assistant Secretary of the Air Force (Acquisition), Directorate of Contracting, Washington, D.C.
U.S. Air Forces Air Combat Command Lessons Learned (A9L), Washington, D.C.
U.S. Air Forces Central Command, Shaw Air Force Base, South Carolina
U.S. Air Forces Europe and Air Forces Africa, Ramstein Air Base, Germany
U.S. Air Forces Northern (1st Air Force), Tyndall Air Force Base, Florida
U.S. Air Forces Southern (12th Air Force), Tucson, Arizona
U.S. Pacific Air Forces, Joint Base Pearl Harbor–Hickam, Hawaii

We conducted this performance audit from March 2014 to March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II contains information presented in figure 1 in a noninteractive format. In addition to the contact named above, Carole Coffey, Assistant Director; Adam Anguiano; Mae Jones; Marcus Lloyd Oliver; Ashley Orr; James A. Reynolds; Michael Shaughnessy; and Michael Silver made key contributions to this report. 
Warfighter Support: DOD Needs Additional Steps to Fully Integrate Operational Contract Support into Contingency Planning. GAO-13-212. Washington, D.C.: February 8, 2013. Contingency Contracting: Agency Actions to Address Recommendations by the Commission on Wartime Contracting in Iraq and Afghanistan. GAO-12-854R. Washington, D.C.: August 1, 2012. Operational Contract Support: Management and Oversight Improvements Needed in Afghanistan. GAO-12-290. Washington, D.C.: March 29, 2012. Defense Contract Management Agency: Amid Ongoing Efforts to Rebuild Capacity, Several Factors Present Challenges in Meeting Its Missions. GAO-12-83. Washington, D.C.: November 3, 2011. Iraq Drawdown: Opportunities Exist to Improve Equipment Visibility, Contractor Demobilization, and Clarity of Post-2011 DOD Role. GAO-11-774. Washington, D.C.: September 16, 2011. Afghanistan: U.S. Efforts to Vet Non-U.S. Vendors Need Improvement. GAO-11-355. Washington, D.C.: June 8, 2011. Warfighter Support: DOD Needs to Improve Its Planning for Using Contractors to Support Future Military Operations. GAO-10-472. Washington, D.C.: March 30, 2010.
DOD has spent billions of dollars on contract support during operations in Iraq and Afghanistan since 2002 and anticipates continuing its heavy reliance on contractors in future operations. Generally, OCS is the process of planning for and obtaining needed supplies and services from commercial sources in support of joint operations. GAO has previously identified long-standing concerns with DOD's efforts to institutionalize OCS. This report examines the extent to which (1) the geographic combatant commands and the services collect OCS issues to develop lessons learned, (2) DOD has a focal point for integrating OCS issues from the JLLP, and (3) DOD organizations use JLLIS to share OCS issues and lessons learned. GAO evaluated OCS and lessons-learned guidance and plans and met with DOD commands and offices responsible for OCS planning, integration, policy, and contractor-management functions. The Department of Defense's (DOD) geographic combatant commands are improving efforts to collect operational contract support (OCS) issues from operations and exercises needed to develop lessons learned, but the military services are generally not collecting them. Currently, four of the six geographic combatant commands—U.S. Africa Command, U.S. Central Command, U.S. Northern Command, and U.S. Southern Command—have identified OCS as a critical capability in their joint training plans and have incorporated it into planning, execution, and assessment of exercises, while U.S. European Command and U.S. Pacific Command continue to make progress doing so. However, with the exception of the Army, the military services and their component commands are not generally collecting OCS issues to develop lessons learned. Officials from the Air Force, Marine Corps, and Navy stated that the lack of OCS awareness caused by not having (1) service-wide guidance on collecting OCS issues and (2) an OCS training requirement for senior leaders hinders their ability to develop lessons learned. 
Without guidance and a training requirement for senior leaders to improve OCS awareness, it will be difficult for DOD to ensure consistent collection of OCS issues and build on efficiencies that the services have identified to adequately plan for the use of contractor support. DOD has made progress resolving some OCS issues, but does not have a focal point for integrating OCS issues identified through the Joint Lessons Learned Program (JLLP). The combatant commands and services are to use the JLLP to develop lessons learned related to joint capabilities from operations and exercises to improve areas such as doctrine and training. Currently, there are multiple organizations across DOD that are working on separate and sometimes disjointed OCS lessons-learned efforts. DOD has undertaken initial efforts to assign an OCS joint proponent with lessons-learned responsibilities. A joint proponent is an entity intended to lead collaborative development and integration of joint capability. However, DOD has not determined whether the joint proponent will be responsible for providing formal oversight and integration of OCS issues from the JLLP. As it develops the joint proponent, including such roles and responsibilities will help better position DOD to integrate all OCS issues from the JLLP, thereby addressing any gaps in its efforts. DOD organizations do not consistently use the Joint Lessons Learned Information System (JLLIS) to share OCS issues and lessons learned due to the system's limited functionality. JLLIS is the JLLP's system of record and is to facilitate the DOD-wide collection and sharing of lessons learned. However, GAO found that geographic combatant commands and the Army use JLLIS to varying degrees. Further, DOD is generally not sharing OCS lessons learned in JLLIS because the system is not functional for users searching OCS issues due to, among other reasons, not having an OCS label and not having a designated location for sharing OCS lessons learned. 
JLLIS's limited functionality impedes information sharing department-wide. Until DOD improves the functionality of JLLIS, it will be difficult for users to search for OCS issues, and DOD runs the risk of not being able to systematically track and share OCS lessons learned department-wide, which could negatively affect joint force development and readiness. GAO recommends, among other things, that DOD and the services (1) issue service-wide OCS lessons-learned guidance; (2) establish an OCS training requirement for senior leaders; (3) ensure the planned OCS joint proponent's roles and responsibilities include integrating OCS issues from the JLLP; and (4) improve JLLIS's functionality. DOD concurred with three of these recommendations, but partially concurred with the third recommendation, stating the need to first evaluate its courses of action before establishing such a proponent. GAO believes this recommendation is still valid, as discussed in the report.
Compared with the traditional Medicare FFS program, HMOs typically cost beneficiaries less money and cover additional benefits. In addition to covering all Medicare part A and part B benefits, advantages of Medicare HMOs typically include low or no monthly premiums, expanded benefit coverage, and reduced out-of-pocket expenses. In effect, the HMO often acts much like a Medicare supplemental policy (Medigap insurance) by covering deductibles, coinsurance, and additional services. On the other hand, beneficiaries may be reluctant to enroll in HMOs because they give up their freedom to choose any provider. If a beneficiary enrolled in an HMO seeks nonemergency care from providers other than those designated by the HMO or seeks care without following the HMO’s referral policy, the beneficiary is liable for the full cost of that care. In addition, beneficiaries may be reluctant to drop Medigap coverage and enroll in an HMO because it may be difficult to obtain supplemental insurance later at a reasonable price if they return to FFS. Because the elderly face a higher risk of serious illness, they may prefer to remain in the FFS program to take advantage of the ability to visit any provider or maintain their relationships with current providers. Medicare HMOs have enrollment procedures that reflect beneficiaries’ freedom to move between the FFS program and HMO plans. Medicare rules allow beneficiaries to select any of the federally approved HMOs in their area and to switch plans or to return to the FFS program monthly. Beneficiaries who otherwise would be reluctant to try an HMO know they can easily leave if a plan does not meet their expectations. Because of this freedom to change plans every 30 days, disenrollments can indicate enrollee dissatisfaction with an HMO. Beneficiaries can also shift to HMOs to get specific benefits when needed and then disenroll with ease to return to FFS. 
Because enrolling more beneficiaries enables HMOs to spread their risk and better ensure profitability, recruiting or retaining beneficiaries in a plan is important. HMOs’ marketing strategies often call attention to the size and geographic scope of the provider network and the quality of physicians in the network. However, as we have previously reported, some HMO sales agents have misled beneficiaries or used otherwise questionable sales practices to attract new enrollees. For a number of reasons, it would be expected that beneficiaries with chronic conditions would be drawn to HMO plans. HMOs have the potential to provide a range of integrated services required by such people. Ideally, HMO providers should have the flexibility to treat patients with chronic conditions or refer them to an appropriate mix of medical and nonmedical services. They have a financial incentive for keeping people healthy and as fully functioning as possible. To avoid use of emergency room and costly acute-care services, HMOs often emphasize prevention services that address the development or progression of disease complications. The combination of more extensive benefits and lower costs was evident in the benefit packages offered by the five largest California Medicare HMOs (accounting for 83 percent of the state’s enrollment). In 1994, these plans offered zero to $30 monthly premiums; hospital coverage in full with unlimited days; physician and specialist visits with a copayment of $5 or less; emergency room care, in or out of the area, with a copayment of $5 to $50 (waived if admitted to the hospital); coverage for preventive health services, including an annual exam, eye glasses, routine eye and hearing tests, and health education; outpatient pharmacy coverage in three of the five plans, with copayments of $5 to $7 per prescription and an annual cap from $700 to $1,200; and outpatient mental health services with a copayment of $10 to $20 per visit, in most cases. 
Despite these extra benefits of HMOs, California Medicare beneficiaries with chronic conditions were less likely to enroll in an HMO than beneficiaries without any of the selected conditions. As a result, the new enrollee group had, on the whole, better health status than those who stayed in FFS. HMO enrollment typically involves only a fraction of FFS beneficiaries each year. Between January 1993 and December 1994, 16.4 percent of the beneficiaries in our decision-making cohort enrolled in an HMO. But beneficiaries with a single chronic condition were 19 percent less likely to join an HMO than those without any of the selected conditions, and those with multiple chronic conditions enrolled at a rate 27 percent below those with none of the conditions. One reason beneficiaries with chronic illnesses may be reluctant to enroll in an HMO is that they are more likely than beneficiaries without chronic conditions to have established provider relationships. In addition, because HMOs require that a primary care physician or “gatekeeper” decide when a patient needs a specialist or hospitalization, these beneficiaries may be particularly concerned about their access to specialty providers. Beneficiaries diagnosed with chronic conditions may prefer to remain in the FFS program to take advantage of the ability to visit any provider or to maintain relationships with current providers. Within each health status group, HMO enrollment rates declined with age. This may indicate that younger seniors are more familiar with HMOs and thus less reluctant to try them or that they have less severe medical problems and are more willing to switch physicians, if necessary. Reflecting both age and health status, beneficiaries over 85 years old who had multiple chronic conditions enrolled at about half the rate of those aged 65 to 69 without any of the conditions. (See table 1.) 
Comparing the two groups of beneficiaries, those who enrolled in an HMO and those who remained in FFS, we found that a larger proportion of the enrolled group had better health status. Whereas beneficiaries with none of the selected chronic conditions represented 49 percent of those staying in FFS, they represented 57 percent of the group enrolling in HMOs. Conversely, the share with multiple conditions was 26 percent greater in the group remaining in FFS than in the group joining an HMO. (See table 2.) Among the 12 California Medicare HMOs receiving the largest number of new enrollees from FFS, the health status of most plans’ new enrollees resembled aggregate patterns. However, at one plan, 22.2 percent of its new enrollees had two or more selected chronic conditions. At another plan, 8.6 percent of its new enrollees had two or more chronic conditions. Not only were the enrollment rates for beneficiaries with chronic conditions lower than the rates for those with none of the selected conditions, but the prior costs of those who enrolled were substantially less than the costs of those who remained in FFS. As a result, the average cost of new enrollees was nearly one-third below the cost of FFS beneficiaries who did not enroll. New enrollees with chronic conditions are potential heavy users of expensive health care services in HMOs. Preenrollment data indicate that new enrollees with the selected chronic conditions had considerably higher FFS costs than those without one of the chronic conditions. On average, 1992 FFS costs for new enrollees were more than twice as high for beneficiaries with a single chronic condition compared with persons with none. Having multiple chronic conditions dramatically increased the prior cost of care among new enrollees, rising to 7 times the per capita costs of persons with none of the conditions. Even when the age of the beneficiary was taken into account, those with more than one chronic condition had substantially higher costs. 
For example, the 1992 average monthly FFS cost for new enrollees 70 to 74 years old ranged from $74 for individuals with none of the selected conditions to $565 for those with two or more conditions. (See table 3.) The enrollment patterns show that Medicare HMOs attracted people who needed less costly medical care. Beneficiaries who enrolled in an HMO in 1993 or 1994 had substantially lower 1992 costs compared with those who remained in FFS during that period. As a group, new enrollees cost 29 percent less than those who did not join an HMO. This pattern of drawing new HMO enrollees from FFS beneficiaries with low costs held true for each of the health status categories. The differences in prior costs ranged from 31 percent among those with no chronic conditions to 16 percent for those with multiple chronic conditions. (See table 4.) Medicare beneficiaries voluntarily disenroll from HMOs for a variety of reasons. A 1996 Mathematica Policy Research, Inc., survey found that disenrollees to FFS who had been in their plan for 6 months or less were more likely than longer-term stayers to cite as their reasons for disenrolling dissatisfaction with the choice of primary care physicians, a misunderstanding of HMO rules, and an inability to obtain appointments when needed. High early disenrollment rates may reflect beneficiaries’ lack of familiarity with the HMO concept. For example, a beneficiary may realize only after joining a plan that it does not pay for care from an out-of-network provider. These early disenrollees were more likely to return to FFS Medicare, while beneficiaries who disenrolled after a longer period were more likely to join other risk plans. Early disenrollees to FFS were a small group relative to all new enrollees. The vast majority of new enrollees, 91.5 percent, were still enrolled in their HMO 6 months after joining their plan. Within this brief period, 6 percent returned to FFS and 2.5 percent switched to another HMO. 
New HMO enrollees with chronic conditions rapidly disenrolled and returned to FFS at higher rates than healthier new enrollees. The early disenrollment rates were highest among those with multiple chronic conditions, which might indicate greater access barriers and less satisfaction with HMOs for such beneficiaries. Those with two or more of the selected conditions disenrolled at a rate more than twice that of new enrollees with none of the conditions. Also, a greater proportion of older seniors disenrolled than younger beneficiaries, regardless of health status. (See table 5.) In the 12 plans enrolling most of the new enrollees, the early disenrollment rates for beneficiaries in each health status group exhibited a fairly consistent pattern. At most plans, beneficiaries with two or more of the selected chronic conditions disenrolled at about twice the rate of new enrollees with none of the conditions. However, the disenrollment rates for new enrollees with no chronic conditions ranged from 1.8 percent to 15.4 percent. For beneficiaries with two or more of the selected conditions, disenrollment rates varied even more widely, from 3.3 percent at one plan to 34.4 percent at another. Taking the enrollment and disenrollment rates together, we found that the beneficiaries who were least likely to enroll in an HMO were also those who were most likely to disenroll early. For example, among beneficiaries 70 to 74 years old with multiple chronic conditions, 13.8 percent enrolled in an HMO and 10.0 percent of those beneficiaries disenrolled early. This compares with 18.6 percent and 4.2 percent, respectively, for beneficiaries of the same age group with none of the conditions. This pattern of early disenrollment accentuates the health status differences between those who joined an HMO and those who remained continuously enrolled in FFS. Most of the disenrollees returning to FFS, 58 percent, had at least one of the selected chronic conditions. 
The group that stayed on in their HMO had better health status, with 42 percent having a chronic condition. (See table 6.) The higher early disenrollment rate for those with multiple chronic conditions reinforces the cost implications of the underrepresentation of beneficiaries with chronic conditions among new enrollees. Disenrollment appears to winnow many of the highest cost beneficiaries out of the newly enrolled HMO population, widening the gap between FFS and managed care. Prior Medicare expenditures for early disenrollees ranged from $132 per month for those with none of the selected conditions to $690 for those with multiple conditions (see table 7). Costs generally increased with age for beneficiary groups with none or one of the selected chronic conditions. However, among disenrollees with multiple conditions, younger seniors had the highest costs. Compared with the prior costs of new enrollees (shown in table 3), the disenrollees’ prior costs were higher in every health status group. On average, 1992 costs were 66 percent higher for early disenrollees than for new enrollees. Comparing the two groups of beneficiaries, those who disenrolled early also had substantially higher 1992 costs than those remaining in their HMO. This was true for all the health categories. The weighted average cost for beneficiaries who returned to FFS was 79 percent more than that for those who stayed on in an HMO. (See table 8.) The low prior costs of those who enrolled in an HMO and remained there for more than 6 months are in sharp contrast to costs for those who stayed in FFS continuously for the 24-month period (as shown in table 4). Longer-term HMO enrollees had far lower preenrollment costs than the FFS stayers, with cost differences ranging from 20 percent lower among beneficiaries with multiple chronic conditions to 34 percent lower for those with none of the conditions. 
Compared with healthier beneficiaries, California Medicare beneficiaries with selected chronic conditions were less likely to enroll in HMOs and more likely to rapidly disenroll from HMOs. This pattern was evident despite the fact that California HMOs’ coverage of more services (particularly preventive care and prescription drugs) with less cost-sharing would be expected to attract beneficiaries with chronic conditions. Furthermore, the debate about the better health status of HMO enrollees hinges on a subtle point, but one that has significant cost implications. That is, beneficiaries grouped within health status categories—the presence of zero, one, or multiple chronic conditions—incur a range of costs depending on the severity of their chronic condition(s) or the presence of other conditions (not accounted for in this analysis). Those at the low end tend to be the new HMO enrollees, whereas those at the high end are likely to remain in FFS. Thus, this study helps explain a pattern of favorable selection in California Medicare HMOs despite the presence of some new enrollees with chronic conditions. We provided copies of a draft of this report to health care analysts at HCFA, the Physician Payment Review Commission, and the Prospective Payment Assessment Commission. They generally agreed with the information presented and offered some technical suggestions that we incorporated where appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to interested parties and make copies available to others on request. Please call me on (202) 512-7119 if you or your staff have any questions. Other major contributors to this report include Rosamond Katz, Robert Deroy, and Rajiv Mukerji. 
This appendix describes our (1) scope and data sources, (2) methodology for identifying Medicare fee-for-service (FFS) beneficiaries with selected chronic conditions, and (3) methodology for analyzing the health maintenance organization (HMO) enrollment and disenrollment patterns of FFS beneficiaries. Our study is an analysis of HMO enrollment and disenrollment patterns in 14 counties in California from January 1993 through June 1995. We chose California because it has been the hub of Medicare HMO activity nationwide. In 1995, over 40 percent of all Medicare beneficiaries enrolled in risk contract HMOs resided in the state. California had 32 HMOs with Medicare risk contracts, including 5 of the nation’s 7 plans that had the largest number of beneficiaries enrolled. We selected California counties where opportunities for enrollment were not limited by HMO participation. The 14 counties included in our study each had at least one risk contract HMO operating within its boundaries, and 10 counties had two or more Medicare HMOs. In addition, all of the counties had over 1,000 Medicare beneficiaries enrolled in risk contract HMOs and together accounted for 99.2 percent of California risk contract HMO enrollment. As a result of substantial HMO enrollment growth, several of these counties had high Medicare HMO market penetration rates (the proportion of Medicare beneficiaries enrolled in an HMO) in 1994: San Bernardino (47 percent), Riverside (47 percent), San Diego (42 percent), and Orange (36 percent). We used the Health Care Financing Administration’s (HCFA) Enrollment Database (EDB) file to select a cohort of FFS beneficiaries who lived in the 14-county area in December 1992. The EDB is the repository of enrollment and entitlement information for anyone ever enrolled in Medicare. It contains information on a beneficiary’s age, sex, entitlement status, state and county of residence, and HMO enrollment history. 
To focus on the enrollment behavior of people who had no recent HMO experience, we identified beneficiaries who were eligible for Medicare part A and part B for all of 1992 but were not in an HMO at any point during that year. We further narrowed the cohort by excluding patients with end-stage renal disease and those entitled to Medicare benefits because they were disabled and under 65 years old. We used HCFA’s Standard Analytic Files (SAF) to determine Medicare’s payments for each FFS beneficiary. The SAFs contain final action claims data for various types of Medicare-covered services, including inpatient hospital, outpatient, home health agency, skilled nursing facility, hospice, physician/supplier, and durable medical equipment. We obtained expenditure information from the “payment amount” portion of the claim and added pass-through and per diem expenses to the payment amount for inpatient claims. From the claim files, we computed 1992 monthly average expenditures for each beneficiary enrolled in FFS throughout 1992. Individual expenditure information was combined with EDB data to produce a single enrollment and expenditure file containing information on 1,270,554 California FFS Medicare beneficiaries. We also used claims information contained in the SAFs to determine the health status of each beneficiary, as measured by the presence or absence of any of five chronic conditions; that is, whether a claimant had been diagnosed with zero, one, or two or more of the chronic conditions. The chronic conditions included in this analysis were diabetes mellitus, ischemic heart disease, congestive heart failure, hypertension, and chronic obstructive pulmonary disease. These five conditions were identified by Medicare officials as ranking among the most highly prevalent in the elderly population and generating the highest costs to the program. 
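The per-beneficiary cost computation described above can be sketched as a small routine. The claim field names (`payment`, `pass_through`, `per_diem`, `type`) are illustrative stand-ins for the SAF layout, not the actual file format.

```python
def monthly_average_cost(claims):
    """Average 1992 monthly Medicare expenditure for one FFS beneficiary.

    `claims` is a list of dicts with a 'payment' field and, for inpatient
    claims, 'pass_through' and 'per_diem' add-ons (field names illustrative,
    not the actual SAF record layout).
    """
    total = 0.0
    for claim in claims:
        # Expenditures come from the "payment amount" portion of the claim.
        total += claim["payment"]
        if claim.get("type") == "inpatient":
            # Pass-through and per diem expenses are added for inpatient claims.
            total += claim.get("pass_through", 0.0) + claim.get("per_diem", 0.0)
    # The cohort was enrolled in FFS throughout 1992, so divide by 12 months.
    return total / 12
```

Summing this value over the combined enrollment and expenditure file would reproduce the kind of per-beneficiary figures reported in the tables, under the stated assumptions about the record layout.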
For each cohort beneficiary, we screened 1991 and 1992 inpatient, outpatient, skilled nursing facility, home health agency, and physician/supplier claims for diagnoses (3-digit ICD-9 codes) related to the five chronic conditions. A beneficiary was classified as having a given chronic condition if he or she had one or more hospital claims with a diagnosis of any of the five chronic conditions, two or more other claims with the diagnosis of diabetes mellitus or chronic obstructive pulmonary disease, or three or more other claims with the diagnosis of hypertension, ischemic heart disease, or congestive heart failure. We then summarized the information for each beneficiary to determine if he or she had zero, one, or two or more chronic conditions. We analyzed information contained in the EDB to determine the cohort’s HMO enrollment patterns from January 1993 to December 1994. For each beneficiary, there were four possible occurrences: death, change of residence (out of county), enrollment in an HMO, or 24 months of continuous enrollment in FFS. If the first occurrence for any beneficiary was death or a move, we excluded those beneficiaries from further analysis. During the period, the proportion who died was 6.2 percent for those with none of the selected conditions, 9.6 percent for those with one condition, and 18.6 percent for those with two or more conditions; the percentage who moved was about 5 percent for each health status group. Excluding beneficiaries who died or moved during the 2-year period reduced the size of the cohort to 1,074,819 beneficiaries. We then calculated their 1992 average monthly FFS expenditures, by number of chronic conditions and age group, and the proportion of the remaining beneficiaries that enrolled in an HMO. 
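The classification rules above amount to a simple per-beneficiary algorithm, sketched below. The claim-record shape and the condition labels are hypothetical stand-ins for the actual claim files and 3-digit ICD-9 groupings.

```python
# Sketch of the chronic-condition classification rules described above.
# Condition labels and the (claim_type, condition) record shape are
# illustrative, not the actual SAF layout or ICD-9 code groupings.
from collections import Counter

HOSPITAL = "inpatient"
TWO_CLAIM_CONDITIONS = {"diabetes", "copd"}               # 2+ non-hospital claims qualify
THREE_CLAIM_CONDITIONS = {"hypertension", "ihd", "chf"}   # 3+ non-hospital claims qualify

def classify(claims):
    """Return 0, 1, or 2 (meaning 'two or more') selected chronic conditions.

    `claims` is a list of (claim_type, condition) pairs for one beneficiary,
    already limited to diagnoses mapped to the five selected conditions.
    """
    conditions = set()
    other = Counter()
    for claim_type, condition in claims:
        if claim_type == HOSPITAL:
            # One hospital claim with a qualifying diagnosis is sufficient.
            conditions.add(condition)
        else:
            other[condition] += 1
    for condition, n in other.items():
        if condition in TWO_CLAIM_CONDITIONS and n >= 2:
            conditions.add(condition)
        elif condition in THREE_CLAIM_CONDITIONS and n >= 3:
            conditions.add(condition)
    return min(len(conditions), 2)
```

The higher claim thresholds for the non-hospital settings reflect the report's stated rules, which require more corroborating claims for the conditions most often coded incidentally.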
This 24-month requirement made our pool of potential enrollees a somewhat healthier group than otherwise, and therefore, our estimates of HMO enrollment rates were more favorable than if this requirement were not a criterion for inclusion. Also, because people in their last 12 months of life have costs that are significantly higher than those of other Medicare beneficiaries, the 1992 average costs for those who stayed in FFS were lower, and their health status better, than they would have been if a less stringent criterion were used. To determine the early disenrollment rates, we tracked those beneficiaries who joined an HMO (175,951) for 6 months after they enrolled using January 1993 to June 1995 EDB information. Disenrollments may occur for administrative reasons (the individual died or moved out of the HMO’s service area) or voluntarily (to return to FFS or switch to another HMO). We excluded from further analysis those beneficiaries who disenrolled for administrative reasons, leaving a cohort of 14,455 who voluntarily disenrolled within 6 months. We then calculated the proportion of beneficiaries who chose to return to FFS and their 1992 average monthly FFS expenditures, for each health status and age group. We conducted our review of enrollment and disenrollment patterns between April 1996 and June 1997 in accordance with generally accepted government auditing standards. Chronic conditions may begin in middle age but often progress in terms of severity of symptoms and the degree to which they limit a person as the person ages. Many people with any kind of a chronic condition have more than one condition to manage, further adding to their health care burden. Those who are chronically ill have substantially higher utilization of health care services, accounting for a large share of emergency room visits, hospital admissions, hospital days, and home care visits. 
This appendix presents 1992 data on the proportion of California FFS beneficiaries who had selected chronic conditions and how their costs compared with those of beneficiaries without the conditions. In 1992, about 660,000, or one-half, of the elderly Californians in our cohort were identified as having diabetes, ischemic heart disease, congestive heart failure, hypertension, or chronic obstructive pulmonary disease. Of these, about 40 percent had more than one of these chronic conditions. As shown in table II.1, the prevalence of these conditions is greatest among the oldest of the elderly. For example, for those over 75 years old, one in three beneficiaries had a single chronic condition and at least one in four had two or more of these chronic conditions. There were substantial cost differences between beneficiaries who had none, one, or several of the selected conditions. The average cost for a beneficiary with multiple chronic conditions was over 6 times the cost for a beneficiary with none of the conditions, and more than twice the cost for a beneficiary with only one of the conditions. As shown in table II.2, even within the same age group, costs varied widely across health status groups. Medicare HMOs: HCFA Can Promptly Eliminate Hundreds of Millions in Excess Payments (GAO/HEHS-97-16, Apr. 25, 1997). Medicare HMOs: Rapid Enrollment Growth Concentrated in Selected States (GAO/HEHS-96-63, Jan. 18, 1996). Medicare Managed Care: Growing Enrollment Adds Urgency to Fixing HMO Payment Problems (GAO/HEHS-96-21, Nov. 8, 1995). Medicare: Changes to HMO Rate Setting Methods Are Needed to Reduce Program Costs (GAO/HEHS-94-119, Sept. 2, 1994). 
Pursuant to a congressional request, GAO examined a mature managed care market to determine: (1) the extent to which Medicare beneficiaries with chronic conditions enroll in health maintenance organizations (HMO); (2) whether beneficiaries with chronic conditions who enroll in HMOs are as costly as those remaining in fee-for-service (FFS) Medicare; and (3) whether beneficiaries with chronic conditions rapidly disenroll from HMOs to FFS at rates different from other newly enrolled beneficiaries. GAO noted that: (1) data on California's FFS beneficiaries who enrolled in HMOs help explain why, despite the presence of chronic conditions among new HMO enrollees, their average costs are lower than those of the average FFS beneficiary; (2) the health status of beneficiaries, as measured by the number of selected chronic conditions they have, showed significant differences between those who enrolled in an HMO and those who remained in FFS; (3) also, when comparing beneficiaries categorized by the presence of none, one, or multiple chronic conditions, new HMO enrollees tended to be the least costly in each health status group; (4) this resulted in a substantial overall cost difference between those that did and did not enroll in HMOs; (5) about one in six 1992 California FFS Medicare beneficiaries enrolled in an HMO in 1993 and 1994; (6) HMO enrollment rates differed significantly for beneficiaries with selected chronic conditions compared to other beneficiaries; (7) among those with none of the selected conditions, 18.4 percent elected to enroll in an HMO compared to 14.9 percent of beneficiaries with a single chronic condition and 13.4 percent of those with two or more conditions; (8) GAO found that prior to enrolling in an HMO a substantial cost difference, 29 percent, existed between new HMO enrollees and those remaining in FFS because HMOs attracted the least costly enrollees within each health status group; (9) even among beneficiaries belonging to either of the groups with 
chronic conditions, HMOs attracted those with less severe conditions as measured by their 1992 average monthly costs; (10) GAO found that rates of early disenrollment from HMOs to FFS were substantially higher among those with chronic conditions; (11) while only 6 percent of all new enrollees returned to FFS within 6 months, the rates ranged from 4.5 percent for beneficiaries without a chronic condition to 10.2 percent for those with two or more chronic conditions; (12) also, disenrollees who returned to FFS had substantially higher costs prior to enrollment compared to those who remained in their HMO; and (13) these data indicated that favorable selection still exists in California Medicare HMOs because they attract and retain the least costly beneficiaries in each health status group.
MDBs are autonomous international financial entities that finance economic and social development projects and programs in developing countries. All members participate in oversight and the setting of operating policies of the MDBs through their participation on the boards of governors. The MDBs primarily fund these projects and programs using money borrowed from world capital markets or money provided by governments of member countries. Because of the MDBs’ favorable credit ratings, those MDBs that borrow funds from world capital markets are able to obtain more favorable loan terms than their borrowers could otherwise negotiate. Thus, MDBs enable developing countries to access foreign currency resources on more advantageous terms than would be available to them solely on the basis of their own international credit standing. MDBs are not commercial “banks” in the traditional sense of the term because they do not seek to maximize profits and they do not take customer deposits to fund their operations. The MDBs provide assistance in the form of loans, equity investments, loan and equity guarantees, and technical assistance. The primary vehicle of development assistance is direct lending. Most loans are issued with market-based interest rates; however, some MDBs offer loans at concessional (less than current market) rates to the poorest of the developing countries. MDB loans are available in various currencies to their member countries or, in some cases, to private enterprises, for development projects in a borrowing member country. In some cases, a member country guarantees loans made to private sector enterprises within the member country and, as a result, may be held liable for any defaulted loans. 
The United States is the largest member in most of the MDBs discussed in this report, contributing significant amounts to support the missions of the MDBs and subscribing a significant amount of the MDBs’ callable capital. The Congress appropriates funds for the United States’ contributions to the MDBs. In fiscal year 2001, the Congress appropriated about $1.0 billion for the MDBs, with the largest contribution, $775 million, going to the World Bank Group’s International Development Association. During fiscal year 2001, the Congress also authorized up to $271 million of new subscriptions to the MDBs’ callable capital. The Department of the Treasury oversees the United States’ interests in the MDBs. The United States is a member of the following MDB groups, which are included in this report: (1) the World Bank Group, (2) the African Development Bank Group, (3) the Asian Development Bank, (4) the Inter-American Development Bank, and (5) the European Bank for Reconstruction and Development. In their most recent fiscal year of operations for which information was available, the MDBs we reviewed approved about $40.1 billion of development assistance consisting of loans, loan guarantees, and equity investments for economic and social development. The Latin American and Caribbean region received the largest portion of this development assistance, approximately $15.8 billion, while the Asian and Pacific region received $11.9 billion, Europe and Central Asia received $6.3 billion, and Africa received $6.1 billion. The World Bank Group accounted for about 53 percent of the MDBs’ total development assistance provided during this period, while the regional MDBs, which focus their development activities on a particular region, provided the remaining 47 percent. 
Loans with market-based interest rates, equity investments, and loan guarantees accounted for about $33.5 billion of the total financial support provided by these MDBs during the most recent fiscal years of operations, while concessional lending amounted to about $6.6 billion. We received comments of a technical nature from Treasury officials, and these comments have been incorporated in the report where appropriate. The regional MDB groups and related entities covered are the African Development Bank Group (the African Development Bank and the African Development Fund); the Asian Development Bank (Ordinary Capital Resources and the Asian Development Fund); the Inter-American Development Bank (Ordinary Capital and the Fund for Special Operations); and the European Bank for Reconstruction and Development. For our report, we analyzed and compiled information from the MDBs’ annual reports and their audited financial statements for the most recent 3 fiscal years for which information was available as of January 1, 2001. We also reviewed the MDBs’ Articles of Agreement. Other data sources included the Congressional Research Service and Standard & Poor’s. The Standard & Poor’s information we used relates to the credit ratings of several MDBs as of September 2000, criteria used to assess several of the MDBs, credit ratings of various borrowing members, and credit rating definitions. This information was used with the permission of Standard & Poor’s. The most recent fiscal years of operations for which data were available from the annual reports and audited financial statements of the regional MDBs and the related entities of the World Bank Group were for the years ending December 31, 1999, and June 30, 2000, respectively. Our work focused on the MDBs and the related entities listed above; our work did not cover the other special funds operated by the MDBs. To the extent possible, we used data audited by the MDBs’ external auditors. For comparability, we converted financial data from the African Development Bank Group and the European Bank for Reconstruction and Development to the U.S. dollar equivalent. 
When calculating the financial information provided for the MDBs, we made the following adjustments to data items: Loans outstanding include disbursed loans and equity investments, except where noted. Net disbursed loans represent loans outstanding less the estimated loan loss allowance. Nonaccrual loans include the principal portion only. Undisbursed loans include loans that are committed but not yet disbursed. Paid-in capital excludes amounts not yet due and receivables from members related to subscribed capital. Because MDBs follow different accounting standards to prepare financial data, use different methodologies to estimate loan losses, and have different policies relating to nonaccrual loans, caution must be taken when comparing financial results. Further, MDBs serve different purposes and borrowers, which also affects the comparability of financial data. We conducted our work in Washington, D.C., from January 2001 through April 2001 in accordance with generally accepted government auditing standards. On May 11, 2001, we received comments from cognizant Treasury officials and have incorporated those views and other technical suggestions into our report, where appropriate. Multilateral Development Banks (MDB) are autonomous international financial entities that finance economic and social development projects and programs and provide technical assistance in developing countries primarily using money borrowed from world capital markets or contributed by governments of developed countries. Governments are the shareholders—referred to as members—of the MDBs. MDB members include developing countries that borrow from the MDB as well as industrialized member countries. Through their participation on the boards of governors all members, including borrowing members, contribute to the capital of the MDBs and participate in oversight and in the setting of operating policies. 
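The adjustments above reduce to a few simple definitions, sketched below with hypothetical figures; the parameter names are illustrative, not the MDBs' own line items.

```python
def adjusted_figures(disbursed_loans, equity_investments, loan_loss_allowance,
                     paid_in_subscribed, not_yet_due, member_receivables):
    """Apply the data adjustments described above (all names illustrative)."""
    # Loans outstanding include disbursed loans and equity investments.
    loans_outstanding = disbursed_loans + equity_investments
    # Net disbursed loans: loans outstanding less the estimated loan loss allowance.
    net_disbursed_loans = loans_outstanding - loan_loss_allowance
    # Paid-in capital excludes amounts not yet due and receivables from members
    # related to subscribed capital.
    paid_in_capital = paid_in_subscribed - not_yet_due - member_receivables
    return loans_outstanding, net_disbursed_loans, paid_in_capital
```

Because the MDBs follow different accounting standards and loan loss methodologies, even figures adjusted this way are only roughly comparable across institutions, as the text cautions.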
The ability of the MDBs to borrow funds at more favorable loan terms than the loan recipients could otherwise negotiate gives developing countries access to foreign currency resources on more advantageous terms than would be available to them solely on the basis of their own international credit standing. Several of the MDBs have created separate entities or funds to carry out specific types of development assistance, such as market-based lending or private sector investments. Table 1 shows each MDB group and its related entities. The World Bank Group draws its membership from developing and industrialized countries around the world. It includes four institutions, each providing a different function in carrying out the World Bank Group’s mission of fighting poverty and improving the standard of living in developing countries throughout the world. The International Bank for Reconstruction and Development (IBRD)—the oldest and, based on total assets, largest MDB—is the World Bank entity that provides market-based loans, guarantees, and technical assistance to middle-income member countries and more creditworthy poorer member countries. The International Development Association is the World Bank’s concessional lending arm that provides key support for the bank’s poorer members. The International Finance Corporation provides loans, equity investments, and technical assistance for private sector enterprises. The Multilateral Investment Guarantee Agency provides guarantees to foreign investors against loss caused by noncommercial risks, as well as technical assistance to host governments. The remaining four MDB groups are referred to as regional development banks because they focus their development activities on a particular region. The regional development banks’ membership consists of developing or borrowing countries within a particular region of the world plus industrialized member countries located throughout the world. 
Four regional MDBs—and their lending arms that provide concessional lending or equity investments—included in this report are described below. The African Development Bank Group includes the following two entities, which serve the development needs of Africa. The African Development Bank provides market-based loans, equity investments, loan guarantees, and technical assistance to the public and private sector. The African Development Fund is the concessional lending arm that provides technical assistance and key support for the bank’s poorer members. The Asian Development Bank includes the following two operational lending arms, which serve the development needs of the Asian and Pacific regions. The Asian Development Bank’s Ordinary Capital Resources provides market-based loans, equity investments, and loan guarantees, and it indirectly provides technical assistance to middle-income countries and creditworthy poorer countries. The Asian Development Fund is the concessional lending arm that provides technical assistance and key support for the bank’s poorer members. The Inter-American Development Bank group includes the following three lending arms, which serve the development needs of Latin America and the Caribbean. The Inter-American Development Bank’s Ordinary Capital provides market-based loans, guarantees, and technical assistance to the public and private sectors. The Inter-American Development Bank’s Fund for Special Operations is the concessional lending arm that provides technical assistance and key support for the bank’s poorer members. The Inter-American Investment Corporation is the Inter-American Development Bank group’s entity that provides loans, equity, and technical assistance to small and midsize private enterprises. The European Bank for Reconstruction and Development serves central and eastern Europe and the Commonwealth of Independent States. 
It provides development assistance through market-based loans, cofinancing, loan guarantees, equity investments, and technical assistance to the public and private sectors. The MDBs included in our analysis approved about $40.1 billion of financial assistance to developing countries during the most recent fiscal year of operations for which data were available as of January 1, 2001. The Latin American and Caribbean region received the largest portion of this development assistance with approximately $15.8 billion, while the Asian and Pacific region received $11.9 billion, Europe and Central Asia received $6.3 billion, and Africa received $6.1 billion. The World Bank Group accounted for about 53 percent of the development assistance during this period, while the regional MDBs provided the remaining 47 percent. Loans with market-based interest rates, equity investments, and loan guarantees accounted for about $33.5 billion of the MDBs’ total financial support provided during the most recent fiscal years of operations, while concessional lending amounted to about $6.6 billion. The MDBs provide assistance in the form of loans, loan and investment guarantees, equity investments, and technical assistance. The primary vehicle for development assistance is market-based lending to member countries. Most loans are issued with market-based interest rates. However, as indicated in table 2, four of the five MDB groups offer loans at concessional rates, which are generally between zero and 4 percent. Concessional lending is provided to the poorest of the developing countries. Table 2 summarizes the type of development assistance provided by each of the MDB groups and their related entities. MDBs issue loans, in various currencies, to the sovereign member countries and private sector enterprises for development projects within borrowing member countries. 
In some cases, a member country guarantees loans made to private sector enterprises for projects within the member’s country, and as a result, the member itself may be held liable for defaulted loans. MDBs have lending policies that state that no further loan disbursements will be granted to borrowing members if any of the MDB loans to or guaranteed by the member country are in default. This gives the borrowing or guaranteeing member strong incentives to maintain timely loan repayments to the MDBs. Member countries must ensure that loans they have guaranteed also remain current. Therefore, the MDB is given preferred creditor status by member governments. Generally, this preferential status does not affect the MDBs’ loans to the private sector when there is no guarantee by a member country. Operations of MDBs that provide loans with market-based rates are financed primarily through borrowings from world capital markets, members’ paid-in capital, and retained earnings. Members also provide capital through subscriptions to callable capital, which resemble promissory notes from member countries to honor MDB debts if the MDB cannot otherwise meet its obligations through its other available resources. Calls on this type of capital are uniform based on all callable capital shares outstanding. Since member countries make payments independent of each other, if the amount received on a call were insufficient to meet the obligations for which the call was made, the MDB would make further calls until the amounts received were sufficient to meet its obligations. However, no member may be required to pay more than the unpaid balance of its total callable capital subscribed. To date, there has never been a call on this capital for any of the MDBs included in our report. Callable capital may only be used when necessary to pay obligations of the MDB; it may not be used to fund new loans. 
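The call mechanics just described—uniform pro-rata calls on callable shares, repeated calls until the obligation is covered, with no member required to pay more than its unpaid callable balance—can be sketched as follows. The member data and the `payment_rate` parameter are hypothetical, used only to illustrate the rule that members pay independently of one another.

```python
def call_callable_capital(obligation, unpaid_callable, payment_rate, max_calls=10):
    """Simulate successive uniform calls on callable capital (hypothetical data).

    `unpaid_callable` maps member -> unpaid callable subscription; each call is
    pro rata to callable shares outstanding. `payment_rate` maps member ->
    fraction of each call actually paid (members pay independently, so a call
    can come up short, triggering further calls).
    """
    collected = 0.0
    for _ in range(max_calls):
        if collected >= obligation:
            break
        total_unpaid = sum(unpaid_callable.values())
        if total_unpaid == 0:
            break  # every member has exhausted its callable subscription
        shortfall = obligation - collected
        for member, unpaid in list(unpaid_callable.items()):
            # Pro-rata share of the call, capped at the member's unpaid balance.
            asked = min(unpaid, shortfall * unpaid / total_unpaid)
            paid = asked * payment_rate.get(member, 1.0)
            unpaid_callable[member] = unpaid - paid
            collected += paid
    return collected
```

As the text notes, no such call has ever been made on any of the MDBs in the report; the value of callable capital lies in the borrowing terms it secures, not in actual collections.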
Because of the significant proportion of callable capital that is subscribed by members with strong credit ratings, including the United States, MDBs are able to use callable capital as backing to obtain very favorable financing terms when borrowing from world capital markets. This allows the MDBs to lend much more than the amount of capital paid in by members and at more advantageous terms than would be available to borrowers solely on the basis of their own credit standings. MDBs that have callable capital from members include the International Bank for Reconstruction and Development, the Multilateral Investment Guaranty Agency, the African Development Bank, the Asian Development Bank’s Ordinary Capital Resources, the Inter-American Development Bank’s Ordinary Capital, and the European Bank for Reconstruction and Development. Dealing in various currencies and borrowing funds to finance lending operations exposes these MDBs to market risk, which consists of interest rate risk and exchange rate risk. To minimize these risks, MDBs (1) match the maturities of their assets and liabilities, (2) often set and semiannually adjust interest rates on loans based on their cost of borrowing funds, and (3) attempt to match the currency composition of their lending and borrowing portfolios. In addition, because of the developmental nature of their operations, the MDBs are exposed to credit risk from lending to low- and middle-income countries, which generally have lower credit quality. Each of the MDBs has established lending policies that attempt to minimize credit risk, including the suspension of further disbursements to members that have outstanding loans that are nonperforming or in a nonaccrual status. The lending arms of the MDBs that provide concessional rate loans to the poorest of the developing countries—those meeting certain eligibility requirements—are financed through capital contributions from member countries and borrower repayments of outstanding loans. 
Due to the nature of concessional lending, these entities do not have callable capital subscriptions and do not borrow from world capital markets to finance their operations. Unlike the market-based lending arms of the MDBs, which borrow from world capital markets to fund lending, concessional lending arms rely on capital replenishments or periodic contributions by members in order to continue lending operations. As a result, these entities do not have the same interest rate risks associated with borrowing in the marketplace to fund lending as do the MDBs that provide market-based lending through leveraging high-quality callable capital. However, due to the nature of concessional lending to the poorest of the developing countries, these entities are exposed to considerable credit risks. Because of the credit risk of their borrowers and the extended maturity structure of this type of lending, the concessional lending arms discussed in this report, except for the Asian Development Fund, do not estimate allowances for possible losses related to their loan portfolios. The private sector MDB affiliates that provide equity investments rely on members’ paid-in capital contributions, as opposed to callable capital, to finance equity investment activities. However, these entities also borrow from world capital markets to finance lending operations. These affiliates include the International Finance Corporation of the World Bank Group and the Inter-American Investment Corporation. The Heavily Indebted Poor Countries (HIPC) Initiative is a debt relief program aimed at the world’s poorest nations. The World Bank Group and the International Monetary Fund proposed the HIPC Initiative in 1996 in response to a call from leaders of major industrial nations for a comprehensive approach to the debt problems of the poorest countries. Enhancements to the HIPC Initiative were implemented in 1999. 
The initiative was designed as a coordinated approach to reduce to sustainable levels the external debt burden of the most heavily indebted countries. The initiative is expected to provide $29.3 billion in debt relief to 32 eligible countries. Debt relief is linked to the support of economic and social programs designed to reduce poverty. The initiative calls for countries to prepare a comprehensive “country-owned” poverty reduction strategy before completing the program. The MDBs that participate in the HIPC Initiative include the International Bank for Reconstruction and Development, the International Development Association, the African Development Bank Group, and the Inter-American Development Bank. The participation of some multilateral institutions is financed through a trust fund administered by the International Development Association of the World Bank Group. The HIPC Trust Fund receives contributions from participating countries. The HIPC Trust Fund’s operations and assets are completely separate from those of the International Development Association. The fund can prepay or purchase a portion of debt owed to participating MDBs and cancel such debt, or it can pay debt service as it comes due. In fiscal year 2001, the Congress authorized the appropriation of $435 million for U.S. contributions to the HIPC Trust Fund during the period October 1, 2000, through September 30, 2003. An MDB’s activities are overseen through a board of governors, with a governor from each member. In general, a board of governors is responsible for admitting new members, increasing or decreasing capital, suspending members, authorizing agreements for cooperation with other international organizations, making decisions about the board of executive directors, approving the bank’s financial statements, determining the reserves and the distribution of profits, and making decisions about the scope of the MDB’s operations. 
Each of the MDBs also has a board of executive directors to whom the board of governors has delegated oversight of day-to-day operations. In general, each board of executive directors is responsible for ensuring the implementation of the decisions of the board of governors; making decisions concerning loans, guarantees, investments, technical assistance, and borrowing funds; submitting accounts to the board of governors; and approving the budget of the bank. The MDB’s daily operations are carried out by its own management and staff of international civil servants. Generally, MDBs were established pursuant to articles of agreement that specify the MDB’s purpose, operations, capital structure, and organizational policies. The articles outline the bank’s conditions for borrowing and lending activities, including the loan approval process; determine how voting shares are allocated to members (such allocations are generally based on a member’s subscribed capital or total contributions); establish the status, immunity, and privileges of the MDB; and provide for the disposition of currencies available to the bank, the withdrawal and suspension of members, and the suspension and termination of the bank’s operations. As international financial entities, MDBs are not subject to supervision or oversight by national financial regulators. The MDBs’ own boards of executive directors are responsible for setting rules to be observed by the MDBs in their operations. The MDBs included in our report received unqualified or “clean” audit opinions on their financial statements from large, international public accounting firms for the 3 most recent fiscal years. MDBs prepare their financial statements to comply with different bases of accounting. Some MDBs present their financial statements using U.S. generally accepted accounting principles (GAAP), while others use International Accounting Standards or a combination of the two. 
Due to the special nature and organization of the concessional lending arms of the MDBs, some of these entities prepare special-purpose financial statements that are meant to show the sources and uses of resources and to comply with accounting standards specific to the affiliate’s operations. The concessional lending arms that prepare special-purpose financial statements include the International Development Association, the African Development Fund, and the Inter-American Development Bank’s Fund for Special Operations. Later sections of this report present ratios for each MDB that can be used to interpret financial information related to MDB operations. Where applicable, the financial ratios are presented for each MDB. Standard & Poor’s used many of these same ratios in its evaluation of several of these MDBs. Asset quality as it relates to the MDBs generally refers to the composition of the loan portfolio and is ultimately evidenced in the record of loan payments to the MDB. Asset quality reflects the creditworthiness of borrowers and is important for assessing credit risk. Asset quality also reflects the degree of concentration of risk, which considers the impact on the loan portfolio of potential default by the largest borrowers. The preferred creditor status attributed to MDBs generally improves the quality of the MDBs’ assets. Where applicable, the following financial information is presented to assess asset quality of the MDBs. Concentration of loans to the MDBs’ five largest borrowers or countries of operation indicates the impact that an economic downturn in a specific region or country could have on the loan portfolio. Nonaccrual loans as a percentage of total loans outstanding indicates the percentage of the MDB’s loan portfolio that is currently nonperforming. 
Loan loss allowance as a percentage of total loans outstanding indicates how much the MDB estimates it will lose due to nonperformance or default as a result of credit risk related to its loan portfolio. Loan loss allowance as a percentage of total nonaccrual loans indicates the sufficiency of the MDB’s allowances for estimated losses compared to its currently nonperforming portfolio. Capital quality as it relates to the MDBs refers to the composition of capital as well as the portion of callable capital from the more creditworthy members. Where applicable, the following financial information is presented to assess the capital quality of the MDBs. Paid-in capital as a percentage of total subscribed capital indicates the portion of subscribed capital that has actually been paid by members. AAA-rated callable capital as a percentage of total callable capital indicates the quality of capital that has been subscribed but not paid in by members. The quality of callable capital is important as a gauge of the ability of members to meet a capital call in the unlikely event that an MDB cannot service its debt. Gearing ratios, as they relate to the MDBs, are measures of loans outstanding compared to the different types of capital available to the MDB. MDBs establish policies on gearing, which limit the ratio of outstanding loans to capital and reserves, to address the inherent risks of their lending activities. Where applicable, the following financial information is presented for the MDBs to use in assessing gearing. Net disbursed loans as a percentage of paid-in capital plus reserves indicates the ability to absorb borrower defaults without resorting to capital calls. Paid-in capital plus reserves is a measure of funds actually available to the MDB. Net disbursed loans as a percentage of AAA callable capital plus paid-in capital and reserves indicates the ability of the MDB to absorb defaults. 
Leverage is a measure of the MDB’s outstanding debt compared to the different types of its capital. Similar to gearing ratios, leverage limitations are established by some of the MDBs to address the inherent risks of their activities. Where applicable, the following financial information is presented for the MDBs to use in assessing leverage. Outstanding debt as a percentage of paid-in capital plus reserves provides a comparison of an MDB’s outstanding debt with its paid-in capital plus reserves for funding its operations. Outstanding debt as a percentage of AAA callable capital plus paid-in capital and reserves indicates the MDB’s ability to meet its outstanding debts with the support of the most creditworthy countries. Liquidity is a measure of an MDB’s ability to pay its current debt service and to fund its lending operations on a timely basis. The MDBs’ assets from their lending operations are relatively illiquid. Therefore, MDBs must rely on sufficient liquid assets, such as investments in marketable securities, to ensure timely payment of debt service and disbursement of new loans. Where applicable, the following financial information is presented for the MDBs to use in assessing liquidity. Liquid assets as a percentage of undisbursed loans plus 1 year of debt service indicates how well MDBs are able to meet their current obligations. The administrative expenses ratio is a measure of administrative expenses compared to total expenses. Administrative expenses as a percentage of total expenses indicates the portion of total expenses attributable to administrative activities of the MDBs. The profitability ratio measures how well an entity performed during a given time period, usually a year. Although MDBs’ primary objective is to promote economic and social development and not to maximize profits, MDBs do seek to earn adequate income to cover their operational costs and to build adequate reserves. 
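Each of the gearing, leverage, liquidity, and administrative expense measures described above is a simple quotient of balance-sheet figures. A minimal sketch for a hypothetical MDB (every amount below is an invented illustration, in billions of dollars, not data from any actual institution):

```python
def pct(numerator, denominator):
    """Express a ratio as a percentage."""
    return 100.0 * numerator / denominator

# Invented balance-sheet figures for a hypothetical MDB, in billions.
net_disbursed_loans   = 120.0
paid_in_capital       = 11.5
reserves              = 21.0
aaa_callable_capital  = 80.0
outstanding_debt      = 110.0
liquid_assets         = 25.0
undisbursed_loans     = 15.0
one_year_debt_service = 12.0
admin_expenses        = 1.5
total_expenses        = 9.0

# Gearing: loans against funds actually available, then including AAA backing.
gearing_paid_in  = pct(net_disbursed_loans, paid_in_capital + reserves)
gearing_with_aaa = pct(net_disbursed_loans,
                       aaa_callable_capital + paid_in_capital + reserves)

# Leverage: outstanding debt against the same two capital bases.
leverage_paid_in  = pct(outstanding_debt, paid_in_capital + reserves)
leverage_with_aaa = pct(outstanding_debt,
                        aaa_callable_capital + paid_in_capital + reserves)

# Liquidity: liquid assets against near-term claims on them.
liquidity = pct(liquid_assets, undisbursed_loans + one_year_debt_service)

# Administrative expenses ratio.
admin_ratio = pct(admin_expenses, total_expenses)
```

With these figures, gearing against paid-in capital plus reserves is about 369 percent but falls to about 107 percent once AAA callable capital is included, which is the sense in which high-quality callable capital lets an MDB lend well beyond its paid-in resources.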
Net operating income as a percentage of average assets is the rate of return on an MDB’s assets for the year and is a key measure of the MDB’s profitability. Because MDBs follow different accounting standards to prepare financial data, use different methodologies to estimate loan losses, and have different policies relating to nonaccrual loans, caution must be taken when comparing financial results. Further, MDBs serve different purposes and borrowers, which also affects the comparability of financial data. Credit ratings are useful for assessing the MDB itself as well as its borrowers and members. Credit ratings apply to the MDBs insofar as they borrow in the marketplace to finance their lending activities. The credit ratings of borrowing countries also apply to MDBs insofar as the creditworthiness of an MDB’s borrowers affects the MDB’s loan portfolios. A credit rating is intended to convey a current evaluation of a borrower’s overall creditworthiness to pay its financial obligations. The borrower’s capacity and willingness to pay its obligations are taken into account in establishing a rating. Table 3 summarizes Standard & Poor’s long-term credit ratings. Borrowers rated AAA and AA are considered to have a very strong capacity for meeting financial commitments. Borrowers rated A and BBB have a strong-to-adequate capacity to meet financial commitments but with some susceptibility to adverse changes in circumstances. Borrowers rated BB, B, CCC, and CC have uncertainties and vulnerabilities in their abilities to meet their financial obligations, with BB indicating the least degree of uncertainty and vulnerability within that range and CC, the highest. BB through CC borrowers will likely have some mitigating qualities and protective characteristics; however, these may be outweighed by great uncertainties or major exposures to adverse conditions. 
Ratings from AA to CCC may be modified with a plus (+) or minus (-) sign to show the relative standing within the major rating categories. Despite the risks associated with the MDBs’ operations, many factors contribute to the generally high credit ratings several of the MDBs received from Standard & Poor’s. These factors include capital contributions and subscriptions from highly rated members, preferred creditor status attributed to obligations to the MDBs by borrowing members, and the MDBs’ overall conservative financial policies. Table 4 shows the credit ratings for six MDBs that primarily borrow from world capital markets to finance their lending activities. The Congress appropriates funds for U.S. contributions to the MDBs. The United States has provided a total of $40 billion to the MDBs included in this report. The largest cumulative U.S. contribution, $25.8 billion, has been made to the International Development Association. The United States has also subscribed to callable capital of $68.8 billion to the MDBs included in this report. The largest subscriptions of callable capital have been to the International Bank for Reconstruction and Development and the Inter-American Development Bank’s Ordinary Capital for $29.9 billion and $29 billion, respectively. For each of the MDBs included in our report, table 5 summarizes resources the United States has provided to the MDBs from inception through December 31, 1999, except for the MDBs affiliated with the World Bank Group, whose data are as of June 30, 2000. The United States is the largest member in most of these MDBs. Tables 6 and 7 provide a summary of U.S. appropriations for contributions and approved callable capital subscriptions to the MDBs for fiscal years 1996 through 2001. The Department of the Treasury oversees U.S. interests in the MDBs. The Secretary of the Treasury currently serves as the U.S. governor on each MDB board of governors. 
The United States has executive directors who are presidentially appointed and confirmed by the U.S. Senate and who serve full-time on each MDB board of executive directors. Over time, the Congress has enacted many legislative policy mandates with respect to the MDBs. Many of the mandates direct the Secretary of the Treasury to instruct the U.S. executive directors to use their “voice” and “vote” to pursue certain U.S. policies. These mandates, addressing a variety of issues, specify what U.S. policy shall be in particular situations or how the U.S. executive directors shall vote on particular issues. Voting shares of the MDBs are allocated to member countries based primarily on capital subscriptions or contributions but may also be affected by requirements established in some of the regional MDBs’ articles that allow regional member countries to maintain a certain level of control over operations. Table 8 summarizes the U.S. voting percentages at each MDB. The following sections present more details on the MDBs and their related entities to which the United States has provided resources. These sections include summary information on each MDB’s background and mission, development activities, financing, and key financial data from the last 3 fiscal years. The World Bank Group’s members include developing and industrialized countries around the world. The group includes four institutions, or lending arms, which perform different functions in carrying out its mission of fighting poverty and improving the standard of living in developing countries throughout the world. These four institutions, discussed in more detail in the remainder of this section, are as follows: The International Bank for Reconstruction and Development (IBRD)—the oldest and, based on total assets, the largest MDB—is the World Bank entity that provides market-based loans, guarantees, and technical assistance to middle-income countries and more creditworthy poorer countries. 
The International Development Association is the World Bank’s concessional lending arm and provides support for the bank’s poorer member countries. The International Finance Corporation provides loans, equity investments, and technical assistance for private sector enterprises. The Multilateral Investment Guaranty Agency provides guarantees to foreign investors against loss caused by noncommercial risks, as well as technical assistance to host governments. IBRD is a member of the World Bank Group and was established in 1945. The principal stated purpose of IBRD is to reduce poverty by promoting sustainable economic development. IBRD seeks to achieve this goal by providing loans, guarantees, and technical assistance for projects and programs of economic reform to its developing member countries, which are primarily middle-income and creditworthy poorer countries not limited to a specific region of the world. As reported in its fiscal year 2000 financial statements, IBRD had 181 members, of which 98 were borrowing countries. IBRD does not operate to maximize profits but seeks to earn adequate net income to ensure its financial strength and to support its development activities. The primary vehicle of IBRD development assistance is direct lending. All of IBRD’s loans are made to or guaranteed by members, except for loans to the World Bank Group’s International Finance Corporation. IBRD offers several different loan products in a variety of currencies to meet the needs of its borrowers. In general, IBRD charges interest rates that are based on either IBRD’s average cost of borrowing plus a spread or the London Interbank Offer Rates (LIBOR) plus a spread. Repayment periods and grace periods depend on the type of loan or the borrower’s repayment abilities. The choice in financial terms is intended to provide borrowers with the flexibility to select terms that are both compatible with their debt management strategy and suited to their debt servicing capability. 
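IBRD's pricing approach described above (a reference rate plus a spread) translates into a straightforward interest computation. The figures below are invented for illustration; actual IBRD reference rates and spreads vary by loan product and borrower:

```python
def annual_interest(outstanding_balance, base_rate_pct, spread_pct):
    """Interest due for one year on the outstanding balance when a loan
    is priced at a reference rate (e.g., the lender's average cost of
    borrowing, or LIBOR) plus a spread, both expressed in percent."""
    return outstanding_balance * (base_rate_pct + spread_pct) / 100.0

# Hypothetical: $500 million outstanding, 5.25 percent reference rate,
# 0.75 percent spread, for an all-in rate of 6.0 percent.
interest = annual_interest(500_000_000, 5.25, 0.75)
```

Because the reference rate is reset periodically (the report notes MDBs often adjust loan rates semiannually based on their cost of borrowing), the base rate argument would change at each reset while the spread typically stays fixed.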
As of June 30, 2000, IBRD’s loans outstanding totaled approximately $120 billion, including $2.0 billion in nonaccrual status. As shown in table 9, nearly 42 percent of this outstanding balance is from IBRD’s five largest borrowing countries. In fiscal year 2000, IBRD approved loans of about $10.9 billion to developing countries for projects and programs in various sectors. During fiscal year 2000, human development projects and programs were among IBRD’s lending priorities. These efforts are aimed at education, health, nutrition, and social protection. Strengthening the financial sector and improving public sector management and infrastructure needs, including transportation, telecommunications, and water supplies, were also a focus of fiscal year 2000 approvals. Development efforts in these areas are aimed at attracting private sector investment and poverty reduction in developing countries. In fiscal year 2000, Turkey received the greatest amount of new IBRD commitments, with approximately $1.8 billion in support of structural and social reforms, including over $750 million in response to a severe earthquake in the region. IBRD’s operations are financed through retained earnings, paid-in capital, and borrowings obtained from world capital markets using callable capital as backing. As of June 30, 2000, AAA-rated countries accounted for 44 percent of IBRD’s total callable capital. Based on its strong membership support and preferred creditor status, among other factors, IBRD received a AAA credit rating with a stable outlook for the future from Standard & Poor’s during September 2000. The quality of IBRD’s callable capital and its AAA credit rating allow the bank to borrow funds from world capital markets at favorable interest rates for long loan terms. This results in IBRD’s ability to pass on more favorable lending terms to borrowers than would normally be available to them based on their own credit standing. 
IBRD’s financial statements are prepared in accordance with U.S. GAAP and International Accounting Standards. IBRD received an unqualified audit opinion on its financial statements from Deloitte Touche Tohmatsu for fiscal years 1998 through 2000. Table 10 summarizes key financial data related to IBRD’s results of operations over the past 3 fiscal years. The International Development Association (IDA) was established in 1960 and is the concessional lending arm of the World Bank Group. IDA primarily supports poverty reduction by providing interest-free loans, called “credits,” to the poorest developing countries throughout the world. IDA also provides loan guarantees and technical assistance. In order to qualify for IDA lending, a country’s per capita income in 1999 had to be equivalent to less than U.S. $885, and the country had to have only limited or no creditworthiness for IBRD lending. IDA’s concessional lending is targeted to building the human capital, policies, institutions, and physical infrastructure needed to bring about equitable and sustainable growth. Specifically, many development projects address basic human needs, such as primary education, health services, clean water, and sanitation. IDA also funds projects that protect the environment, improve conditions for private business, build infrastructure, and support reforms that are aimed at liberalizing countries’ economies. IDA credits generally have maturities of 35 or 40 years and offer 10-year grace periods on repayment of principal. There is no interest charge, but credits do carry a small service charge, currently 75 basis points of disbursed balances. As of June 30, 2000, IDA’s outstanding credits totaled approximately $86 billion, including $4.2 billion of credits in nonaccrual status. As shown in table 11, nearly 48 percent of IDA’s total credits outstanding were due from IDA’s five largest borrowing countries. 
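Since IDA credits carry no interest, the annual carrying cost for a borrower is just the service charge applied to disbursed balances. A small sketch (the disbursed balance below is invented; the 75 basis points is the charge quoted above):

```python
def annual_service_charge(disbursed_balance, basis_points=75):
    """Annual service charge on an interest-free credit: basis points
    applied to the disbursed balance (1 basis point = 0.01 percent)."""
    return disbursed_balance * basis_points / 10_000

# Hypothetical $200 million disbursed balance -> $1.5 million per year.
charge = annual_service_charge(200_000_000)
```

The charge applies only to disbursed balances, so a credit that is committed but not yet drawn down generates no service charge under this sketch.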
During fiscal year 2000, IDA approved approximately $4.4 billion in new credits to 52 countries. Nearly half of this new lending went to countries in Africa. The human development sector—which includes education, health and nutrition, and social protection—received approximately 38 percent of IDA’s lending during this period. India was the largest recipient of IDA credit approvals during fiscal year 2000, with IDA providing nearly $867 million in support of various projects. The next largest borrowers, Tanzania and Vietnam, received approximately $330 million and $286 million, respectively, in support of structural and social reform. IDA’s concessional lending is financed primarily through developed member country contributions, repayments of outstanding credits by borrowers, service charges, and investment income. Since IDA does not charge interest to its borrowers, periodic contributions called replenishments are needed for IDA to continue its lending operations. These replenishments generally cover a 3-year period. In 1998, member countries agreed to the twelfth replenishment, which would allow lending of approximately $20.5 billion for fiscal years 2000 through 2002. This replenishment included $11.6 billion of member country contributions, a $0.9 billion contribution from IBRD’s net income, and $8.0 billion from IDA’s own funds, consisting of borrower repayments and investment income. Due to the special nature and organization of IDA, financial statements are prepared for the specific purpose of reflecting the sources and uses of member contributions and other development resources. IDA’s special-purpose financial statements are prepared to comply with procedures set forth in its Articles of Agreement and agreed upon by members. Under IDA’s special accounting procedures, management has elected to present loans at their full face value and does not estimate a loan loss allowance related to its loan portfolio. 
The financial statements are not meant to comply with U.S. GAAP or International Accounting Standards. IDA received an unqualified audit opinion on its financial statements from Deloitte Touche Tohmatsu for fiscal years 1998 through 2000. Table 12 summarizes some of the key financial data concerning IDA’s financial position over the past 3 fiscal years. The International Finance Corporation (IFC) is a member of the World Bank Group and was established in 1956 to further economic growth in its developing member countries by promoting private sector development. IFC’s primary objective is to provide loans and equity investments to private sector enterprises where sufficient private capital is not otherwise available on reasonable terms. Unlike most MDBs, IFC loans are not guaranteed by benefiting countries. In addition, IFC provides technical assistance and financial advice to businesses and governments. Membership in IFC is open only to those countries that are members of the World Bank. IFC had 174 members at the end of fiscal year 2000. According to IFC’s Articles of Agreement, investments are to be made in productive private enterprises. To be eligible for IFC financing, projects must meet profitability and project viability criteria, benefit the economy of the host country, and comply with stringent developmental impact requirements. IFC’s main investment activity is making loans for private entrepreneurial projects, which may involve expansions and modernization efforts. Its lending activities include cofinancing, loan syndication, underwriting, and guarantees, which act as a catalyst for additional project funding from other lenders and investors. IFC also makes equity investments, typically through the purchase of common or preferred stock. As of June 30, 2000, IFC’s outstanding loan and equity investment portfolio was $10.9 billion, an increase of 9 percent over the previous year’s portfolio of $10 billion. 
Loans account for the major part of the financing provided by IFC, representing $8.3 billion, or about 76 percent, of the outstanding portfolio, while equity investments of $2.6 billion, or about 24 percent of the outstanding portfolio, were held by IFC as of June 30, 2000. IFC provides loans and equity investments for a wide range of sectors to promote development in its member countries located throughout the world. The two largest sectors are financial services and infrastructure, which accounted for $4.7 billion, or 43 percent, of its $10.9 billion loan and equity investment portfolio. Other sectors include mining, agribusiness, manufacturing, chemicals, timber, textiles, and tourism. As of June 30, 2000, approximately $4.3 billion, or 39 percent, of IFC’s outstanding loan and equity investment portfolio related to development in the Latin America and Caribbean region. As shown in table 13, 40 percent of IFC’s cumulative financing has been provided to five countries. Since IFC loans are made to the private sector and are not guaranteed by a member, the risk related to individual projects is more meaningful than the credit rating of the member country in which the project is located. IFC raises most of the funds for its lending and equity investment activities by issuing notes, bonds, and debt securities in the international capital markets. IFC may borrow in the public markets of a member country only with approvals from that member. In fiscal year 2000, IFC borrowed $4.4 billion. As of June 30, 2000, IFC’s total outstanding debt was $14.9 billion. IFC also finances its operations through borrowing from the World Bank’s IBRD, paid-in capital, and retained earnings. As of fiscal year 2000, IFC had $2.4 billion of subscribed paid-in capital from its member countries. IFC’s financial statements are prepared in accordance with U.S. 
GAAP and with International Accounting Standards. IFC received an unqualified audit opinion on its financial statements from Deloitte Touche Tohmatsu for fiscal years 1998, 1999, and 2000. Table 14 summarizes key financial data related to IFC’s results of operations over the past 3 fiscal years. The Multilateral Investment Guaranty Agency (MIGA) is a member of the World Bank Group and was established in 1988. Unlike the other entities of the World Bank, MIGA does not provide loans to member governments or private enterprises. Instead, MIGA provides investment guarantees and insurance to foreign investors. The guarantees and insurance are intended to stimulate foreign investment in developing countries by private investors and commercially operated public sector companies. MIGA also provides technical assistance to host governments, advising on ways to enhance their ability to attract foreign direct investment. MIGA’s operations are structured to supplement the activities of the other institutions of the World Bank Group. MIGA had 152 member countries as of June 30, 2000. To meet its objectives of promoting economic growth and development, MIGA provides investment guarantees for up to 20 years against the political risks of (1) transfer restrictions, (2) expropriation, (3) breach of contract, and (4) war and civil disturbances in the host country. Investments eligible for MIGA guarantees include equity, loans, and loan guarantees, provided that contractual commitments have terms of at least 3 years. Generally, guarantees can be made for up to 90 percent of the investment contribution, plus additional amounts to cover earnings or interest. MIGA seeks to guarantee those investment projects that contribute to the host country’s needs and are also financially, economically, and environmentally sound. MIGA obtains reinsurance to augment its underwriting capacity and to protect portions of its insurance portfolio. 
The difference between MIGA’s total guarantees outstanding and the portion of its portfolio covered by reinsurance is MIGA’s net exposure. Although MIGA remains liable for the entire guaranteed amount, the reinsurance provides MIGA with a source of recovery in the event that it must cover an insured incident. MIGA’s outstanding investment guarantees at the end of fiscal year 2000 were $4.4 billion. During fiscal year 2000, outstanding guarantees increased by 19 percent from $3.7 billion to $4.4 billion. MIGA’s guarantee investment portfolio is diversified across several sectors. The financial sector accounts for $1.5 billion, or 34 percent, followed by the infrastructure sector, which accounts for $1.3 billion, or 29 percent of the portfolio. Investment guarantees are made throughout the developing regions in the world. The Latin America and Caribbean region accounts for a significant portion of the outstanding portfolio with $2.2 billion of investment guarantees, or 51 percent of the portfolio. The majority of the guarantees were granted to investors from the Netherlands, the United States, and the United Kingdom, which account for 20.5 percent, 19.7 percent, and 15.6 percent, respectively. MIGA finances its operations through member country subscriptions, which initially were 20 percent to paid-in capital and the remaining 80 percent to callable capital. In March 1999, MIGA’s board of governors approved an increase in capital resources to $2 billion, with the subscription period ending March 28, 2002. As of June 30, 2000, member countries had provided subscriptions totaling $1.27 billion, including $1 billion in callable capital. In addition to member subscriptions, MIGA earns income from premiums and fees for its guarantees and from its investments. In fiscal year 2000, MIGA had income from its guarantees of $29.5 million and investment income of $23.5 million. MIGA’s financial statements are prepared in accordance with U.S.
GAAP and International Accounting Standards. MIGA received an unqualified audit opinion on its financial statements from Deloitte Touche Tohmatsu for fiscal years 1998, 1999, and 2000. Table 16 summarizes key financial data related to MIGA’s results of operations over the past 3 fiscal years. The African Development Bank Group includes the African Development Bank (AfDB) and the African Development Fund (AfDF), which serve the development needs of Africa. AfDB provides market-based loans and other development assistance to the public and private sectors. As of December 31, 1999, AfDB had outstanding loans of $9.3 billion. AfDF provides concessional loans, which provide key support to the bank’s poorest members. As of December 31, 1999, AfDF had outstanding loans of $7.7 billion. AfDB is a regional MDB established in 1964. The principal stated purpose of AfDB is to promote sustainable economic growth and reduce poverty in Africa. In this effort, AfDB targets agriculture, rural development, human resources development, and private sector development. AfDB has 77 member countries, including 53 regional members, which account for approximately 63 percent of the bank’s total subscribed capital. In its 1999 annual report, AfDB reported that only 13 of its borrowing member countries were eligible for AfDB resources, while 39 borrowing member countries were eligible solely for concessional assistance through AfDF. AfDB provides development assistance through market-based loans, loan guarantees, equity investments, cofinancing, and technical assistance. Except for private sector development loans, all of the bank’s loans are made to or guaranteed by member countries. AfDB currently offers several types of loan products available in a variety of currencies to meet the needs of its borrowers. Interest rates are primarily based on the bank’s cost of borrowing plus a spread. Loan repayment periods and grace periods vary by borrower.
As of December 31, 1999, AfDB had approximately $9.3 billion of outstanding loans, including $1.3 billion that were in nonaccrual status. As shown in table 17, approximately 60 percent of the bank’s total loans outstanding were to its five largest borrowers. As AfDB reported in its annual report, during 1999, it approved about $1.1 billion of new lending and equity investments, of which approximately $920 million, or 86 percent, were loans to or guaranteed by member countries. The remaining approvals were for private sector loans, HIPC debt relief, equity investments, and emergency operations. The majority of the new loans to or guaranteed by member countries related to the agriculture sector and to policy and structural reforms to encourage an environment conducive to sustainable growth. For example, about $141 million was approved for Tunisia to improve efficiency in resource allocation for a competitive economy, and Zimbabwe received about $130 million to support government reform efforts to reallocate public expenditures to education, health, and poverty reduction activities. Also during 1999, AfDB contributed about $376 million for cofinancing operations with official agencies and private financial institutions to promote the flow of resources to its borrowing member countries. AfDF provided an additional $120 million toward these cofinancing operations. External sources provided an additional $1.8 billion to member country borrowers as a result of the bank group’s cofinancing activities during 1999. These ventures related primarily to policy-based programs, debt relief, poverty reduction, and the transportation sector. The World Bank was the largest cofinancing partner during 1999. AfDB’s operations are financed through retained earnings, paid-in capital from members, and funds borrowed from world capital markets using callable capital as backing.
As of December 31, 1999, AAA rated countries accounted for approximately 25.5 percent of AfDB’s total callable capital. The major goal of AfDB’s borrowing strategy is to minimize the cost of its funding, which is passed on to its borrowers, and to maximize the development impact of its operations. In 1999, the fifth capital increase, intended to provide 35 percent more capital to AfDB, became effective. As of December 31, 1999, 26 of AfDB’s 77 member countries had deposited their subscriptions. In September 2000, AfDB received a AA+ credit rating from Standard & Poor’s, reflecting its conservative borrowing policies and increased capital base; the rating carried a negative outlook for the future because of continuing deterioration in the asset quality of the bank’s loan portfolio. AfDB’s financial statements are prepared in accordance with International Accounting Standards. AfDB received unqualified audit opinions on its financial statements from Deloitte & Touche LLP for 1997, 1998, and 1999. Table 18 summarizes key financial data related to AfDB’s results of operations over the past 3 years. AfDF is the concessional lending arm of the AfDB Group and was established in 1973. AfDF provides loans on concessional terms for projects and programs, as well as technical assistance, to the bank group’s low-income regional borrowing member countries that do not qualify for lending from AfDB. As of December 1999, 75 percent of the bank group’s regional member countries were eligible solely for the Fund’s concessional lending because of their credit standing, except for limited AfDB lending available for projects with the private sector and distinct territories within these member countries. AfDF offers loans with very favorable terms, including no interest charges and extended repayment and grace periods. The Fund does charge minimal service charges on outstanding loan balances and undisbursed commitments.
As of December 31, 1999, AfDF’s outstanding loans were approximately $7.7 billion, including $892 million in nonaccrual status. As shown in table 19, nearly 28 percent of the Fund’s total loans outstanding were to AfDF’s five largest borrowers. During 1999, AfDF approved about $630 million of new financing to the bank group’s borrowing member countries. Approximately $518 million or 82 percent related to loans or lines of credit. The remainder related to technical assistance grants, HIPC debt relief, and other debt alleviation. The majority of the new loan approvals related to the agriculture sector, social sector, and multisector. Agriculture-related lending focused on improving output and food security, enhancing rural incomes, and reducing poverty. Social sector lending focused on improving access and quality of education and health services. Multisector lending included improving access to financial services, especially for women, and building the institutional and income-generating capabilities of target populations. To continue lending operations, AfDF relies on net income, borrower repayments, and contributions from 27 members, including AfDB, which has contributed nearly 12 percent of AfDF’s total contributions and maintains 50 percent of the voting shares. The eighth replenishment of the Fund was approved by the Board of Governors and became effective in December 1999. This replenishment authorized subscriptions for contributions of approximately $3.0 billion, which would enable AfDF to continue concessional lending from 1999 through 2001. As of December 31, 1999, $1.6 billion of the total contribution subscriptions authorized in the eighth replenishment were contributed. Because of the special nature and organization of AfDF, financial statements are currently prepared for the specific purpose of reflecting the net development resources of the Fund. 
Under AfDF’s special accounting basis, outstanding loans are not included in development resources available, and, accordingly, no allowance for possible loan losses is recorded. AfDF’s special-purpose financial statements are prepared to comply with procedures set forth in its Articles of Agreement establishing the Fund and agreed upon by members. The financial statements are not meant to comply with International Accounting Standards. AfDF received an unqualified audit opinion on its financial statements from Deloitte & Touche LLP for 1997 through 1999. Table 20 summarizes some of the key financial data of AfDF’s financial position over the past 3 years. The Asian Development Bank (AsDB) group includes Ordinary Capital Resources (OCR) and the Asian Development Fund (AsDF), which serve the development needs of the Asian and Pacific regions. OCR provides market-based loans, equity investment, guarantees, and indirectly provides technical assistance to middle-income countries and creditworthy poorer countries. As of December 31, 1999, OCR had outstanding loans of $28.3 billion. AsDF provides concessional loans and is key to supporting the bank’s poverty reduction mission. As of December 31, 1999, AsDF had outstanding loans of $16.0 billion. AsDB is a regional bank established in 1966. As reported in its 1999 annual report, AsDB had 58 member countries, including 42 regional members, which had contributed approximately 64 percent of OCR’s total subscribed capital. OCR’s principal stated purpose is to reduce poverty in its Asian and Pacific region borrowing member countries through (1) economic growth projects and programs to facilitate employment and income generation for the poor, (2) social development programs to improve the standard of living for the poor, and (3) good governance to ensure that the poor have access to basic services. 
OCR also pursues activities to foster economic growth, support human development, improve the status of women, protect the environment, and encourage private sector development activities, which also serve the overall goal of reducing poverty. OCR provides development assistance through market-based loans, loan guarantees, and cofinancing with the public and private sectors. OCR also finances equity investments in private enterprises and indirectly provides technical assistance, which is provided primarily through special funds operated by AsDB. Except for private sector loans, all of the bank’s loans are made to or guaranteed by borrowing member countries. AsDB offers several types of loan products available in a variety of currencies to meet the needs of its borrowers through its OCR. Interest rates are primarily based on the bank’s cost of borrowing plus a spread, and repayment terms range from 4 to 30 years. As of December 31, 1999, OCR had $28.3 billion in outstanding loans, of which nearly 99 percent was loaned to or guaranteed by borrowing members and less than $73 million was in a nonaccrual status. As shown in table 21, approximately 79 percent of the total outstanding loan balance was due from the bank’s five largest borrowers. During 1999, OCR approved about $3.9 billion in loans to private and public borrowers, including $2.5 billion related to cofinancing activities. The bank’s concessional lending arm, the Asian Development Fund, provided an additional $699 million in cofinancing activities. Combined, these cofinancing activities attracted an additional $3.0 billion from external sources to the bank’s borrowing member countries. A significant portion of the bank’s loans served the social infrastructure sector with projects relating to water supply and sanitation, education, health, housing, and other urban infrastructure facilities. The other sectors that received significant assistance related to energy, transportation, and communication.
During 1999, the People’s Republic of China was the largest borrower with about $1.2 billion in approved loans in the energy, social infrastructure, and transportation and communication sectors. Indonesia was the second largest borrower with loan approvals of about $1.0 billion related primarily to energy and health. Combined, these two countries accounted for approximately 58 percent of OCR’s loan approvals during 1999. OCR’s operations are financed through retained earnings, paid-in capital, and funds borrowed from world capital markets using callable capital as backing. As of December 31, 1999, AAA rated countries accounted for about 43 percent of OCR’s total callable capital. AsDB’s last general capital increase became effective in 1994. Based on AsDB’s strong financial profile, conservative financial policies, and strong member support, AsDB received a AAA credit rating with a stable outlook for the future from Standard & Poor’s in September 2000. The quality of OCR’s callable capital and its AAA credit rating allow the bank to borrow funds from world capital markets at favorable interest rates and loan terms. This results in the bank’s ability to pass on to borrowers more favorable lending terms than would normally be available based on their own credit standing. AsDB’s OCR financial statements are prepared in accordance with U.S. GAAP. OCR received unqualified audit opinions on its financial statements from PricewaterhouseCoopers for 1997 through 1999. Table 22 summarizes key financial data related to the results of operations for AsDB’s OCR over the past 3 years. AsDF was established in 1974 and is the concessional lending arm of AsDB that provides loans to the bank’s least developed borrowing member countries in the Asian and Pacific region. These are members that have low per capita gross national product and limited debt repayment capacity. AsDF supports activities that promote poverty reduction and improve the quality of life for the poor.
AsDF provides concessional loans to the bank’s members at favorable interest rates and loan terms. Interest rates are 1.5 percent and repayment periods range from 24 to 32 years, including an 8-year grace period. The Fund requires borrowers to absorb exchange risks attributable to fluctuations in the value of the currencies disbursed. As of December 31, 1999, AsDF had approximately $16.0 billion of outstanding loans, including $536 million in nonaccrual status. Approximately 76 percent of the Fund’s balance of outstanding loans was due from its five largest borrowers. During 1999, AsDF approved $1.1 billion in new loans, including $699 million of cofinancing lending, which primarily related to development in the transportation, communication, and social infrastructure sectors. During this period, Bangladesh was the largest borrower with about $250 million in loans approved in the energy, social infrastructure, and transportation and communication sectors. Vietnam was the second largest borrower with loan approvals for about $155 million primarily related to social infrastructure and industry sectors. Combined, these two countries accounted for approximately 38 percent of AsDF’s loan approvals during 1999. AsDF finances its operations with retained earnings, investment income, and contributions from 26 member countries, both regional and nonregional. AsDF’s resources have been augmented by seven replenishments. The Board of Governors authorized the seventh replenishment in 1997. Under the replenishment, contributions were scheduled to become available to AsDF in four equal installments during 1997 through 2000. In 2000, the Board of Governors approved the eighth replenishment of AsDF resources. AsDF’s financial statements are prepared in accordance with U.S. GAAP. AsDF received an unqualified audit opinion on its financial statements from PricewaterhouseCoopers for 1997 through 1999. 
Table 24 summarizes key financial data related to AsDF’s financial position over the past 3 years. The Inter-American Development Bank (IDB) is the oldest and, based on total assets, largest regional MDB. It was established in 1959 to help accelerate economic and social development in Latin America and the Caribbean. Current lending priorities include poverty reduction; social equity; improving the efficiency, transparency, and accountability of the public sector; economic integration, including trade agreements; and the environment. IDB consists of the following entities as well as several funds in administration, which provide financing and technical assistance, and the Intermediate Financing Facility Account: The Inter-American Development Bank’s Ordinary Capital (OC) provides market-based loans, guarantees, and technical assistance to the public and private sector. The Inter-American Development Bank’s Fund for Special Operations is the concessional lending arm that provides technical assistance and key support for the bank’s poorer members. The Inter-American Investment Corporation is the Inter-American Development Group’s entity that provides loans and equity, and technical assistance to small and midsize private enterprises. During 1999, IDB approved $9.5 billion in new loans and loan guarantees for the benefit of borrowing regional member countries primarily through the Ordinary Capital and the Fund for Special Operations. Approximately $4.3 billion of these new lending approvals were for the social sector and related to improving economic opportunities for the poor, strengthening the social infrastructure, such as education and health, and promoting equitable access to social services. Most of the operations of IDB are conducted through OC. The operations and resources of OC are maintained separately from those of IDB’s other entities and various funds. The members of the bank include 28 regional and 18 nonregional countries. 
Developing regional members had subscribed approximately 50 percent of OC’s total subscribed capital at December 31, 1999. OC provides development assistance in the form of loans made to or guaranteed by members as well as loans and loan guarantees to private sector enterprises located within its regional borrowing member countries. Loans to the private sector without a member’s guarantee are limited to 5 percent of the bank’s outstanding loans and guarantees. The primary vehicle of OC’s development assistance is direct lending. OC offers several different loan products that are disbursed in a variety of currencies. In general, interest rates charged on these loans are based on the bank’s cost of borrowing or on LIBOR, plus an interest rate spread, which is currently 50 basis points. Repayment periods range from 15 to 30 years. As of December 31, 1999, OC’s loans outstanding totaled approximately $38.6 billion and were all fully performing. IDB has never had a write-off of any of its OC loans. During 1999, OC approved $9.1 billion in loans and loan guarantees for its borrowing regional member countries, of which approximately 97 percent were loans. Brazil received about $4.8 billion, or 53 percent, of OC’s approvals during 1999. Lending to Brazil included a $2.2 billion loan for social sector reform and a social protection program and a $1.2 billion loan to further develop small and medium-size productive sectors by making more market-rate financing available. Colombia and Mexico were the next largest borrowers in 1999 with about $1.0 billion and $919 million, respectively. As shown in table 25, nearly 69 percent of OC’s total loans outstanding were due from its five largest borrowing countries. OC’s operations are financed through retained earnings, paid-in capital, and borrowings obtained from world capital markets using callable capital as backing. As of December 31, 1999, AAA rated countries accounted for about 41 percent of OC’s total callable capital.
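The OC pricing rule described above (the bank's cost of borrowing or LIBOR, plus a spread that is currently 50 basis points) can be sketched as follows; the function name and the sample base rate are illustrative assumptions, not figures from this report.

```python
# Sketch of OC loan pricing: lending rate = base rate plus a spread,
# currently 50 basis points (100 basis points = 1 percentage point).
SPREAD_BASIS_POINTS = 50

def oc_lending_rate(base_rate_pct, spread_bp=SPREAD_BASIS_POINTS):
    """Return the lending rate in percent, given a base rate in percent
    (cost of borrowing or LIBOR) and a spread in basis points."""
    return base_rate_pct + spread_bp / 100

# Hypothetical example: a 6.00 percent base rate yields a 6.50 percent loan rate.
print(oc_lending_rate(6.00))  # -> 6.5
```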
Based on membership support, preferred creditor status, and strong franchise value as a result of its lending expertise, OC received a AAA credit rating with a stable outlook for the future from Standard & Poor’s during September 2000. The quality of OC’s callable capital and its strong credit rating allow the bank to borrow funds from world capital markets at favorable loan terms. This allows the bank to pass on to borrowers more favorable lending terms than would normally be available based on their own credit standing. OC’s most recent and eighth capital increase, approved by IDB’s Board of Governors in 1995, increased the bank’s resources by $40 billion, bringing total capital to about $101 billion. OC financial statements are prepared in accordance with U.S. GAAP. OC received an unqualified audit opinion on its financial statements from Arthur Andersen, LLP for 1997 through 1999. Table 26 summarizes key financial data related to OC’s results of operations over the past 3 years. The Fund for Special Operations (FSO) of IDB provides loans on concessional terms to the bank’s borrowing regional member countries that are classified as economically less developed. FSO also provides technical assistance to borrowing countries. FSO makes low-interest and longer-maturity loans for countries in the region that require such financing. Concessional loans of FSO are made to or guaranteed by borrowing regional member countries. The rate of interest and other loan terms of FSO loans depend on the type of currency that is lent to the borrower, the stage of the country’s development, and the nature of the project. Generally, these loans charge interest rates from 1 to 4 percent, offer grace periods of 5 to 10 years, and have maturities of 25 to 40 years. As of December 31, 1999, FSO’s loans outstanding totaled approximately $7.0 billion and were all fully performing. FSO has never had a write-off except for debt relief resulting from the implementation of the HIPC Initiative. 
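To make FSO's concessional terms concrete, the sketch below builds a repayment schedule under a simplifying assumption: interest-only payments during the grace period, followed by equal annual principal repayments through maturity. This is an illustration of how grace periods and long maturities interact, not FSO's actual amortization method, and the example amounts are hypothetical.

```python
# Illustrative sketch of a concessional loan schedule: interest-only
# during the grace period, then level annual principal repayments.
def annual_payments(principal, rate_pct, grace_years, maturity_years):
    """Return a list of annual payments (interest plus any principal due)
    over the life of the loan. Amounts in $ millions."""
    rate = rate_pct / 100
    repay_years = maturity_years - grace_years
    payments = []
    balance = principal
    for year in range(1, maturity_years + 1):
        interest = balance * rate
        principal_due = 0.0 if year <= grace_years else principal / repay_years
        payments.append(round(interest + principal_due, 2))
        balance -= principal_due
    return payments

# Hypothetical example within FSO's stated ranges: $100 million at
# 2 percent, a 5-year grace period, and a 25-year maturity.
schedule = annual_payments(100.0, 2.0, 5, 25)
print(len(schedule), schedule[0], schedule[-1])
```

During the grace period each payment is interest only ($2.0 million in the example); once principal repayment begins, payments decline as interest accrues on a shrinking balance.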
As shown in table 27, nearly 51 percent of FSO’s total loans outstanding was due from its five largest borrowing countries. During 1999, FSO approved about $417 million in loans to Bolivia, Guyana, Honduras, and Nicaragua. FSO operations are financed primarily through borrower repayments, investment income, and contributions from 46 member countries. FSO does not borrow from the world market to obtain additional funding for its operations. The most recent and eighth increase in contributions was effective in 1995. This increase raised the authorized resources of FSO by approximately $1.0 billion, bringing total contributions to $9.6 billion. Because of the special nature and organization of FSO, its financial statements are prepared on a special accounting basis that complies with procedures set forth in IDB’s Articles of Agreement and agreed upon by members. This special accounting basis is not meant to be consistent with U.S. GAAP. Under FSO’s special accounting basis, management has elected to present loans at their full face value and does not estimate a loan loss allowance related to its loan portfolio. FSO received unqualified audit opinions on its financial statements from Arthur Andersen, LLP, for 1997 through 1999. Table 28 summarizes key financial data related to FSO’s financial position over the past 3 years. The Inter-American Investment Corporation (IIC), a multilateral organization, was established in 1986. Although a member of the IDB Group, IIC is autonomous, and its resources and management are separate from those of IDB. IIC’s mission is to promote the economic development of its Latin American and Caribbean member countries by financing small and medium-size enterprises. IIC has 37 member countries that include 26 Latin American and Caribbean countries, 8 European countries, and Israel, Japan, and the United States. Its development investment activities are limited to its 26 regional developing member countries.
IIC provides project financing in the form of loans and equity investments, lines of credit to local intermediaries, and investments in local and regional investment funds. IIC seeks to make loan and equity investments where sufficient private capital is difficult to obtain or otherwise not available on reasonable terms. Project funding is supplemented by other investors and lenders through cofinancing or loan syndication. IIC provides loan amounts up to 33 percent of the cost of a new project and up to 50 percent of the cost of an expansion project. IIC makes equity investments of up to 33 percent of a company’s capital. In addition, the IIC provides financial and technical advisory services as part of its evaluation of the project’s soundness and probability of success. During 1999, IIC approved $192 million in loans and equity investments. Loans accounted for 89 percent of IIC approvals during the year. Projects supporting the financial services sector received approximately 51 percent of these new approvals. Regional projects and projects in Bolivia received the largest portions of IIC approvals with approximately 22 percent and 10 percent, respectively. As of December 31, 1999, IIC had approximately $353 million in loans and equity investments outstanding. As shown in table 29, nearly 61 percent of IIC’s total outstanding loans and equity investments related to four of its largest countries of operations, as well as projects covering the entire region. IIC receives its financing primarily from capital subscriptions from its member countries. IIC does not have callable capital to finance its development activities. On December 14, 1999, the Board of Governors approved a resolution increasing the authorized capital of IIC from $203.7 million to $703.7 million. The resolution called for $500 million for subscriptions by member countries as of February 28, 2000. 
Member country subscriptions are based on their voting shares and are paid in several installments, the last being payable on or before October 31, 2007. IIC’s financial statements are prepared in accordance with U.S. GAAP. IIC has received unqualified audit opinions on its financial statements from PricewaterhouseCoopers LLP for 1997 through 1999. Table 30 summarizes key financial data related to IIC’s results of operations over the past 3 years. The European Bank for Reconstruction and Development (EBRD) is a regional bank established in 1990. As of December 31, 1999, EBRD’s 60 members consisted of 58 sovereign countries, including 26 countries of operation that have contributed about 12 percent of EBRD’s total subscribed capital, the European Community, and the European Investment Bank. EBRD’s principal stated purpose is to foster the transition toward open market-oriented economies and to promote private and entrepreneurial initiatives in the countries of central and eastern Europe and the Commonwealth of Independent States that are committed to applying principles of multiparty democracy, pluralism, and market economies. EBRD provides development assistance through public and private loans, cofinancing, loan guarantees, share investments, and technical assistance. Through its operations, it promotes private sector activity, the strengthening of financial institutions and legal systems, and the development of infrastructure needed to support the private sector. The majority of the bank’s private sector lending and investments do not include a sovereign guarantee by a member. As of December 31, 1999, EBRD had $7.0 billion in loans outstanding, including $455 million in a nonaccrual status. As shown in table 31, approximately 57 percent of total loans outstanding related to EBRD’s five largest countries of operations. During 1999, EBRD approved about $2.2 billion for its financing operations. 
Approximately 71 percent of these approvals related to private sector loans or equity investments. The majority of these approvals related to the financial institution, industry and commerce, and infrastructure sectors. EBRD has always placed emphasis on the financial institution sector, recognizing that a well-functioning market economy requires a sound and effective financial sector capable of commanding the confidence of a country’s population. The industry and commerce sector includes projects related to agriculture, natural resources, tourism, and telecommunications. The infrastructure sector relates to power and energy utilities, transportation, and municipal and environmental infrastructure. During 1999, regional development projects and projects within Ukraine and the Czech Republic received the largest share of the bank’s new approvals. EBRD’s operations are financed through retained earnings, paid-in capital from members, and funds borrowed from world capital markets using callable capital as backing. As of December 31, 1999, members with a AAA credit rating accounted for about 60 percent of EBRD’s total callable capital. EBRD’s most recent capital increase was approved in 1996. As of December 31, 1999, 56 of 60 members had participated in the increase, bringing the total amount subscribed to about 97 percent of the approved capital increase. Based on EBRD’s strong membership support; prudent policy limits on the bank’s operations, gearing, and liquidity; and strengthening of the organization, EBRD received a AAA credit rating with a stable outlook for the future from Standard & Poor’s in September 2000. EBRD’s financial statements are prepared to comply with International Accounting Standards, and EBRD received an unqualified audit opinion on its financial statements from Arthur Andersen for 1997 through 1999. Table 32 summarizes key financial data related to EBRD’s results of operations over the past 3 years.
On May 11, 2001, we received comments of a technical nature from cognizant Treasury officials. We have incorporated these technical comments as appropriate. We are sending copies of this report to the Honorable Paul H. O’Neill, Secretary of the Treasury, and other interested parties. Copies will be made available to others upon request. Please contact Jeanette Franzel, Acting Director, at (202) 512-9471 or by email at franzelj@gao.gov if you or your staffs have any questions concerning this report. Key contributors to this report were Darryl Chang, Marcia Carlsen, Julia Ziegler, and Meg Mills.
This report discusses Multilateral Development Banks (MDB), which provide financial support for projects and programs that promote social and economic progress in developing countries. GAO provides (1) summaries of each bank's mission, function, and operations; (2) key bank financial data covering the last three fiscal years; and (3) information on the U.S. investment in capital and voting percentages in each MDB. GAO found that MDBs are autonomous international financial entities that finance economic and social development projects and programs in developing countries. The MDBs primarily fund these projects and programs using money borrowed from world capital markets or money provided by the governments of member countries. MDBs enable developing countries to access foreign currency resources on more advantageous terms than would be available to them on the basis of their own international credit standing. The MDBs provide assistance in the form of loans, equity investments, loan and equity guarantees, and technical assistance. Direct lending is the primary vehicle of development assistance. The United States is the largest member in most of the MDBs discussed in this report, contributing significant amounts to support the missions of the MDBs and subscribing a significant amount to MDBs' callable capital.
In March 1997, a White House memorandum implemented adjudicative guidelines, temporary eligibility standards, and investigative standards governmentwide. The National Security Council is responsible for overseeing these guidelines and standards. Within DOD, the Office of the Under Secretary of Defense for Intelligence (OUSD (I)) is responsible for coordinating and implementing DOD-wide policies related to access to classified information. Within OUSD (I), the Defense Security Service (DSS) is responsible for conducting background investigations and administering the personnel security investigations program for DOD and 24 other federal agencies that allow industry personnel access to classified information. DSS’s Defense Industrial Security Clearance Office (DISCO) adjudicates cases that contain only favorable information or minor security issues. The Defense Office of Hearings and Appeals (DOHA) within DOD’s Office of General Counsel adjudicates cases that contain more serious security issues. As with military members and federal workers, industry personnel must obtain a security clearance to gain access to classified information, which is categorized into three levels: top secret, secret, and confidential. Individuals who need access to classified information over a long period are required to periodically renew their clearance (a reinvestigation). The time frames for reinvestigations are 5 years for top secret clearances, 10 years for secret clearances, and 15 years for confidential clearances. 
To ensure the trustworthiness, judgment, and reliability of contractor personnel in positions requiring access to classified information, DOD relies on a three-stage personnel security clearance process that includes (1) determining that the position requires a clearance and, if so, submitting a request for a clearance to DSS, (2) conducting an initial investigation or reinvestigation, and (3) using the investigative report to determine eligibility for access to classified information—a procedure known as “adjudication.” Figure 1 depicts this three-stage process and the federal government offices that have the lead responsibility for each stage. In the preinvestigation stage, the industrial contractor must determine that a position requires the employee to have access to classified information. If a clearance is needed, the industry employee completes a personnel security questionnaire, and the industrial contractor submits it to DSS. All industry requests for a DOD-issued clearance are submitted to DSS while requests for military members and federal employees are submitted to either DSS or the Office of Personnel Management (OPM). In the investigation stage, DSS, OPM, or one of their contractors conducts the actual investigation of the industry employee by using standards established governmentwide in 1997 and implemented by DOD in 1998. As table 1 shows, the type of information gathered in an investigation depends on the level of clearance needed and whether an initial investigation or a reinvestigation is required. DSS forwards the completed investigative report to DISCO. In the adjudicative stage, DISCO uses the information from the investigative report to determine whether an individual is eligible for a security clearance. If the report is determined to be a “clean” case—a case that contains no potential security issue or minor issues—then DISCO adjudicators determine eligibility for a clearance. 
However, if the case is an “issue” case—a case containing issues that might disqualify an individual for a clearance (e.g., foreign connections or drug- or alcohol-related problems)—then the case is forwarded to DOHA adjudicators for the clearance-eligibility decision. Regardless of which office determines eligibility, DISCO issues the clearance-eligibility decision and forwards this determination to the industrial contractor. All adjudications are based on 13 federal adjudicative guidelines established governmentwide in 1997 and implemented by DOD in 1998. Recent legislation could affect DOD’s security clearance process. The National Defense Authorization Act for Fiscal Year 2004 authorized the transfer of DOD’s personnel security investigative functions and more than 1,800 investigative employees to OPM. However, as of March 31, 2004, this transfer had not taken place. The transfer can occur only after the Secretary of Defense certifies to Congress that certain conditions can be met and the Director of OPM concurs with the transfer. DOD’s security clearance backlog for industry personnel is sizeable, and the average time needed to determine eligibility for a clearance increased during the last 3 fiscal years to over 1 year. DSS has established case-completion time frames for both its investigations and adjudications. For investigations, the time frames range from 75 to 180 days, depending on the investigative requirements. For DISCO adjudications, the time frames are 3 days for initial clearances and 30 days for periodic reinvestigations. DOHA’s time frame is to maintain a steady workload of adjudicating 2,150 cases per month within 30 days of receipt. Cases exceeding these time frames are considered backlogged. Sizeable backlog continues to exist—As of March 31, 2004, the security clearance backlog for industry personnel was roughly 188,000 cases. 
This estimate is the sum of four separate DSS-supplied estimates: over 61,000 reinvestigations that were overdue but had not been submitted, over 101,000 ongoing DSS investigations, over 19,000 cases awaiting adjudication at DISCO, and more than 6,300 cases awaiting adjudication at DOHA that had exceeded the case-completion time frames established for conducting them. However, as of March 31, 2004, DOHA independently reported that it had eliminated its adjudicative backlog. Moreover, the size of the total DSS-estimated backlog for industry personnel doubled during the 6-month period ending on March 31, 2004, as the comparison in table 2 shows. This comparison does not include the backlog of overdue reinvestigations that have not been submitted because DSS was not able to estimate that backlog as of September 30, 2003. The industry backlogs for investigations and adjudications represent about one-fifth of the DOD-wide backlog for investigations and adjudications as of September 30, 2003 (the date of the most recent DOD-wide data). On that date, the estimated size of the investigative backlog for industry personnel amounted to roughly 44,600 cases, or 17 percent of the larger DOD-wide backlog of approximately 270,000 cases, which included military members, federal employees, and industry personnel. Similarly, the estimated size of the adjudicative backlog for industry personnel totaled roughly 17,300 cases, or 19 percent of the approximately 93,000 cases in the DOD-wide adjudicative backlog on that date. Furthermore, the size of the industrial personnel backlog may be underestimated. In anticipation of the authorized transfer of the investigative function from DSS to OPM, DSS had opened relatively few cases between October 1, 2003, and March 31, 2004. More specifically, DSS had not opened almost 69,200 new industry personnel requests received in the first half of fiscal year 2004. 
Because these requests have not been opened and investigations begun, they are not part of the 188,000-case backlog identified above. An unknown number of these cases might have already exceeded the set time frames for completing the investigation. Average time to determine clearance eligibility has increased—In the 3-year period from fiscal year 2001 through fiscal year 2003, the average time that DOD took to determine clearance eligibility for industry personnel increased from 319 days to 375 days, an increase of 18 percent. (See table 3.) During fiscal year 2003, DOD took an average of more than 1 year from the time DSS received a personnel security questionnaire to the time it issued an eligibility determination. From fiscal year 2001 through fiscal year 2003, the number of days to determine clearance eligibility for clean cases increased from 301 days to 332 days, whereas the time for issue cases increased from 516 days to 615 days. Backlogs and delays can have adverse effects—Delays in renewing security clearances for industry personnel and others doing classified work can lead to a heightened risk of national security breaches. In a 1999 report, the Joint Security Commission II pointed out that delays in initiating reinvestigations create risks to national security because the longer the individuals hold clearances, the more likely they are to be working with critical information and systems. In addition, delays in determining security clearance eligibility for industry personnel can affect the timeliness, quality, and cost of contractor performance on defense contracts. According to a 2003 Information Security Oversight Office report, industrial contractor officials who were interviewed said that delays in obtaining clearances cost industry millions of dollars per year and affect personnel resources. 
The report also stated that delays in the clearance process hampered industrial contractors’ ability to perform duties required by their contracts and increased the amount of time needed to complete national-security-related contracts. Industrial contractors told us about cases in which their company hired competent applicants who already had the necessary security clearances, rather than individuals who were more experienced or qualified but did not have a clearance. Industry association representatives told us that defense contractors might offer monetary incentives to attract new employees with clearances—for example, a $15,000 to $20,000 signing bonus for individuals with a valid security clearance, and a $10,000 bonus to current employees who recruit a new employee with a clearance. In addition, defense contractors may hire new employees and begin paying them, but not be able to assign any work to them—sometimes for a year or more—until they obtain a clearance. Contractors may also incur lost-opportunity costs if prospective employees decide to work elsewhere rather than wait to get a clearance. A number of impediments hinder DOD’s efforts to eliminate the clearance backlog for industry personnel and reduce the time needed to determine eligibility for a clearance. These impediments include large investigative and adjudicative workloads, resulting from a large number of clearance requests in recent years and an increase in the proportion of requests requiring top secret clearances; inaccurate workload projections; and an imbalance between workforces and workloads. Industrial contractors also cited the underutilization of reciprocity as an obstacle to timely eligibility determinations. Furthermore, DOD does not have a management plan that could help it address many of these impediments in a comprehensive and integrative manner. 
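The backlog and timeliness figures cited above are internally consistent, as a quick arithmetic sketch shows (the component figures are rounded DSS estimates stated as "over" or "more than," so the computed total is approximate):

```python
# Sanity check of the backlog and timeliness figures reported above.
# All component figures are rounded DSS estimates, so totals are approximate.

# Industry-personnel backlog components as of March 31, 2004:
backlog = {
    "overdue reinvestigations not yet submitted": 61_000,
    "ongoing investigations past time frames": 101_000,
    "cases awaiting adjudication at DISCO": 19_000,
    "cases awaiting adjudication at DOHA": 6_300,
}
total = sum(backlog.values())
print(total)  # 187300 -- consistent with the "roughly 188,000" total

# Average days to determine clearance eligibility, FY 2001 vs. FY 2003:
fy2001, fy2003 = 319, 375
print(fy2003 - fy2001)                          # 56 additional days
print(round((fy2003 - fy2001) / fy2001 * 100))  # 18 percent increase
```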
Large number of clearance requests—The large number of clearance requests that DOD receives annually for industry personnel, military members, and federal employees taxes a process that already is experiencing backlogs and delays. In fiscal year 2003, DOD submitted over 775,000 requests for investigations to DSS and OPM, about one-fifth of which (almost 143,000 requests) were for industry personnel. Table 4 shows an increase in the number of DOD eligibility determinations for industry personnel made during each of the last 3 years. DOD issued about 63,000 more eligibility determinations for industry personnel in fiscal year 2003 than it did 2 years earlier, an increase of 174 percent. During the same period, the average number of days required to issue an eligibility determination for industry personnel grew by 56 days, or about 18 percent. In other words, the increase in the average wait time was small compared to the increase in the number of cases. Increase in the proportion of requests for top secret clearances— From fiscal year 1995 through fiscal year 2003, the proportion of all requests requiring top secret clearances for industry personnel grew from 17 to 27 percent. According to OUSD (I), top secret clearances take eight times more investigative effort to complete and three times more adjudicative effort to review than do secret clearances. The increased demand for top secret clearances also has budget implications for DOD. In fiscal year 2003, security investigations obtained through DSS cost $2,640 for an initial investigation for a top secret clearance, $1,591 for a reinvestigation of a top secret clearance, and $328 for an initial investigation for a secret clearance. Thus, over a 10-year period, DOD would spend $4,231 (in current-year dollars) to investigate and reinvestigate an industry employee for a top secret clearance, a cost 13 times higher than the $328 it would require to investigate an individual for a secret clearance. 
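The 10-year cost comparison above follows from the fiscal year 2003 prices: because top secret clearances must be renewed every 5 years, a 10-year window requires the initial investigation plus one reinvestigation, while a secret clearance's reinvestigation falls due only at the 10-year mark:

```python
# Fiscal year 2003 DSS investigation prices (current-year dollars),
# as cited in the text.
initial_top_secret = 2_640
reinvestigation_top_secret = 1_591
initial_secret = 328

# Top secret: reinvestigation every 5 years, so a 10-year window
# covers the initial investigation plus one reinvestigation.
ten_year_top_secret = initial_top_secret + reinvestigation_top_secret
print(ten_year_top_secret)  # 4231

# Secret: reinvestigation every 10 years, so only the initial
# investigation falls within the window.
print(round(ten_year_top_secret / initial_secret))  # 13 times the cost
```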
Inaccurate workload projections—Although DSS has made efforts to improve its projections of industry personnel security clearance requirements, problems remain. For example, inaccurate forecasts for both the number and type of security clearances needed for industry personnel make it difficult for DOD to plan ahead to size its investigative and adjudicative workforce to handle the workload and fund its security clearance program. For fiscal year 2003, DSS reported that the actual cost of industry personnel investigations was almost 25 percent higher than had been projected. DOD officials believed that these projections were inaccurate primarily because DSS received a larger proportion of requests for initial top secret investigations and reinvestigations. Further inaccuracies in projections may result when DOD fully implements a new automated adjudication tracking system, which will identify overdue reinvestigations that have not been submitted DOD-wide. Imbalance between workforces and workloads—Insufficient investigative and adjudicative workforces, given the current and projected workloads, are additional barriers to eliminating the backlog and reducing security clearance processing times for industry personnel. DOD partially concurred with our February 2004 recommendation to identify and implement steps to match the sizes of the investigative and adjudicative workforces to the clearance request workload. According to an OPM official, DOD and OPM together need roughly 8,000 full-time-equivalent investigative staff to eliminate the security clearance backlogs and deliver timely investigations to their customers. In our February 2004 report, we estimated that DOD and OPM have around 4,200 full-time-equivalent investigative staff who are either federal employees or contract investigators. 
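The staffing gap implied by these two estimates is substantial; as a rough illustration (both figures are the approximate estimates quoted above, not precise counts):

```python
# Approximate full-time-equivalent (FTE) investigative staffing estimates.
fte_needed = 8_000     # OPM official's estimate for DOD and OPM combined
fte_available = 4_200  # GAO's February 2004 estimate (federal + contract)

# The available workforce covers only a little over half the estimated
# need, leaving a shortfall of several thousand investigators.
shortfall = fte_needed - fte_available
print(shortfall)  # 3800
```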
In December 2003, advisors to the OPM Director expressed concerns about financial risks associated with the transfer of DSS’s investigative functions and 1,855 investigative staff authorized in the National Defense Authorization Act for Fiscal Year 2004. The advisors therefore recommended that the transfer not occur, at least during fiscal year 2004. On February 6, 2004, DSS and OPM signed an interagency agreement that leaves the investigative functions and DSS personnel in DOD and provides DSS personnel with training on OPM’s case management system and investigative procedures as well as access to that system. According to our calculations, if all 1,855 DSS investigative employees complete the 1-week training program as planned, the loss in productivity will be equivalent to 35 person-years of investigator time. Also, other short-term decreases in productivity will result while DSS’s investigative employees become accustomed to using OPM’s system and procedures. Likewise, an adjudicative backlog of industry personnel cases developed because DISCO and DOHA did not have an adequate number of adjudicative personnel on hand. DISCO and DOHA have, however, taken steps to augment their adjudicative staff. DISCO was recently given the authority to hire 30 adjudicators to supplement its staff of 62 nonsupervisory adjudicators. Similarly, DOHA has supplemented its 23 permanent adjudicators with 46 temporary adjudicators and, more recently, has requested that it be able to hire an appropriate number of additional permanent adjudicators. Reciprocity of access underutilized—While the reciprocity of security clearances within DOD has not been a problem for industry personnel, reciprocity of access to certain types of information and programs within the federal government has not been fully utilized, thereby preventing some industry personnel from working and increasing the workload on already overburdened investigative and adjudicative staff. 
According to DOD and industry officials, a 2003 Information Security Oversight Office report on the National Industrial Security Program, and our analysis, reciprocity of clearances appears to be working throughout most of DOD. However, the same cannot be said for access to sensitive compartmented information and special access programs within DOD or transferring clearances and access from DOD to some other agencies. Similarly, a recent report by the Defense Personnel Security Research Center concluded that aspects of reciprocity for industrial contractors appear not to work well and that the lack of reciprocity between special access programs was a particular problem for industry personnel, who often work on many of these programs simultaneously. Industry association officials told us that reciprocity of access to certain types of information and programs, especially the lack of full reciprocity in the intelligence community, is becoming more common and one of the top concerns of their members. One association provided us with several examples of access problems that industry personnel with DOD-issued security clearances face when working with intelligence agencies. For example, the association cited different processes and standards used by intelligence agencies, such as guidelines for (1) the type of investigations and required time frames, (2) the type of polygraph tests, and (3) not accepting adjudication decisions made by other agencies. In addition to the reciprocity concerns relating to access to sensitive compartmented information and special access programs, industry officials identified additional reciprocity concerns. 
First, DSS and contractor association officials told us that some personnel with an interim clearance could not start work because an interim clearance does not provide access to specific types of national security information, such as sensitive compartmented information, special access programs, North Atlantic Treaty Organization data, and restricted data. Second, intelligence agencies do not always accept clearance reinstatements and conversions (e.g., a security clearance may be reactivated depending on the recency of the investigation and the length of time since the clearance was terminated). Third, the Smith Amendment—with exceptions—prohibits an individual with a clearance from being eligible for a subsequent DOD clearance if certain prohibitions (e.g., unlawful user of a controlled substance) are applicable. Lack of overall management plan—Finally, DOD has numerous plans to address pieces of the backlog problem but does not have an overall management plan to permanently eliminate the current investigative and adjudicative backlogs, reduce the delays in determining clearance eligibility for industry personnel, and overcome the impediments that could allow such problems to recur. These plans do not address process-wide objectives and outcome-related goals with performance measures, milestones, priorities, budgets, personnel resources, costs, and potential obstacles and options for overcoming the obstacles. DOD and industry association officials have suggested several initiatives to reduce the backlog and delays in issuing eligibility for a security clearance. They indicated that these steps could supplement actions that DOD has implemented in recent years or has agreed to implement as a result of our recommendations or those of others. 
Even if positive effects would result from these initiatives, other obstacles, such as the need to change investigative standards, coordinate these policy changes with other agencies, and ensure reciprocity, could prevent their implementation or limit their use. Today, I will discuss three of the suggested initiatives. Our final report to you will provide a more complete evaluation of these and other initiatives. Conducting a phased periodic reinvestigation—A phased approach to periodic reinvestigations for top secret clearances involves conducting a reinvestigation in two phases; the second phase would be conducted only if potential security issues were identified in the initial phase. Phase 1 information is obtained through a review of the personnel security questionnaire, subject and former spouse interviews, credit checks, a national agency check on the subject and former spouse or current cohabitant, local agency checks, records checks, and interviews with workplace personnel. If one or more issues are found in phase 1, then phase 2 would include all of the other types of information gathered in the current periodic reinvestigation for a top secret investigation. Recent research has shown that periodic reinvestigations for top secret clearances conducted in two phases can save at least 20 percent of the normal effort with almost no loss in identifying critical issues for adjudication. According to DSS, this initiative is designed to use the limited investigative resources in the most productive manner and reduce clearance-processing time by eliminating the routine use of low-yield information sources on many investigations and concentrating information-gathering efforts on high-yield sources. 
While analyses have not been conducted to evaluate how the implementation of phasing would affect the investigative backlog, the implementation of phasing could be a factor in reducing the backlog by decreasing the hours of fieldwork required in some reinvestigations. Even if additional testing confirms promising earlier findings that the procedure very rarely fails to identify critical issues, several obstacles, such as noncompliance with existing governmentwide investigative standards and reciprocity problems, could prevent the implementation or limit the use of this initiative. Establishing a single adjudicative facility for industry—Under this initiative, DOD would consolidate DOHA’s adjudicative function with DISCO’s to create a single adjudicative facility for all industry personnel cases. At the same time, DOHA would retain its hearings and appeals function. According to OUSD (I) officials, this consolidation would streamline the adjudicative process for industry personnel and make it more coherent and uniform. A single adjudicative facility would serve as the clearinghouse for all industrial contractor-related issues. As part of a larger review of DOD’s security clearance processes, DOD’s Senior Executive Council is considering this consolidation. An OUSD (I) official told us that the consolidation would provide greater flexibility in using adjudicators to meet changes in the workload and could eliminate some of the time required to transfer cases from DISCO to DOHA. If the consolidation occurred, DISCO officials said that their operations would not change much, except for adding adjudicators. On the other hand, DOHA officials said that the current division between DISCO and DOHA of adjudicating clean versus issue cases works very well and that combining the adjudicative function for industry into one facility could negatively affect DOHA’s ability to prepare denials and revocations of industry personnel clearances during appeals. 
They told us that the consolidation would have very little impact on the timeliness and quality of adjudications. Evaluation of the investigative standards and adjudicative guidelines—This initiative would involve an evaluation of the investigative standards used by personnel security clearance investigators to help identify requirements that do not provide significant information relevant for adjudicative decisions. By eliminating the need to perform certain tasks associated with these requirements, investigative resources could be used more efficiently. For example, DSS officials told us that less than one-half of one percent of the potential security issues identified during an investigation are derived from neighborhood checks; however, this information source accounts for about 14 percent of the investigative time. The modification of existing investigative standards would involve using risk management principles based on a thorough evaluation of the potential loss of information. Like a phased periodic reinvestigation, this initiative would require changes in the governmentwide investigative standards. In addition, the evaluation and any suggested changes would need to be coordinated within DOD, intelligence agencies, and others. Mr. Chairman, this concludes my prepared statement. I will be happy to respond to any questions you or other Members of the committee may have at this time. Individuals making key contributions to this statement include Mark A. Pross, James F. Reid, William J. Rigazio, and Nancy L. Benco. Industrial Security: DOD Cannot Provide Adequate Assurances That Its Oversight Ensures the Protection of Classified Information. GAO-04-332. Washington, D.C.: March 3, 2004. DOD Personnel Clearances: DOD Needs to Overcome Impediments to Eliminating Backlog and Determining Its Size. GAO-04-344. Washington, D.C.: February 9, 2004. DOD Personnel: More Consistency Needed in Determining Eligibility for Top Secret Security Clearances. GAO-01-465. 
Washington, D.C.: April 18, 2001. DOD Personnel: More Accurate Estimate of Overdue Security Clearance Reinvestigation Is Needed. GAO/T-NSIAD-00-246. Washington, D.C.: September 20, 2000. DOD Personnel: More Actions Needed to Address Backlog of Security Clearance Reinvestigations. GAO/NSIAD-00-215. Washington, D.C.: August 24, 2000. DOD Personnel: Weaknesses in Security Investigation Program Are Being Addressed. GAO/T-NSIAD-00-148. Washington, D.C.: April 6, 2000. DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks. GAO/T-NSIAD-00-65. Washington, D.C.: February 16, 2000. DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks. GAO/NSIAD-00-12. Washington, D.C.: October 27, 1999. Background Investigations: Program Deficiencies May Lead DEA to Relinquish Its Authority to OPM. GAO/GGD-99-173. Washington, D.C.: September 7, 1999. Military Recruiting: New Initiatives Could Improve Criminal History Screening. GAO/NSIAD-99-53. Washington, D.C.: February 23, 1999. Executive Office of the President: Procedures for Acquiring Access to and Safeguarding Intelligence Information. GAO/NSIAD-98-245. Washington, D.C.: September 30, 1998. Privatization of OPM’s Investigations Service. GAO/GGD-96-97R. Washington, D.C.: August 22, 1996. Cost Analysis: Privatizing OPM Investigations. GAO/GGD-96-121R. Washington, D.C.: July 5, 1996. Personnel Security: Pass and Security Clearance Data for the Executive Office of the President. GAO/NSIAD-96-20. Washington, D.C.: October 19, 1995. Privatizing OPM Investigations: Perspectives on OPM’s Role in Background Investigations. GAO/T-GGD-95-185. Washington, D.C.: June 14, 1995. Background Investigations: Impediments to Consolidating Investigations and Adjudicative Functions. GAO/NSIAD-95-101. Washington, D.C.: March 24, 1995. Security Clearances: Consideration of Sexual Orientation in the Clearance Process. GAO/NSIAD-95-21. Washington, D.C.: March 24, 1995. 
Personnel Security Investigations. GAO/NSIAD-94-135R. Washington, D.C.: March 4, 1994. Nuclear Security: DOE’s Progress on Reducing Its Security Clearance Work Load. GAO/RCED-93-183. Washington, D.C.: August 12, 1993. Personnel Security: Efforts by DOD and DOE to Eliminate Duplicative Background Investigations. GAO/RCED-93-23. Washington, D.C.: May 10, 1993. DOD Special Access Programs: Administrative Due Process Not Provided When Access Is Denied or Revoked. GAO/NSIAD-93-162. Washington, D.C.: May 5, 1993. Administrative Due Process: Denials and Revocations of Security Clearances and Access to Special Programs. GAO/T-NSIAD-93-14. Washington, D.C.: May 5, 1993. Security Clearances: Due Process for Denials and Revocations by Defense, Energy, and State. GAO/NSIAD-92-99. Washington, D.C.: May 6, 1992. Due Process: Procedures for Unfavorable Suitability and Security Clearance Actions. GAO/NSIAD-90-97FS. Washington, D.C.: April 23, 1990. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Because of increased awareness of threats to national security and efforts to privatize federal jobs, the demand for security clearances for government and industry personnel has increased. Industry personnel are taking on a greater role in national security work for the Department of Defense (DOD) and other federal agencies. Because many of these jobs require access to classified information, industry personnel need security clearances. As of September 30, 2003, industry workers held about one-third of the approximately 2 million DOD-issued security clearances. Terrorist attacks have heightened national security concerns and underscored the need for a timely, high-quality personnel security clearance process. However, GAO's past work found that DOD had a clearance backlog and other problems with its process. GAO was asked to review the clearance eligibility determination process and backlog for industry personnel. This testimony presents our preliminary observations on the security clearance process for industry personnel and describes (1) the size of the backlog and changes in the time needed to issue eligibility determinations, (2) the impediments to reducing the backlog and delays, and (3) some of the initiatives that DOD is considering to eliminate the backlog and decrease the delays. Later this month, we plan to issue our final report. On the basis of our preliminary observations, long-standing backlogs and delays in determining security clearance eligibility for industry personnel continue to exist and can have adverse effects. DOD's security clearance backlog for industry personnel was roughly 188,000 cases as of March 31, 2004. 
The backlog included estimates by the Defense Security Service (DSS)--the agency responsible for administering DOD's personnel security investigations program--that consisted of more than 61,000 reinvestigations (required for renewing clearances) that were overdue but had not been submitted to DSS; over 101,000 new DSS investigations or reinvestigations that had not been completed within DOD's established time frames; and over 25,000 cases awaiting adjudication (a determination of clearance eligibility) that had not been completed within DOD's established time frames. From fiscal year 2001 through fiscal year 2003, the average time that it took DOD to determine clearance eligibility for industry personnel increased by 56 days to over 1 year. Delays in completing reinvestigations of industry personnel and others doing classified work can increase national security risks. In addition, delays in determining clearance eligibility can affect the timeliness, quality, and cost of contractor performance on defense contracts. Several impediments hinder DOD's ability to eliminate the backlog and decrease the amount of time needed to determine clearance eligibility for industry personnel. Impediments include a large number of new clearance requests; an increase in the proportion of requests for top secret clearances, which require more time to process; inaccurate workload projections for both the number and type of clearances needed for industry personnel; and the imbalance between workforces and workloads. Industrial contractors cited the lack of full reciprocity (the acceptance of a clearance and access granted by another department, agency, or military service) as an obstacle that can cause industry delays in filling positions and starting work on government contracts. Furthermore, DOD does not have an integrated, comprehensive management plan for addressing the backlog and delays. 
DOD is considering a number of initiatives to supplement actions that it has implemented in recent years to reduce the backlogs and the time needed to determine eligibility for a security clearance. Additional initiatives include (1) conducting a phased, periodic reinvestigation; (2) establishing a single adjudicative facility for industry; and (3) reevaluating investigative standards and adjudicative guidelines. GAO's forthcoming report will provide a more complete discussion of these and other initiatives.
Historically, people with SMI were cared for primarily in hospitals. States developed a system of public mental hospitals, but by the 1960s they were viewed as ineffective and inadequate because of overcrowding, staff shortages, and poor facilities. Advocates and reformers contended that long-term institutional care in the hospitals had been characterized by patient neglect and ineffective treatment. Improved medications that reduced some of the symptoms of mental illness allowed more people to live in the community with support. Certain legislative and judicial actions contributed to a shift in focus from institutional care to community-based care. In 1963, the Community Mental Health Centers Act authorized the development of a nationwide network of community mental health centers (CMHC) to replace state institutions as the main source of treatment for people with SMI and to decrease the incidence of mental illness in the broader population. The act and amendments created federal grants for states to build the CMHCs and staff them for 8 years. Funds were intended to supplement existing state and local revenues to help communities develop the new services necessary for adequate community mental health care. States and communities were expected to develop alternative funding sources to eventually replace the federal funds. CMHCs were required to provide a number of services, including inpatient, outpatient, emergency, and day care services; follow-up care for people released from mental health facilities; and transitional living facilities. CMHCs were also required to coordinate service delivery with other mental health and social service providers in the community. The vision of a national network of community mental health centers was not fulfilled.
Many communities were unable to find the funds to match federal dollars to build the CMHCs or to provide all the required services; others were unable to find qualified professionals to staff the centers. As of 1980, only 768 of the projected 2,000 CMHCs had been funded. Moreover, implementation of the CMHC act did not adequately address the needs of people with SMI who were released from institutions. The CMHC program’s regulations emphasized the prevention and treatment of mental disorders in the broader population, and CMHCs did not provide the intensive, more comprehensive services people with SMI required, such as housing, support services, and vocational opportunities in addition to treatment. Medication was the only service provided to many patients. Further, the extent to which CMHCs coordinated with mental hospitals concerning the release of patients to their communities varied. Section 901 of the Omnibus Budget Reconciliation Act of 1981 ended federal funding to states specifically for community mental health centers and replaced it with block grants to the states to support services for people with SMI. A series of court decisions in the 1970s establishing that institutionalization is a deprivation of liberty also played a role in moving people with SMI away from institutions into the community. States had previously exercised broad latitude in allowing an individual with mental illness to be involuntarily confined, but court rulings recognizing individuals’ right to refuse treatment made it difficult to commit people to a psychiatric hospital without their consent. In 1975, the Supreme Court held that mentally ill individuals could not be committed involuntarily unless they were found to be dangerous to themselves or others. This led to a reform of state laws, which now generally allow involuntary inpatient commitment only if persons present a clear danger or threat of substantial harm to themselves or others.
Some state laws specify that inpatient commitment is appropriate only after full consideration of less restrictive alternatives, such as involuntary outpatient commitment. (See app. II for a discussion of involuntary outpatient commitment.) A recent Supreme Court opinion has brought additional pressure on states to offer community-based treatment to people with mental illness when such treatment is appropriate, the individuals do not oppose such treatment, and the placement can be reasonably accommodated, taking into account the state’s resources. The public mental hospital population declined. Many people with SMI returned to communities without adequate mental health services, and some of these people became homeless. Other major factors contributing to homelessness were unemployment, a decline in the supply of low-income housing, and alcohol and drug abuse. State mental health agencies (SMHA) have primary responsibility for administering the public mental health system, through their role as a purchaser, regulator, manager, and, at times, provider of mental health care. The public mental health system serves as a safety net for people who are poor or uninsured or whose private insurance benefits run out in the course of their serious mental illness. Many people with SMI are unemployed, and they are often poor and financially dependent on government support. SMHAs arrange for the delivery of services to more than 2 million people each year, most of whom suffer from a serious mental illness. Services are delivered by state-operated or county-operated facilities, nonprofit organizations, and other private providers. The sources and amounts of public funds SMHAs administer vary from state to state but usually include state general revenues and federal funds.
The federal funds that SMHAs administer generally include Medicaid and Medicare payments for services provided in state-owned or state-operated facilities and other Medicaid payments when the state Medicaid agency has authorized the SMHA to control all Medicaid expenditures for mental health services. HCFA’s Medicaid and Medicare programs pay for certain mental health services for eligible beneficiaries. States operate their own Medicaid programs within broad federal requirements. Medicaid pays for mandatory services, such as physician services, and optional benefits that states may choose to provide, such as rehabilitation and targeted case management. Since Medicaid is an entitlement program, states and the federal government are obligated to pay for all covered services that are provided to an eligible individual. Each state program’s federal and state funding share is determined through a statutory matching formula, with the federal share ranging from 50 to 80 percent. In the 1990s, state Medicaid programs increasingly turned to capitated managed care plans to provide medical and behavioral health services as a way to control costs and improve services. Twenty-two states have “carved out,” or separated, mental health services from physical health services in contracting with managed care plans, placing them under separate financing and administrative arrangements. Some states create separate capitated arrangements and others use fee-for-service arrangements. Medicare covers elderly persons and persons who receive Social Security Disability Insurance, and it pays for a range of inpatient and outpatient mental health services. The Medicare statute requires a 50-percent co-payment from beneficiaries for outpatient care of mental disorders, compared with 20 percent for other medical outpatient treatment. Furthermore, the Medicare statute limits treatment in a freestanding psychiatric hospital to a total of 190 days in a patient’s lifetime.
SMHAs also administer the funds they receive from SAMHSA’s Community Mental Health Services Block Grant program. Block grants are allocated to states according to a statutory formula that takes into account each state’s taxable resources, personal income, population, and service costs. The grants give states and territories a flexible funding source for providing a broad spectrum of community mental health services to adults with SMI and children with a serious emotional disturbance. Funding for the block grant program totaled $356 million in fiscal year 2000; SAMHSA used about $18 million for state systems development, including technical assistance, data collection, and evaluation. The remainder was awarded to the states and territories, with an average award of about $5.7 million. (See app. III for other SAMHSA programs that help implement community-based mental health services.) In 1997, the nation spent about $73 billion for the treatment of all mental illness, up from $37 billion in 1987. Mental health spending grew at about the same rate as overall health spending during this period. After adjusting for overall inflation, spending for all health care grew by 5 percent a year, on average, compared with 4 percent for spending on mental health services. In 1997, the public sector (that is, federal, state, and local governments) provided 55 percent of mental health spending, in contrast to providing less than half (about 46 percent) of overall health care spending. From 1987 to 1997, adjusted annual federal spending for mental health grew, on average, more than twice as fast as state and local mental health spending (6.3 percent versus 2.4 percent). This led to the federal government’s share of total mental health expenditures increasing from 22 to 28 percent during the period, while state and local governments’ share of spending declined from 31 to 27 percent. The proportion from private spending sources also declined slightly from 46 to 45 percent (see fig. 1).
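As a rough consistency check, the block grant figures above can be reproduced with simple arithmetic. This is a hypothetical sketch: the count of 59 grantees (50 states plus the District of Columbia and territories) is an assumption inferred from the reported average award, not a figure stated in the report.

```python
# Consistency check of the FY2000 Community Mental Health Services
# Block Grant figures cited above.
total_appropriation = 356_000_000  # FY2000 block grant funding
samhsa_set_aside = 18_000_000      # state systems development, TA, data, evaluation

# Remainder awarded to states and territories.
awarded_to_grantees = total_appropriation - samhsa_set_aside

# Assumption (not from the report): 59 grantees, inferred from the
# reported average award of about $5.7 million.
assumed_grantees = 59
average_award = awarded_to_grantees / assumed_grantees
print(f"${average_award:,.0f}")  # close to the ~$5.7 million average cited
```

Under that assumed grantee count, the remainder of $338 million divides to an average award consistent with the roughly $5.7 million the report cites.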
Medicaid and Medicare played increasingly important roles in funding mental health services between 1987 and 1997. Medicaid’s proportion of mental health spending (federal and state) rose from slightly more than 15 percent ($5.7 billion) to about 20 percent ($14.4 billion). Medicare’s share rose from 8 percent to slightly more than 12 percent, with expenditures increasing from about $3 billion to $9 billion. HCFA and SAMHSA officials have suggested several reasons for Medicaid’s increase. These include the trend toward Medicaid beneficiaries receiving their inpatient care in psychiatric units of general hospitals, where services are covered by Medicaid, rather than in psychiatric hospitals, where services are not covered; increased costs for psychiatric medications; and states’ increased use of Medicaid to pay for community-based mental health services. The increase in Medicare spending may be associated in part with a 1990 statutory change that expanded coverage to nonphysician professionals providing mental health services, such as psychologists, clinical social workers, and nurse practitioners. Over the past 20 years, states have largely shifted the care of people with SMI from institutions to the community. The continued development of psychotropic medications that both are more effective and produce fewer side effects has facilitated the ability to care for more people with SMI in the community. Furthermore, treatment approaches such as ACT, supported employment, and supportive housing can provide the multiple forms of ongoing assistance that adults with SMI often need to function. These approaches can also help homeless people with SMI, who have particularly complex treatment needs and who often have difficulty gaining access to the multiple services they need. Integration and coordination of services have been found to be effective in treating people with multiple needs. 
The focus of mental health services for people with SMI has continued to shift from providing care in psychiatric hospitals to providing community-based care. From 1980 to 1998, the number of patients institutionalized in state and county mental hospitals decreased by almost 60 percent; by the end of 1998, about 57,000 people were in state or county psychiatric hospitals. Although nationwide expenditure data are not available, data from 33 states show that state mental health agencies’ expenditures for psychiatric hospitals dropped from 52 percent to 35 percent of total expenditures between 1987 and 1997, while community-based spending rose from 45 percent to 63 percent. The continued development of new antidepressant and antipsychotic medications has helped make it possible to care for more people with SMI in the community. The newer medications further improve the ability of people with SMI to live in the community, receive care at a general hospital or in other clinical settings, and manage symptoms of their illness. The Surgeon General recently reported, for example, that the newer antipsychotic medications show promise for treating people with schizophrenia for whom older medications are ineffective, by reducing symptoms such as delusions, hallucinations, disorganized speech and thinking, and catatonic behaviors. Further, the Surgeon General reported that some of the newer drugs carry fewer and less severe side effects, generally resulting in better compliance with medication regimens, and that they may improve a person’s quality of life and responsiveness to other treatment interventions. Patients using certain medications, however, require careful monitoring to ensure that they are receiving the appropriate dose and to minimize side effects. For example, in about 1 percent of patients, clozapine causes agranulocytosis, a potentially fatal loss of white blood cells that fight infection.
Because this condition is reversible if detected early, weekly blood monitoring is critical. States have supported an array of community-based services that are designed to enable people with SMI to remain in their communities and live independently. States frequently provide services directly or contract with county or community mental health organizations to offer services. Although most care is provided on an outpatient basis, people with SMI sometimes experience periods when they are unable to care for themselves and need short-term hospitalization. Table 1 describes types of mental health services for adults with SMI provided in the community. Many people with SMI need a range of services to help them function in the community. Several approaches to providing ongoing assistance and coordinated services have been developed to meet the varying needs of this population, such as ACT, supported employment, and supportive housing. ACT is a model of providing intensive care to people with the most severe and persistent mental illness. It is generally targeted toward people who have recently left institutions, typically do not schedule or keep appointments, or do not do well without extensive support. Under the ACT model, multidisciplinary teams are to be available to provide services around the clock in community settings, such as at the person’s home. Services can include administering medications, interpersonal skills training, crisis intervention, and employment assistance, and they are intended to be available as long as the person needs them. Supported employment programs assist people who have SMI to work in competitive jobs. Some supported employment programs emphasize quick placement into regular jobs, rather than training people before job placement, and then help individuals with SMI perform acceptably in their jobs.
Supportive housing programs attempt to address the needs of people with SMI who have been homeless or who are at risk of becoming homeless by combining housing with other needed services, such as case management and substance abuse treatment. (For more detailed information on ACT, supported employment, and supportive housing, see app. IV.) Approximately 1 in 20 adults with SMI are homeless; they account for an estimated one-third of the approximately 600,000 homeless adults in the United States. At least half of homeless people with SMI also have substance abuse disorders. Mental illness in combination with substance abuse may predispose individuals to homelessness, as their conditions often lead to disruptive behavior, loss of social supports, financial problems, and an inability to maintain stable housing. Homelessness adds to the complexity of treatment needs for people with SMI; beyond mental health services, they need a range of physical health, housing, and social services. Compared with other homeless people, those with SMI are generally in poorer physical health, are homeless for longer periods of time, and often reside on the streets. Homeless people with SMI have difficulty gaining access to the full range of health care, housing, and support services they need. Typically, they lack the income verification documentation necessary to enroll in entitlement programs, such as Medicaid; they have problems maintaining schedules; and they lack transportation. The Department of Housing and Urban Development (HUD) funds programs, including rental assistance and housing development grants, that have been used to help homeless people with SMI obtain housing. (See app. V.) Researchers and experts widely agree that the demand for low-income housing and housing subsidies far exceeds the supply. 
According to the National Coalition for the Homeless, many traditional mental health providers are neither equipped to handle the complex social and health conditions of homeless people nor typically linked to the range of services needed for their recovery and residential stability. Traditionally, separate systems, such as the mental health, substance abuse, and public housing systems, have provided these services, each with its own eligibility and program requirements. It is particularly difficult for people with SMI to negotiate systems in which services are separate and uncoordinated. Research indicates that coordinated service delivery is important for meeting the numerous and complex needs of homeless people with SMI. One study found that homeless people with SMI who participated in programs using an integrated treatment approach—in which multiple services were provided through a single entity—spent more days in stable housing (such as an apartment or group home) and reduced their alcohol use more than those receiving services through multiple agencies. SAMHSA’s Access to Community Care and Effective Services and Supports program, an interdepartmental demonstration program integrating housing, mental health, substance abuse, employment, and social support services, found that service system integration was associated with improved access to housing services and better housing outcomes for homeless people with mental illness. Efforts are under way to coordinate services to reduce the number of homeless people with SMI who become incarcerated. SAMHSA is funding a study of programs for diverting adults with mental illness and substance abuse problems from the criminal justice system to community-based treatment. According to SAMHSA, diversion programs are often the most effective way to integrate an array of mental health, substance abuse, and other support services to help people break the cycle of repeated incarceration.
In some communities, mental health courts are designed to hear the cases of people with mental illness who are arrested for misdemeanors such as loitering or creating a public nuisance. In these programs, people with mental illness can have their case heard by the mental health court and can agree to follow a plan of mental health treatment and services instead of going to jail. HCFA has disseminated information to states about the more effective medications and treatments for adults with SMI and has supported states’ use of Medicaid managed mental health care to provide a wider array of services not covered by traditional fee-for-service Medicaid. HCFA is developing safeguards to help ensure that states that use managed care arrangements furnish appropriate services to people with special health care needs, including people with SMI. HCFA has taken steps to encourage states to use new modes of care for adults with SMI. In June 1999, HCFA issued a letter to state Medicaid directors noting that research had demonstrated that ACT is an effective strategy for treating persons with SMI. The letter stated that states should consider these positive findings in their plans for comprehensive approaches to community-based mental health services. HCFA has also encouraged the use of newer medications. In a letter to state Medicaid programs in 1998, it provided information on the effectiveness of new antipsychotic medications in treating schizophrenia. HCFA noted that some states and managed care organizations with formularies have already adjusted them to recognize these new medications. HCFA suggested that all states consider the medications’ advantages in reducing side effects, increasing patient compliance with treatment regimens, and possibly reducing psychiatric hospital readmissions. HCFA has used its waiver authorities to support some states’ initiatives to use Medicaid managed care carveout programs to enhance their provision of mental health services.
With a waiver, states may gain the opportunity to provide some community-based mental health services that are not usually covered by fee-for-service Medicaid, provided they do not increase overall spending. For example, while many ACT program services can be reimbursed under existing Medicaid policies, some services, such as family counseling and respite care, are typically not reimbursable through Medicaid’s traditional fee-for-service program. A survey of states with mental health carveout waivers found that some states did use the waiver to add coverage for services not previously included in their Medicaid plans, most frequently psychiatric rehabilitation and case management. As HCFA has noted in a draft report on strengthening Medicaid managed care, managed care organizations are often not accustomed to serving people with special health needs, such as adults with SMI, and may lack the expertise and provider networks required for treating them appropriately. Moreover, while managed care arrangements can provide greater flexibility in the design and development of individualized services, capitated payment arrangements create incentives to limit access and underserve enrollees. In a previous study of Medicaid managed mental health care, we found that HCFA had provided limited oversight of mental health managed care carveouts. Most monitoring occurred when the waiver application was made or renewed, and it varied in content and intensity across HCFA’s regional offices. This stemmed in large part from a lack of central office guidance on the type of program monitoring and oversight that HCFA staff should perform.
HCFA officials told us that the agency has recently revised the monitoring guide that regional offices use when conducting site visits of managed care programs, including those that provide services to people with SMI. In addition, SAMHSA now reviews all waiver applications to help HCFA ensure that waiver applications appropriately address issues such as the capacity of the proposed delivery system, the array of benefits covered, and quality of care. Recognizing the risks for vulnerable individuals with special health care needs, the Congress in the BBA required HCFA to determine what safeguards may be necessary to ensure that the needs of these individuals who are enrolled in Medicaid managed care organizations are adequately met. HCFA’s draft report in response to its BBA mandate contains a series of recommendations for HCFA, states, and managed care organizations regarding safeguards to help ensure that adults with SMI obtain needed services. HCFA recommends, for example, that states take steps to ensure that necessary services and supports are reasonably available to beneficiaries whose ability to function depends on receiving them. For example, HCFA suggests that states require in their contracts that managed care organizations’ medical necessity decisions not always require improvement or restoration of functioning but may also provide for services needed to maintain functioning or compensate for loss of functioning. The draft indicates that HCFA intends to develop plans to implement its recommended safeguards, such as through legislative or regulatory action or changes in Medicaid administrative policies. HCFA has taken comparable action to protect children with special needs, another vulnerable population, when they are enrolled in state Medicaid managed care programs. HCFA developed interim review criteria with mandatory safeguards, which the agency plans to use to review state waiver applications that include these children in managed care.
As people with SMI increasingly receive their care in the community, it is important that they have access to the variety of mental health and other services they need. Because of the nature of SMI, people with this condition are often poor and must rely on the public mental health system for their care. Recently, states have stepped up their efforts to provide community-based services that give ongoing support to adults with SMI. These services are especially critical for people making the transition from institutions to the community, to help prevent their becoming homeless or returning to institutions. Homeless people with SMI especially need to receive a range of mental health, substance abuse, social support, and housing services to function in the community, and it is important for providers to link these services effectively. The use of managed mental health care by some state Medicaid programs has resulted in the flexibility to provide a wider array of services. However, given the potential for managed care providers to reduce access to needed services, it is important for HCFA and state Medicaid programs to ensure that beneficiaries enrolled in managed care receive appropriate care. HCFA’s current effort to identify safeguards recognizes the importance of people with SMI receiving the necessary services and continuity of care that are fundamental to their well-being. The agency has indicated that it will devise a set of actions to implement these recommended safeguards. Identifying the appropriate actions and effectively implementing them will be essential if the safeguards are to provide meaningful protection to this vulnerable population. We provided a draft of this report to SAMHSA and HCFA for comment. SAMHSA generally agreed with the report’s information on community-based mental health services for people with SMI.
SAMHSA noted two developments that it considers important: an increase in the number of people with SMI who are treated in the criminal justice system because of inadequate resources for community mental health supports, and states’ support of consumer-run services and increasing solicitation of consumers’ views on the delivery of community-based services. We did not evaluate the link between the number of people with SMI treated in the criminal justice system and the adequacy of community mental health resources or assess the participation of people with SMI in the operation of community-based services. In its technical comments, SAMHSA highlighted several efforts on which SAMHSA and HCFA work collaboratively. For example, SAMHSA staff have accompanied HCFA staff on site visits to monitor various states’ waiver programs, and a joint workgroup is developing indicators that states can use to predict problems or ensure success in their managed care programs. In its comments on the draft report, HCFA summarized additional efforts by the Medicaid and Medicare programs to serve the needs of people with SMI. For example, HCFA has made grant money available for states to test demonstration projects that focus on removing barriers to employment for people with disabilities, including people with SMI. SAMHSA and HCFA provided technical comments, which we incorporated where appropriate. (SAMHSA’s and HCFA’s comments are in apps. VI and VII.) We are sending copies of this report to the Honorable Donna E. Shalala, Secretary of HHS; the Honorable Joseph Autry, Acting Administrator of SAMHSA; the Honorable Robert A. Berenson, Acting Administrator of HCFA; officials of the state mental health and Medicaid agencies we visited; appropriate congressional committees; and others who are interested. We will also make copies available to others on request. If you or your staffs have any questions, please contact me at (202) 512-7119.
An additional GAO contact and the names of other staff who made major contributions to this report are listed in appendix VIII. To do our work, we interviewed officials at the Health Care Financing Administration (HCFA), the Substance Abuse and Mental Health Services Administration (SAMHSA), the National Institute of Mental Health (NIMH), and the National Association of State Mental Health Program Directors (NASMHPD), and we reviewed documents such as SAMHSA’s National Expenditures for Mental Health and Substance Abuse Treatment, 1997; SAMHSA’s Center for Mental Health Services 1998 Survey of Mental Health Organizations and General Hospitals with Separate Psychiatric Services; and NASMHPD reports and data regarding the funding sources and expenditures of state mental health agencies. Although other federal agencies, such as the Department of Defense and the Department of Veterans Affairs, provide services to people with mental illness, we generally restricted our scope at the federal level to the Department of Health and Human Services (HHS) because HHS programs account for most federal mental health spending. We conducted site visits to Michigan and New Hampshire, where we interviewed state mental health and Medicaid officials and administrators of selected treatment programs. We selected these states for site visits because experts identified them as implementing exemplary programs. We also reviewed several states’ Center for Mental Health Services monitoring reports, annual implementation reports, and Community Mental Health Services Block Grant applications.
We also reviewed relevant literature and obtained information from individual experts as well as a number of organizations interested in mental health issues, such as the American Psychiatric Association (APA), the American Psychological Association, the Bazelon Center for Mental Health Law, the International Association of Psychosocial Rehabilitation Services, the National Alliance for the Mentally Ill, the National Mental Health Association, and the Treatment Advocacy Center. We conducted our work between May and November 2000 in accordance with generally accepted government auditing standards. Most states have laws authorizing involuntary outpatient commitment, also referred to as mandatory or assisted outpatient treatment. APA defines mandatory outpatient treatment as court-ordered outpatient treatment for patients who suffer from severe mental illness (SMI) and who are unlikely to comply with such treatment without a court order. APA considers this a preventive treatment for people who do not meet criteria for inpatient commitment and who need treatment in order to prevent relapse or deterioration that would predictably lead to their meeting inpatient commitment criteria in the foreseeable future. Some states have adopted standards for involuntary outpatient commitment that reflect this approach, but most have adopted the criterion of individuals presenting danger to themselves or others, the same standard they use for involuntary inpatient commitment. Mandatory outpatient treatment may also be used as part of a discharge plan for persons leaving inpatient facilities or as an alternative to hospitalization. Although 41 states and the District of Columbia have adopted involuntary outpatient commitment laws, they are rarely used in many of these states.
The approach of using involuntary outpatient commitment has generated some controversy. People who support it believe that it helps ensure treatment for people who need services but whose very illness prevents them from recognizing their need, thus enabling them to remain in the community instead of deteriorating in ways that could result in their being institutionalized. Those who oppose it are concerned that it threatens civil liberties, diverts scarce resources, and undermines the relationship between people with mental illness and service providers. Some states have preferred to take other approaches, such as the use of advance directives. These legal documents allow individuals to express their choices about mental health treatment or appoint someone to make mental health care decisions for them in case they become incapable of making their own decisions.

Awards community groups grants of less than $150,000 to sponsor a best practice targeted toward adults with SMI or adolescents and children with serious emotional disorders.

An eight-site demonstration program to learn about the most effective approaches for helping adults with SMI find and maintain competitive employment.

Knowledge Exchange Network: Uses various media to provide information about mental health to users of mental health services, their families, the general public, policymakers, providers, and researchers.

A partnership with the National Institute of Corrections, the Office of Justice Programs, and the Office of Juvenile Justice and Delinquency Prevention, this program collects information about effective mental health and substance abuse services for people with co-occurring disorders who come in contact with the justice system and disseminates it to states, localities, and criminal justice and provider organizations. Its goals include assessing which services work for which people, interpreting information, putting it into a useful form, and stimulating the use and application of information.
A nine-site program to examine the relative effectiveness of pre- and post-booking diversion to community-based services for people with mental illness and substance abuse disorders in the justice system.

A demonstration program that is testing the hypothesis that integrating fragmented service systems will substantially help end homelessness among people with SMI.

Annual formula grant that provides states and territories with a flexible funding source specifically to serve homeless individuals with SMI, including those with substance abuse problems. The program is designed to provide services that will enable homeless people with a mental disorder to find appropriate housing and mental health treatment.

Eight-site program to evaluate the extent to which services operated by people with SMI are effective in improving outcomes of adults with SMI when used as an adjunct to traditional mental health services.

The development of varied community-based treatment models has increased the ability to meet the complex needs of adults with SMI. Following are descriptions of several approaches and examples of how they are implemented in New Hampshire and Michigan. Assertive community treatment (ACT) is designed to provide comprehensive community-based services to people with SMI. ACT is intended for people with the most severe and persistent illnesses, including schizophrenia and bipolar disorders. It is also appropriate for persons who are significantly disabled by other disorders and have not been helped by traditional mental health services. Experts report that ACT is a good approach for people with SMI who have recently left institutions, typically do not schedule or keep appointments, or do not do well without a lot of support.
ACT programs use a variety of treatment and rehabilitation practices, including medications; behaviorally oriented skill teaching; crisis intervention; support, education, and skill teaching for family members; supportive therapy; cognitive-behavioral therapy; group treatment; and supported employment. Under the ACT model, services are delivered by a mobile, multidisciplinary treatment team. Unlike traditional case management, in which the case manager often brokers services that others provide, the ACT staff are to work as a team to provide services directly. These services are to be available 24 hours a day, 365 days a year. The majority of ACT services are to be provided in the community, including the person’s home, employment site, or places of recreation rather than in an office setting. The treatment team is to adapt and individually tailor interventions to meet the specific needs of the person with SMI rather than requiring the person to adapt to the team or the rules of a treatment program. Under the ACT model, services are to be designed to continue indefinitely, as needed. In order to provide the type and intensity of services required, ACT, as a program model, has a number of staffing requirements. First, the ACT team typically includes 10 to 12 mental health professionals, depending on the number needed to be able to provide services around the clock. All teams have a full-time leader or supervisor, a psychiatrist, a peer specialist, and a program assistant. ACT programs are designed to have a ratio of no more than 10 clients for each staff person, not counting the psychiatrist and program assistant. As a result, the typical maximum caseload is 120 for urban teams and 80 for rural teams. A provider we visited in New Hampshire operates three types of ACT teams. 
Two of these teams, one of which works exclusively with people who have both mental illness and a substance abuse disorder, are designed for people with SMI who generally reject treatment and need care available to them around the clock. These teams do not routinely operate in the evenings or on weekends, but staff are on call at all times. People are moved from these programs as their need for intensive services decreases, partly because the programs are very expensive to operate. The third team operates during normal business hours and is designed for individuals who have been institutionalized but accept treatment and do not require 24-hour care. Michigan offers ACT services statewide. Its program delivers a comprehensive set of treatment, rehabilitation, and support services to persons with SMI through a team-based outreach approach. A provider we visited in Michigan offers ACT services to persons who have been repeatedly hospitalized and who have failed to become stabilized on their medications. The provider generally does not offer ACT services until less intensive services have been tried and have failed. After 15 years of operation, about 65 to 70 percent of the original participants continue to receive ACT services. Studies have found that ACT may be associated with reduced hospital admissions, shorter hospital stays, better social functioning, greater housing stability, fewer days homeless, and fewer symptoms of thought disorder and unusual activity. Studies have also found that ACT services cost less than other services, especially inpatient and emergency room care. Supported employment is an approach to help people with SMI succeed in regular work settings by providing them ongoing training and support as needed.
In supported employment, participants generally earn money for their work (usually at the prevailing wage) and work as regular employees alongside nondisabled employees (not segregated with other employees with disabilities, either mental or physical). Individual Placement and Support (IPS), the most studied supported employment approach, focuses on finding adults paid work in regular work settings and providing them training and support as long as necessary after placement, in contrast to more traditional approaches that provide testing, counseling, training, and trial work experiences before they seek competitive employment. IPS focuses on integrating clinical and vocational services, performing minimal preliminary assessments, conducting rapid job searches, matching people with jobs of their choice, and providing ongoing supports, such as helping with transportation or finding a substitute for the position if the person is having trouble with illness symptoms. Studies have found that participants in IPS programs have had higher employment rates than people involved in traditional programs. For example, an early study of IPS found that 56 percent of IPS participants had competitive jobs during their first year in the program, compared with 9 percent of those who stayed in a day treatment program that emphasized skills training groups, socialization groups, and sheltered work within the mental health center. The provider we visited in New Hampshire began offering IPS in 1995 because staff found it was effective at getting persons with SMI back to work. Further, they had earlier found that participants were not able to apply the skills learned in the provider’s prior sheltered vocational training program to jobs outside that sheltered environment. The provider serves 225 people at a time in its IPS program and told us that about half of those have jobs at any given time. 
Supportive housing addresses the needs of people with SMI who are homeless or at risk of becoming homeless. This approach combines housing with access to services and supports, such as case management services, substance abuse treatment, employment assistance, and daily living supports. Supportive housing refers to a range of housing interventions that can be transitional or permanent. Transitional housing is typically group housing, where the person can live for a predetermined period of time, with services and supports provided on-site. Permanent supportive housing, which includes single room occupancy hotels and apartments, has no predetermined time limits and generally includes access to services in the community. There appears to be no single housing model that is most effective for people with SMI. Experts have stated that linking housing and supportive services is crucial for helping people with SMI live independently and that, because of the varying needs of people with SMI who are homeless, a range of housing and service options is necessary.

Provides rental assistance to very low-income families, elderly persons, and disabled persons for decent, safe, and sanitary housing in the private market.

Provides rental assistance to homeless individuals to obtain permanent housing in single-room occupancy units.

Provides rental assistance, together with supportive services funded from other federal, state, local, and private sources, to homeless people with disabilities. Program grants provide rental assistance payments through (1) tenant-based rental assistance, (2) sponsor-based rental assistance, (3) building owner-based rental assistance, or (4) single room occupancy assistance.
Provides grants to states, local governmental entities, private nonprofit organizations, and community mental health associations to develop supportive housing and supportive services to assist homeless persons in the transition from homelessness and to enable them to live as independently as possible. Program funds may provide (1) transitional housing, (2) permanent housing for homeless persons with disabilities, (3) supportive services for homeless persons not living in supportive housing, (4) housing that is, or is a part of, an innovative development of alternative methods designed to meet the long-term needs of homeless persons, and (5) safe havens.

Other major contributors to this report were Renalyn Cuadro, Nila Garces-Osorio, Brenda R. James, Janina R. Johnson, Carolyn Feis Korman, and Craig Winslow.
Between 1987 and 1997, the growth in mental health spending in the United States roughly paralleled the growth in overall health care spending. However, federal mental health spending grew at more than twice the rate of state and local spending. This led to the federal government's share surpassing that of state and local governments, while the share attributable to private sources declined slightly. The ability to care for more people in the community has been facilitated by the continued development of new medications that have fewer side effects and are more effective in helping people manage their illness. Furthermore, treatment approaches, such as assertive community treatment, supported employment, and supportive housing, provide the ongoing assistance that adults with serious mental illness (SMI) often need to function in the community. The Health Care Financing Administration (HCFA) has encouraged the use of community-based services for Medicaid beneficiaries with SMI by disseminating information on the use of new medications and treatment models, which can help people function better in the community. HCFA also supports states' use of Medicaid managed health care services. However, incentives associated with capitated payment can lead to reduced service utilization. HCFA is developing a set of safeguards for people with special health care needs enrolled in Medicaid managed health care and has indicated that it will devise a plan to implement these safeguards, such as through legislative or regulatory action or making changes in Medicaid administrative policies.
The Department of the Navy’s (DON) primary mission is to organize, train, maintain, and equip combat-ready naval forces capable of winning wars, deterring aggression by would-be foes, preserving freedom of the seas, and promoting peace and security. Its operating forces, known as the fleet, are supported by four systems commands. Table 1 provides a brief description of each command’s responsibilities. To support the department’s mission, these commands perform a variety of interrelated and interdependent business functions (e.g., acquisition and financial management), relying heavily on business systems to do so. In fiscal year 2009, DON’s budget for business systems and associated infrastructure was about $2.7 billion, of which about $2.2 billion was allocated to operations and maintenance of existing systems and about $500 million to systems in development and modernization. Of the approximately 2,480 business systems that DOD reports having, DON accounts for 569, or about 23 percent, of the total. Navy ERP is one such system investment. In July 2003, the Assistant Secretary of the Navy for Research, Development, and Acquisition established Navy ERP to converge the functionality of four pilot systems that were under way at the four commands into one system. According to DOD, Navy ERP is to address the Navy’s long-standing problems related to financial transparency and asset visibility. Specifically, the program is intended to standardize the Navy’s acquisition, financial, program management, plant and wholesale supply, and workforce management business processes across its dispersed organizational components, and support about 86,000 users when fully implemented. Navy ERP is being developed in a series of increments using the Systems Applications and Products (SAP) commercial software package, augmented as needed by customized software. 
SAP consists of multiple, integrated functional modules that perform a variety of business-related tasks, such as finance and acquisition. The first increment, called Template 1, is currently the only funded portion of the program and consists of three releases (1.0, 1.1, and 1.2). Release 1.0, Financial and Acquisition, is the largest of the three releases in terms of Template 1 functional requirements. See table 2 for a description of these releases. DON estimates the life-cycle cost for Template 1 to be about $2.4 billion, including about $1 billion for acquisition and $1.4 billion for operations and maintenance. The program office reported that approximately $600 million was spent from fiscal year 2004 through fiscal year 2008. For fiscal year 2009, about $190 million is planned to be spent. To acquire and deploy Navy ERP, DON established a program management office within the Program Executive Office for Executive Information Systems. The program office manages the program's scope and funding and is responsible for ensuring that the program meets its key objectives. To accomplish this, the program office performs program management functions, including testing, change control, and IV&V. In addition, various DOD and DON organizations share program oversight and review activities. A listing of key entities and their roles and responsibilities is provided in table 3. To deliver system and other program capabilities and to provide program management support services, Navy ERP relies on multiple contractors, as described in table 4. Template 1 of Navy ERP was originally planned to reach full operational capability (FOC) in fiscal year 2011, and its original estimated life-cycle cost was about $1.87 billion. The estimate was later baselined in August 2004 at about $2.0 billion. In December 2006 and again in September 2007, the program was rebaselined.
FOC is now planned for fiscal year 2013, and the estimated life-cycle cost is about $2.4 billion (a 31 percent increase over the original estimate). The program is currently in the production and deployment phase of the defense acquisition system, having completed the system development and demonstration phase in September 2007. This was 17 months later than the program's original schedule set in August 2004, but on time according to the revised schedule set in December 2006. Changes in the program's acquisition phase timeline are depicted in figure 1, and life-cycle cost estimates are depicted in figure 2. Release 1.0 was deployed at NAVAIR in October 2007, after passing developmental testing and evaluation. Initial operational capability (IOC) was achieved in May 2008, 22 months later than the baseline established in August 2004 and 4 months later than the new baseline established in September 2007. According to program documentation, these delays were due, in part, to challenges experienced at NAVAIR in converting data from legacy systems to run on the new system and in implementing new business procedures associated with the system. In light of the delays at NAVAIR in achieving IOC, the deployment schedules for the other commands were revised in 2008. Release 1.0 was deployed at NAVSUP in October 2008 as scheduled, but deployment at SPAWAR was rescheduled for October 2009, 18 months later than planned, and deployments at NAVSEA General Fund and Navy Working Capital Fund were rescheduled for October 2010 and October 2011, respectively, each 12 months later than planned. Release 1.1 is currently being developed and tested and is planned to be deployed at NAVSUP in February 2010, 7 months later than planned, and at the Navy's Fleet and Industrial Supply Centers (FISC) starting in February 2011. Changes in the deployment schedule are depicted in figure 3. We have previously reported that DOD has not effectively managed key aspects of a number of business system investments, including Navy ERP.
Among other things, our reviews have identified weaknesses in such areas as architectural alignment and informed investment decision making, which are the focus of the Fiscal Year 2005 Defense Authorization Act business system provisions. Our reviews have also identified weaknesses in other system acquisition and investment management areas, such as earned value management, economic justification, risk management, requirements management, test management, and IV&V practices. In September 2008, we reported that DOD had implemented key information technology (IT) management controls on Navy ERP to varying degrees of effectiveness. For example, the control associated with managing system requirements had been effectively implemented, and important aspects of other controls had been at least partially implemented, including those associated with economically justifying investment in the program and proactively managing program risks. However, other aspects of these controls, as well as the bulk of what was needed to effectively implement earned value management, had not been effectively implemented. As a result, the controls that were not effectively implemented had, in part, contributed to sizable cost and schedule shortfalls. Accordingly, we made recommendations aimed at improving cost and schedule estimating, earned value management, and risk management. DOD largely agreed with our recommendations. In July 2008, we reported that DOD had not implemented key aspects of its IT acquisition policies and related guidance on its Global Combat Support System–Marine Corps (GCSS-MC) program. For example, we reported that it had not economically justified its investment in GCSS-MC on the basis of reliable estimates of both benefits and costs and had not effectively implemented earned value management. Moreover, the program office had not adequately managed all program risks and had not used key system quality measures. 
We concluded that by not effectively implementing these IT management controls, the program was at risk of not delivering a system solution that optimally supports corporate mission needs, maximizes capability mission performance, and is delivered on time and within budget. Accordingly, we made recommendations aimed at strengthening cost estimating, schedule estimating, risk management, and system quality measurement. The department largely agreed with our recommendations. In July 2007, we reported that the Army’s approach for investing about $5 billion in three related programs—the General Fund Enterprise Business System, Global Combat Support System-Army Field/Tactical, and Logistics Modernization Program—did not include alignment with the Army enterprise architecture or use of a portfolio-based business system investment review process. Further, the Logistics Modernization Program’s testing was not adequate and had contributed to the Army’s inability to resolve operational problems. In addition, the Army had not established an IV&V function for any of the three programs. Accordingly, we recommended, among other things, use of an independent test team and establishment of an IV&V function. DOD agreed with the recommendations. In December 2005, we reported that DON had not, among other things, economically justified its ongoing and planned investment in the Naval Tactical Command Support System (NTCSS) and had not adequately conducted requirements management and testing activities. Specifically, requirements were not traceable and developmental testing had not identified problems that, subsequently, twice prevented the system from passing operational testing. Moreover, DON had not effectively performed key measurement, reporting, budgeting, and oversight activities. We concluded that DON could not determine whether NTCSS, as defined and as being developed, was the right solution to meet its strategic business and technological needs. 
Accordingly, we recommended developing the analytical basis necessary to know if continued investment in NTCSS represented a prudent use of limited resources, and strengthening program management, conditional upon a decision to proceed with further investment in the program. The department largely agreed with our recommendations. In September 2005, we reported that while Navy ERP had the potential to address some of DON’s financial management weaknesses, it faced significant challenges and risks, including developing and implementing system interfaces with other systems and converting data from legacy systems. Also, we reported that the program was not capturing quantitative data to assess effectiveness, and had not established an IV&V function. We made recommendations to address these areas, including having the IV&V agent report directly to program oversight bodies, as well as the program manager. DOD generally agreed with our recommendations, including that an IV&V function should be established. However, it stated that the IV&V team would report directly to program management who in turn would inform program oversight officials of any significant IV&V results. In response, we reiterated the need for the IV&V to be independent of the program and stated that performing IV&V activities independently of the development and management functions helps to ensure that the results are unbiased and based on objective evidence. We also reiterated our support for the recommendation that the IV&V reports be provided to the appropriate oversight body so that it can determine whether any of the IV&V results are significant. We noted that doing so would give added assurance that the results were objective and that those responsible for authorizing future investments in Navy ERP have the information needed to make informed decisions. To be effectively managed, testing should be planned and conducted in a structured and disciplined fashion. 
According to DOD and industry guidance, system testing should be progressive, meaning that it should consist of a series of test events that first focus on the performance of individual system components, then on the performance of integrated system components, followed by system-level tests that focus on whether the entire system (or major system increments) is acceptable, interoperable with related systems, and operationally suitable to users. For this series of related test events to be conducted effectively, all test events need to be, among other things, governed by a well-defined test management structure and adequately planned. Further, the results of each test event need to be captured and used to ensure that problems discovered are disclosed and corrected. Key aspects of Navy ERP testing have been effectively managed. Specifically, the program has established an effective test management structure, key development events were based on well-defined plans, the results of all executed test events were documented, and problems found during testing (i.e., test defects) were captured in a test management tool and subsequently analyzed, resolved, and disclosed to decision makers. Further, while we identified instances in which the tool did not contain key data about defects that are needed to ensure that unauthorized changes to the status of defects do not occur, the number of instances found is not sufficient to conclude that the controls were not operating effectively. Notwithstanding the missing data, this means that Navy ERP testing has been performed in a manner that increases the chances that the system will meet operational needs and perform as intended. The program office has established a test management structure that satisfies key elements of DOD and industry guidance. For example, the program has developed a Test and Evaluation Master Plan (TEMP) that defines the program's test strategy.
As provided for in the guidance, this strategy consists of a sequence of tests in a simulated environment to verify first that individual system parts meet specified requirements (i.e., development testing) and then verify that these combined parts perform as intended in an operational environment (i.e., operational testing). As we have previously reported, such a sequencing of test events is an effective approach because it permits the source of defects to be isolated sooner, before it is more difficult and expensive to address. More specifically, the strategy includes a sequence of developmental tests for each release consisting of three cycles of integrated system testing (IST) followed by user acceptance testing (UAT). Following development testing, the sequence of operational tests includes the Navy's independent operational test agency conducting initial operational test and evaluation (IOT&E) and then follow-on operational test and evaluation (FOT&E), as needed, to validate the resolution of deficiencies found during IOT&E. See table 5 for a brief description of the purpose of each test activity, and figure 4 for the schedule of Release 1.0 and 1.1 test activities. According to relevant guidance, test activities should be governed by well-defined and approved plans. Among other things, such plans are to include a defect triage process, metrics for measuring progress in resolving defects, test entrance and exit criteria, and test readiness reviews. Each developmental test event for Release 1.0 (i.e., each cycle of integrated systems testing and user acceptance testing) was based on a well-defined test plan. For example, each plan provided for conducting daily triage meetings to (1) assign new defects a criticality level using documented criteria, (2) record new defects and update the status of old defects in the test management tool, and (3) address other defect and testing issues.
Further, each plan included defect metrics, such as the number of defects found and corrected and their age. In addition, each plan specified that testing was not complete until all major defects found during the cycle were resolved, and all unresolved defects' impact on the next test event was understood. Further, the plans provided for holding test readiness reviews to review test results as a condition for proceeding to the next event. By ensuring that plans for key test activities include these aspects of effective test planning, the risk of test activities not being effectively and efficiently performed is reduced, thus increasing the chances that the system will meet operational needs and perform as intended. According to industry guidance, effective system testing includes capturing, analyzing, resolving, and disclosing to decision makers the status of problems found during testing (i.e., test defects). Further, this guidance states that these results should be collected and stored according to defined procedures and placed under appropriate levels of control to ensure that any changes to the results are fully documented. To the program's credit, the relevant testing organizations have documented test defects in accordance with defined plans. For example, daily triage meetings involving the test team lead, testers, and functional experts were held to review each new defect, assign it a criticality level, and designate someone responsible for resolving it and for monitoring and updating its resolution in the test management tool. Further, test readiness reviews were conducted at which entrance and exit criteria for each key test event were evaluated before proceeding to the next event.
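The cycle exit criterion described above (testing is not complete until all major defects found during the cycle are resolved) can be sketched as a simple check. This is an illustrative sketch only; the defect fields and criticality labels are assumptions, not the schema of the Navy ERP test management tool.

```python
# Illustrative sketch of a test-cycle exit criterion: no major defect may
# remain unresolved. Field names and criticality labels are hypothetical.
from dataclasses import dataclass

@dataclass
class Defect:
    defect_id: int
    criticality: str   # e.g., "major" or "minor", per documented criteria
    status: str        # e.g., "open" or "resolved"

def cycle_exit_criteria_met(defects):
    """Return True only if every major defect has been resolved."""
    return all(d.status == "resolved"
               for d in defects if d.criticality == "major")

defects = [
    Defect(1, "major", "resolved"),
    Defect(2, "minor", "open"),      # minor defects do not block exit
    Defect(3, "major", "open"),
]
print(cycle_exit_criteria_met(defects))   # False: defect 3 blocks exit
defects[2].status = "resolved"
print(cycle_exit_criteria_met(defects))   # True: cycle may proceed
```

A check like this would feed the test readiness review: the unresolved defects it flags are exactly those whose impact on the next test event must be understood before proceeding.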
As part of these reviews, the program office and oversight officials, command representatives, and test officials reviewed the results of test events to ensure, among other things, that significant defects were closed and that there were no unresolved defects that could affect execution of the next test event. However, the test management tool did not always contain key data about recorded defects that are needed to ensure that unauthorized changes to the status of defects do not occur. According to information systems auditing guidelines, audit tools should be in place to monitor user access to systems to detect possible errors or unauthorized changes. For Navy ERP, this was not always the case. Specifically, while the tool has the capability to track changes to test defects in a history log, our analysis of 80 randomly selected defects in the tool disclosed two instances in which the tool did not record when a change in the defect's status was made or who made the change. In addition, our analysis of 12 additional defects that were potential anomalies disclosed two additional instances in which the tool did not record when a change was made and who made it. While our sample size and results do not support any conclusions as to the overall effectiveness of the controls in place for recording and tracking test defect status changes, they do show that it is possible for changes to be made without a complete audit trail surrounding those changes. After we shared our results with program officials, they stated that they provided each instance to the vendor responsible for the tracking tool for resolution. These officials attributed these instances to vendor updates to the tool that caused the history settings to default to "off." To address this weakness, they added that they are now ensuring that the history log settings are set correctly after any update to the tool.
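The audit-trail control at issue here, in which every change to a defect's status is logged with the time and the user who made it, can be sketched as follows. This is a hypothetical illustration of the control, not the design of the vendor's tracking tool; the class and field names are assumptions.

```python
# Illustrative sketch of a defect audit trail: each status change appends a
# fully populated history entry (timestamp, user, old status, new status),
# so a change without a matching entry can be detected as an anomaly.
from datetime import datetime, timezone

class TrackedDefect:
    def __init__(self, defect_id, status):
        self.defect_id = defect_id
        self.status = status
        self.history = []  # entries: (timestamp, user, old_status, new_status)

    def set_status(self, new_status, user):
        # The history entry is written in the same step as the change itself,
        # so logging cannot be silently turned "off" for individual changes.
        self.history.append(
            (datetime.now(timezone.utc), user, self.status, new_status))
        self.status = new_status

d = TrackedDefect(42, "open")
d.set_status("in_triage", "tester1")
d.set_status("resolved", "lead1")
# An audit check: every status change has a matching, fully populated entry.
assert len(d.history) == 2
assert all(user and when for when, user, _, _ in d.history)
```

The anomalies GAO found correspond to history entries with missing "when" or "who" fields, which the final check here would flag.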
This addition is a positive step because without an effective information system access audit tool, the probability of test defect status errors or unauthorized changes is increased. Industry best practices and DOD guidance recognize the importance of system change control when developing and maintaining a system. Once the composition of a system is sufficiently defined, a baseline configuration is normally established, and changes to that baseline are placed under a disciplined change control process to ensure that unjustified and unauthorized changes are not introduced. Elements of disciplined change control include (1) formally documenting a change control process, (2) rigorously adhering to the documented process, and (3) adopting objective criteria for considering a proposed change, including its estimated cost and schedule impact. To its credit, the Navy ERP program has formally documented a change control process. Specifically, it has a plan and related procedures that include the purpose and scope of the process—to ensure that any changes made to the system are properly identified, developed, and implemented in a defined and controlled environment. It also is using an automated tool to capture and track the disposition of each change request. Further, it has defined roles and responsibilities and a related decision-making structure for reviewing and approving system changes. In this regard, the program has established a hierarchy of review and approval boards, including a Configuration Control Board to review all changes and a Configuration Management Board to further review changes estimated to require more than 100 hours or $25,000 to implement. Furthermore, a Navy ERP Senior Integration Board was recently established to review and approve requests to add, delete, or change the program's requirements.
In addition, the change control process states that decisions are to be based on, among other things, the system engineering and earned value management (i.e., cost and schedule) impacts the change will introduce, such as the estimated number of work hours that will be required to effect the change. Table 7 provides a brief description of the decision-making authorities and boards and their respective roles and responsibilities. Navy ERP is largely adhering to its documented change control process. Specifically, our review of a random sample of 60 change requests and minutes of related board meetings held between May 2006 and April 2009 showed that the change requests were captured and tracked using an automated tool, and they were reviewed and approved by the designated decision-making authorities and boards, in accordance with the program's documented process. However, the program has not sufficiently or consistently considered the cost and schedule impacts of proposed changes. Our analysis of the random sample of 60 change requests, including our review of related board meeting minutes, showed no evidence that cost and schedule impacts were identified or that they were considered. Specifically, we did not see evidence that the cost and schedule impacts of these change requests were assessed. According to program officials, the cost and schedule impacts of each change were discussed at control board meetings. In addition, they provided two change requests to demonstrate this. However, while these change requests did include schedule impact, they did not include the anticipated cost impact of proposed changes. Rather, these two, as well as those in our random sample, included the estimated number of work hours required to implement the change.
Because the cost of any proposed change depends on other factors besides work hours, such as labor rates, the estimated number of work hours is not sufficient for considering the cost impact of a change. In the absence of verifiable evidence that cost and schedule impacts were consistently considered, approval authorities do not appear to have been provided key information needed to fully inform their decisions on whether or not to approve a change. System changes that are approved without a full understanding of their cost and schedule impacts could result in unwarranted cost increases and schedule delays. The purpose of IV&V is to independently ensure that program processes and products meet quality standards. The use of an IV&V function is recognized as an effective practice for large and complex system development and acquisition programs, like Navy ERP, as it provides objective insight into the program's processes and associated work products. To be effective, verification and validation activities should be performed by an entity that is managerially independent of the system development and management processes and products that are being reviewed. Among other things, such independence helps to ensure that the results are unbiased and based on objective evidence. The Navy has not effectively managed its IV&V function because it has not ensured that the contractor performing this function is independent of the products and processes that this contractor is reviewing and because it has not ensured that the contractor is meeting contractual requirements. In June 2006, DON awarded a professional support services contract to General Dynamics Information Technology (GDIT), to include responsibilities for, among other things, IV&V, program management support, and delivery of releases according to cost and schedule constraints.
According to the program manager, the contractor's IV&V function is organizationally separate from, and thus independent of, the contractor's Navy ERP system development function. However, the subcontractor performing the IV&V function is also performing release management. According to the GDIT contract, the release manager is responsible for developing and deploying a system release that meets operational requirements within the program's cost and schedule constraints, but it also states that the IV&V function is responsible for supporting the government in its review, approval, and acceptance of Navy ERP products (e.g., releases). The contract also states that GDIT is eligible for an optional award fee payment based on its performance in meeting, among other things, these cost and schedule constraints. Because performance of the system development and management role makes the contractor potentially unable to render impartial assistance to the government in performing the IV&V function, the contractor has an inherent conflict of interest relative to meeting cost and schedule commitments and disclosing the results of verification and validation reviews that may affect its ability to do so. The IV&V function's lack of independence is amplified by the fact that it reports directly and solely to the program manager. As we have previously reported, the IV&V function should report the issues or weaknesses that increase the risks associated with the project to program oversight officials, as well as to program management, to better ensure that the verification and validation results are objective and that the officials responsible for making program investment decisions are fully informed. Furthermore, these officials, once informed, can ensure that the issues or weaknesses reported are promptly addressed.
Without ensuring sufficient managerial independence, valuable information may not reach decision makers, potentially leading to the release of a system that does not adequately meet users' needs and operate as intended. Beyond the IV&V function's lack of independence, the program office has not ensured that the subcontractor has produced the range of deliverables that were contractually required and defined in the IV&V plan. For example, the contract and plan call for weekly and monthly reports identifying weaknesses in program processes and recommendations for improvement, a work plan for accomplishing IV&V tasks, and associated assessment reports that follow the System Engineering Plan and program schedule. However, the IV&V contractor has largely not delivered these products. Specifically, until recently, it did not produce a work plan, and only monthly reports were delivered; these reports only list meetings that the IV&V contractor attended and documents that it reviewed. They do not, for example, identify program weaknesses or provide recommendations for improvement. According to program officials, they have relied on oral reports from the subcontractor at weekly meetings, and these lessons learned have been incorporated into program guidance. According to the contractor, the Navy has expended about $1.8 million between June 2006 and September 2008 for IV&V activities, with an additional $249,000 planned to be spent in fiscal year 2009. Following our inquiries about an IV&V work plan, the IV&V contractor developed such a plan in October 2008, more than 2 years after the contract was awarded, that lists program activities and processes to be assessed, such as configuration management and testing. While this plan does not include time frames for starting and completing these assessments, meeting minutes show that the status of assessments has been discussed with the program manager during IV&V review meetings.
The first planned assessment was delivered to the program in March 2009 and provides recommendations for improving the program's configuration management process, such as using the automated tool to produce certain reports and enhancing training to understand how the tool is used. Further, program officials stated that they have also recently begun requiring the contractor to provide formal quarterly reports, the first of which was delivered to the program manager in January 2009. Our review of this quarterly report shows that it provides recommendations for improving the program's risk management process and organizational change management strategy. Notwithstanding the recent steps that the program office has taken, it nevertheless lacks an independent perspective on the program's products and management processes. DOD's successes in delivering large-scale business systems, such as Navy ERP, are in large part determined by the extent to which it employs the kind of rigorous and disciplined IT management controls that are reflected in department policies and related guidance. While implementing these controls does not guarantee a successful program, it does minimize a program's exposure to risk and thus the likelihood that it will fall short of expectations. In the case of Navy ERP, living up to expectations is important because the program is large, complex, and critical to addressing the department's long-standing problems related to financial transparency and asset visibility. The Navy ERP program office has largely implemented a range of effective controls associated with system testing and change control, including acting quickly to address issues with the audit log for its test management tool, but more can be done to ensure that the cost and schedule impacts of proposed changes are explicitly documented and considered when decisions are reached.
Moreover, while the program office has a contract for IV&V activities, it has not ensured that the contractor is independent of the products and processes that it is to review and has not held the contractor accountable for producing the full range of IV&V deliverables required under the contract. Further, it has not ensured that its IV&V contractor is accountable to a level of management above the program office, as we previously recommended. Notwithstanding the program office's considerable effectiveness in how it has managed both system testing and change control, these weaknesses increase the risk of investing in system changes that are not economically justified and unnecessarily limit the value that an IV&V agent can bring to a program like Navy ERP. By addressing these weaknesses, the department can better ensure that taxpayer dollars are wisely and prudently invested. To strengthen the management of Navy ERP's change control process, we recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to (1) revise the Navy ERP procedures for controlling system changes to explicitly require that a proposed change's life-cycle cost impact be estimated and considered in making change request decisions and (2) capture the cost and schedule impacts of each proposed change in the Navy ERP automated change control tracking tool. To increase the value of Navy ERP IV&V, we recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to (1) stop performance of the IV&V function under the existing contract and (2) engage the services of an IV&V agent that is independent of all Navy ERP management, development, testing, and deployment activities that it may review. In addition, we reiterate our prior recommendation relative to ensuring that the Navy ERP IV&V agent reports directly to program oversight officials, while concurrently sharing IV&V results with the program office.
In written comments on a draft of this report, signed by the Assistant Deputy Chief Management Officer and reprinted in appendix II, the department concurred with our recommendations and stated that it will take the appropriate corrective actions within the next 7 months. We are sending copies of this report to interested congressional committees; the Director, Office of Management and Budget; the Congressional Budget Office; and the Secretary of Defense. The report also is available at no charge on our Web site at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-3439 or hiter@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to determine whether (1) system testing is being effectively managed, (2) system changes are being effectively controlled, and (3) independent verification and validation (IV&V) activities are being effectively managed for the Navy Enterprise Resource Planning (ERP) program. To determine if Navy ERP testing is being effectively managed, we reviewed relevant documentation, such as the Test and Evaluation Master Plan and test reports, and compared them with relevant federal and related guidance. Further, we reviewed development test plans and procedures for each test event and compared them with best practices to determine whether well-defined plans were developed. We also examined test results and reports, including test readiness review documentation, and compared them against plans to determine whether they had been executed in accordance with the plans. Moreover, to determine the extent to which test defect data were being captured, analyzed, and reported, we inspected 80 randomly selected defects from a sample of 2,258 defects in the program's test management system.
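The statistical logic behind an attribute sample of this kind, in which an upper bound on the population error rate is inferred when zero problems are observed in a random sample, can be sketched as follows. This is a minimal illustration of the exact binomial bound, not the actual sampling tool GAO used:

```python
def upper_bound_zero_errors(sample_size: int, confidence: float = 0.95) -> float:
    """One-sided upper confidence bound on the population error rate
    when a random attribute sample contains zero errors.

    Solves (1 - p) ** n = 1 - confidence for p: the largest error
    rate at which observing zero errors in n draws is still plausible
    at the chosen confidence level.
    """
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / sample_size)

# For an 80-item defect sample like the one described above: finding
# zero problems would support bounding the error rate below 4 percent.
print(round(upper_bound_zero_errors(80), 4))  # 0.0368 -> under 4 percent
```

The commonly cited rule of thumb that 59 clean samples support a 5 percent bound at 95 percent confidence falls out of the same formula.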
In addition, we reviewed the logs associated with each of these 80 defects to determine whether appropriate levels of control were in place to ensure that any changes to the results were fully documented. This sample was designed with a 5 percent tolerable error rate at the 95 percent level of confidence, so that, if we found 0 problems in our sample, we could conclude statistically that the error rate was less than 4 percent. In addition, we interviewed cognizant officials, including the program's test lead and the Navy's independent operational testers, about their roles and responsibilities for test management. To determine if Navy ERP changes are being effectively controlled, we reviewed relevant program documentation, such as the change control policies, plans, and procedures, and compared them with relevant federal and industry guidance. Further, to determine the extent to which the program is reviewing and approving change requests according to its documented plans and procedures, we inspected 60 randomly selected change requests in the program's configuration management system. In addition, we reviewed the change request forms associated with these 60 change requests and related control board meeting minutes to determine whether objective criteria for considering a proposed change, including estimated cost or schedule impacts, were adopted. In addition, we interviewed cognizant officials, including the program manager and systems engineer, about their roles and responsibilities for reviewing, approving, and tracking change requests. To determine if IV&V activities are being effectively managed, we reviewed Navy ERP's IV&V contract, strategy, and plans and compared them with relevant industry guidance. We also analyzed the contractual relationships relative to legal standards that govern organizational conflict of interest. In addition, we examined IV&V monthly status reports, work plans, an assessment report, and a quarterly report to determine the extent to which contract requirements were met.
We interviewed contractor and program officials about their roles and responsibilities for IV&V and to determine the extent to which the program's IV&V function is independent. We conducted this performance audit at Department of Defense offices in the Washington, D.C., metropolitan area; Annapolis, Maryland; and Norfolk, Virginia, from August 2008 to September 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, key contributors to this report were Neelaxi Lakhmani, Assistant Director; Monica Anatalio; Carl Barden; Neil Doherty; Cheryl Dottermusch; Lee McCracken; Karl Seifert; Adam Vodraska; Shaunyce Wallace; and Jeffrey Woodward.
The Department of Defense (DOD) has long been challenged in effectively implementing key acquisition management controls on its thousands of business system investments. For this and other reasons, GAO has designated DOD's business systems modernization efforts as high-risk since 1995. One major business system investment is the Navy's Enterprise Resource Planning (ERP) system. Initiated in 2003, it is to standardize the Navy's business processes, such as acquisition and financial management. It is being delivered in increments, the first of which is to cost about $2.4 billion over its 20-year useful life and be fully deployed by fiscal year 2013. To date, the program has experienced about $570 million in cost overruns and a 2-year schedule delay. GAO was asked to determine whether (1) system testing is being effectively managed, (2) system changes are being effectively controlled, and (3) independent verification and validation (IV&V) activities are being effectively managed. To do this, GAO analyzed relevant program documentation, traced random samples of test defects and change requests, and interviewed cognizant officials. The Navy has largely implemented effective controls on Navy ERP associated with system testing and change control. For example, it has established a well-defined structure for managing tests, including providing for a logical sequence of test events, adequately planning key test events, and documenting and reporting test results. In addition, it has documented, and is largely following, its change request review and approval process, which reflects key aspects of relevant guidance, such as having defined roles and responsibilities and a hierarchy of control boards. However, important aspects of test management and change control have not been fully implemented. Specifically, the program's tool for auditing defect management did not always record key data about changes made to the status of identified defects. 
To its credit, the program office recently took steps to address this, thereby reducing the risk of defect status errors or unauthorized changes. Also, while the program office's change review and approval procedures include important steps, such as considering the impact of a change, and program officials told GAO that cost and schedule impacts of a change are discussed at control board meetings, GAO's analysis of 60 randomly selected change requests showed no evidence that cost and schedule impacts were in fact considered. Without such key information, decision-making authorities lack an adequate basis for making informed investment decisions, which could result in cost overruns and schedule delays. The Navy has not effectively managed its IV&V activities, which are designed to obtain an unbiased position on whether product and process standards are being met. In particular, the Navy has not ensured that the IV&V contractor is independent of the products and processes that it is reviewing. Specifically, the same contractor responsible for performing IV&V of Navy ERP products (e.g., system releases) is also responsible for ensuring that system releases are delivered within cost and schedule constraints. Because performance of this system development and management role makes the contractor potentially unable to render impartial assistance to the government in performing the IV&V function, there is an inherent conflict of interest. In addition, the IV&V agent reports directly and solely to the program manager and not to program oversight officials. As GAO has previously reported, the IV&V agent should report the findings and associated risks to program oversight officials, as well as program management, in order to better ensure that the IV&V results are objective and that the officials responsible for making program investment decisions are fully informed. 
Furthermore, the contractor has largely not produced the range of IV&V deliverables that were contractually required between 2006 and 2008. To its credit, the program office recently began requiring the contractor to provide assessment reports, as required under the contract, as well as formal quarterly reports; the contractor delivered the results of the first planned assessment in March 2009. Notwithstanding the recent steps that the program office has taken, it nevertheless lacks an independent perspective on the program's products and management processes.
Under federal statutes, executive orders, and department-level guidance, DOD is to meet various renewable energy goals. Statutory goals include the following: Production. DOD is to adopt the goal to produce or procure not less than 25 percent of the total quantity of facility energy it consumes within its facilities from renewable sources beginning in fiscal year 2025. DOD can meet this goal by producing electricity using renewable sources on its installations or by procuring electricity produced using renewable sources that is produced in other locations. Consumption. To the extent economically feasible and technically practicable, not less than 7.5 percent of electrical energy consumed by federal agencies is to come from renewable sources beginning in fiscal year 2013. According to federal guidance implementing the Energy Policy Act of 2005, to count toward the consumption goal, DOD must possess renewable energy credits for electricity it consumes. Executive Order 13693 established additional goals, including directing agency heads to ensure that increasing percentages of electrical energy consumed in buildings be renewable electric energy where cost-effective, beginning with 10 percent in 2016 and climbing to at least 30 percent by fiscal year 2025. In addition, the military departments have also taken steps to encourage renewable energy, and each has issued department-level guidance to develop 1 gigawatt of renewable energy—Air Force by 2016, Navy by 2020, and Army by 2025. The military departments have also established some unique energy goals. For example, the Secretary of the Navy established a goal to derive at least 50 percent of shore-based energy requirements from alternative sources, including renewable energy, by 2020. In addition, in its energy strategy, the Army established a goal to increase its use of renewable or alternative resources for power and fuel use. 
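The consumption-goal arithmetic described above can be sketched in a few lines. The percentage thresholds below come from the statute and executive order as summarized here; the installation consumption figures are hypothetical, and the key wrinkle is that renewable electricity counts only when backed by renewable energy credits (RECs):

```python
def meets_consumption_goal(total_mwh: float, rec_backed_renewable_mwh: float,
                           goal_fraction: float) -> bool:
    """True if REC-backed renewable consumption meets the stated goal.

    Per the federal guidance summarized above, renewable electricity
    counts toward the consumption goal only when DOD possesses the
    matching renewable energy credits (RECs) for that electricity.
    """
    return rec_backed_renewable_mwh >= goal_fraction * total_mwh

# Hypothetical installation consuming 100,000 MWh in a fiscal year,
# of which 8,000 MWh is renewable and REC-backed:
print(meets_consumption_goal(100_000, 8_000, 0.075))  # statutory 7.5% goal -> True
print(meets_consumption_goal(100_000, 8_000, 0.30))   # E.O. 13693 FY2025 30% -> False
```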
To meet these goals, over a number of years, DOD has taken steps to develop renewable energy projects on its installations. Additionally, Congress requires that DOD report information on its progress toward these and other energy goals in its annual energy management report. DOD’s most recent report identifies more than 1,130 operational projects of varying generating technologies and capacities. In addition to its renewable energy goals, DOD has also identified renewable energy projects as a possible way to contribute to its energy security objective. In particular, DOD has noted that its installations and missions can be vulnerable to disruptions of the commercial electricity grid and that renewable energy, combined with energy storage and other tools, can allow installations to maintain critical operations without electricity from outside the installations. To develop renewable energy projects, DOD can either directly fund the construction or development of projects or work with private developers to help initially finance them. To directly develop renewable energy projects, DOD typically uses funds provided through its annual appropriations process—referred to in this report as up-front appropriated funding. Otherwise, DOD can finance projects through agreements with private developers and pay back the costs of the projects over time—referred to as alternative financing mechanisms. In addition, when developing projects with private developers, DOD may use one of three types of land use agreements to provide developers with use of DOD land. Through such agreements, DOD allows developers the use of its land, sometimes in exchange for revenues or in-kind consideration. Each type of land use agreement has different requirements for compensation for the use of DOD land, as follows: Leases. Under 10 U.S.C.
§ 2667, the secretary of a military department (or the Secretary of Defense in certain contexts) may lease land in exchange for the payment of cash or in-kind consideration in an amount that is not less than the fair market value of the lease interest, as determined by the secretary. Easements. Under 10 U.S.C. § 2668, the secretary of a military department may provide an easement for rights-of-way, upon terms that the secretary considers advisable, but is not required to include a cash or in-kind consideration. Access licenses or permits. Depending on the structure of the agreement, DOD may provide contractors a license or permit, which allows access to and use of a site for the purposes of the contract, without compensation. According to DOD officials and documents, in recent years, DOD’s approach emphasized developing larger projects and working with private developers to develop renewable energy projects with a generating capacity of greater than 1 megawatt on DOD installations in the United States. DOD used alternative financing mechanisms—that is, financing the initial capital investments in projects with private funding—to facilitate working with private developers. Nonetheless, DOD also directly developed some of these projects using up-front appropriated funds. According to DOD officials and documents, the department has emphasized generally larger renewable energy projects—such as those greater than 1 megawatt—and working with private developers. In 2012, DOD testified before Congress that it planned to emphasize the development of large-scale renewable energy projects with private developers. In 2011, the Army began an initiative focusing on large-scale renewable energy, and in 2014, it established the Office of Energy Initiatives and issued supporting guidance for developing large-scale projects with private developers. 
According to the guidance, the Army forms relationships with project developers, utilities, and the renewable energy industry and leverages these relationships to identify, develop, and finance projects across its installations. Likewise, in 2014, the Navy established the Renewable Energy Program Office to provide a centralized Navy and Marine Corps approach to developing large-scale renewable energy with private developers. According to a Marine Corps official, there has been a shift toward larger projects in recent years, and the Marine Corps’ strategy for renewable energy will be to finance large-scale projects through private developers. Similarly, according to Air Force officials, the Air Force has been shifting its emphasis toward developing large-scale renewable energy projects with private developers, in part to avoid committing DOD resources to the ownership or operation of renewable energy projects. In March 2016, the Air Force announced the establishment of its Office of Energy Assurance to focus on developing large-scale renewable and other energy projects with private developers. DOD officials told us that the recent focus on pursuing larger projects offers some key advantages. For example, officials said that the increasing emphasis on larger projects offers better opportunities to more efficiently reach DOD’s renewable energy goals and that projects that generate more electricity allow the installations to obtain larger amounts of renewable electricity to apply toward energy goals. According to DOD officials, because of recognition that larger projects can sometimes be more cost-effective, among other reasons, DOD has pursued projects that are larger than 1 megawatt, such as those 10 megawatts and greater. Ten of the 17 projects in our sample were 10 megawatts or larger.
According to DOD officials and our prior work, working with private developers when developing renewable energy projects offers several advantages for DOD, including the following: Access to incentives. According to DOD officials, private developers can obtain federal, state, and local tax incentives, which can significantly lower their overall costs of developing renewable energy. These incentives are not generally available to DOD if it develops projects on its own. In particular, the federal government offers certain incentives, such as tax credits to encourage the development of renewable energy, but while private developers may claim these by filing tax returns, DOD cannot claim them because it does not pay federal income taxes. Access to capital. Private developers can arrange their own funding for developing and constructing projects, which allows DOD to avoid seeking up-front appropriated funds. DOD officials told us that obtaining up-front appropriated funds for developing large-scale renewable energy projects can be difficult. Large renewable energy projects such as these can cost several million dollars. As we found in our April 2012 report, obtaining appropriations to finance projects can take longer than developing projects with alternative financing mechanisms. In that report, DOD officials told us that it can take 3 to 5 years to navigate the programming and budgeting process and to obtain military construction appropriations for the project. Up-front appropriation funding through the Energy Conservation Investment Program can also be difficult to obtain. Air Force officials told us that renewable energy projects over 1 megawatt would generally have a difficult time competing for Energy Conservation Investment Program funding against other types of energy conservation measures. Better asset management. 
According to DOD officials and our previous work, working with private developers allows DOD to leverage private companies’ expertise in developing and managing projects and limits the number of personnel DOD has to commit to projects. Better risk management. According to prior work and military department officials, private developers can be held responsible for development and operational risks, depending on the contract terms. Previous reports and DOD officials we interviewed also identified drawbacks to entering into agreements with private developers, including the following: The federal government incurs the cost of some incentives used to develop projects on DOD installations. Many of the financial incentives private developers use, such as federal tax credits, are paid for by other parts of the federal government, such as the Department of the Treasury. As we found in an April 2015 report, incentives for renewable energy projects like those in our sample have collectively cost taxpayers $13.7 billion in tax expenditures, such as tax credits, and an additional $16.8 billion in grants provided in lieu of tax expenditures from fiscal year 2004 through 2013. Because DOD’s analysis of cost-effectiveness solely focuses on the costs DOD incurs, these costs to the government are not included in DOD’s decision-making process. As a result, projects using such incentives may be more expensive to the government than the cost that DOD estimates it will incur on its own. In its comments on our draft report, DOD stated that in federal procurement, it is the norm for a business case to address the cost to the agency, not to the entire government. Private financing of projects can increase overall cost.
As we reported in a December 2004 report, and more recently in an April 2012 report, financing projects through private developers may be more expensive over time than using up-front appropriations because the federal government's cost of capital is lower than that of the private sector.

Working with private developers can require significant DOD expertise. Army officials told us that working with private developers can require staff to help the developers understand specific requirements for development on installations. In particular, developing projects inside installations involves a complex combination of financing and regulatory requirements, the need to ensure that projects are compatible with installations' military missions, and other considerations that require DOD expertise.

DOD can face challenges in completing work to meet external deadlines. Air Force officials said that renewable energy projects incorporate a number of processes, including environmental reviews, procurement, renewable energy analysis, and real estate valuations. In some cases, these processes must be pursued concurrently to work within a time frame that is reasonable for successfully reaching agreements with the private sector. Also, according to information provided by the Army, completing these processes in a timely manner can be important because projects with private developers may face a variety of external deadlines to remain viable, such as those imposed by lenders when private parties obtain their own financing, or deadlines to obtain organizational approval or timely access to incentives.

DOD officials and documentation identified a range of alternative financing mechanisms DOD has used to work with private developers, singly or in combination, in developing renewable energy projects, including the following:

Power purchase agreement (PPA).
An agreement negotiated between DOD and an energy supplier to purchase specified quantities of electricity at specified prices for a specific period of time. PPAs may be short term (10 years or less) or long term (typically up to 30 years). Revenues developers receive under PPAs can be used to repay the costs of constructing and operating a renewable energy project on a DOD installation. According to DOD documentation, PPAs are becoming increasingly common. In some cases, these agreements can be used to purchase electricity from projects built on DOD land, but some can involve projects built elsewhere. DOD officials told us long-term PPAs can provide a cost-effective opportunity to repay private developers for the initial costs of building and the ongoing costs of operating these facilities.

Enhanced use lease. A long-term lease of property to a private developer for uses including the installation of renewable energy systems, in exchange for cash or in-kind services. These leases are usually for 25 years or more, up to 50 years. In many cases, enhanced use leases do not include a specific provision to purchase electricity produced from the project. According to DOD documentation, DOD is increasingly using enhanced use leases, enabling installations to obtain revenue for the value of DOD land by leasing property to private developers for long periods, such as 50-year terms. In contrast to PPAs, which provide DOD with potential financial benefits through the purchase of electricity and leasing of the land for the project, the financial benefit of enhanced use leases derives from payments received from private developers leasing DOD land for the project.

General Services Administration (GSA) areawide contract. A preexisting agreement negotiated between GSA and a local electricity supplier allowing government agencies in specified areas to purchase electricity and other utility services at established terms and conditions.
These agreements are limited to no more than 10 years. Similar to PPAs, revenues received under these contracts can be used to repay the local electricity supplier for constructing and operating a renewable energy project on a DOD installation. Army officials told us they have used GSA areawide contracts when PPAs are not economically viable or not allowed under state regulations. Army officials said that under some conditions, these types of agreements can be the easiest and fastest mechanisms for contracting for renewable energy projects because they extend existing GSA areawide contracts for the purchase of electricity from the existing supplier and merge this contract extension with an agreement with the local utility for the construction of a renewable energy project. According to Army officials, these contract extensions sometimes provide no cost savings because the purchase price of the electricity is unchanged, but the renewable energy projects may provide military installations other benefits, such as a step toward energy security from building the renewable energy project on the installation.

Energy savings performance contract (ESPC). A contract with private companies to pursue installation of energy savings measures, such as more efficient equipment and renewable energy, where the savings are used to pay for the measures. In many cases, a single contract can combine multiple energy savings measures and can last for up to 25 years.

Utility energy service contract (UESC). A contract with a local utility to provide energy management services focused on energy efficiency or demand reduction, such as designing and installing renewable energy projects. These agreements have typically not exceeded 10 years.

DOD officials told us that DOD has not emphasized some alternative financing mechanisms because they pose difficulties; see the following examples:

Short terms.
Short-term PPAs and UESCs are difficult to contract at prices competitive with existing electricity sources because of their short terms of no more than 10 years. For example, Navy officials told us that a 10-year—rather than a 25-year—PPA for the Hawaii project would have resulted in the developer setting an unacceptably high electricity rate compared to electricity from the existing supplier. Army officials told us that short-term UESCs are mostly used for small projects because, except in some special cases, it may not be possible to develop larger projects—those greater than 1 megawatt—that can be cost-effective within the required 10-year payback period. According to Army officials, the 1.9-megawatt solar photovoltaic project at Fort Campbell, Kentucky—a larger UESC project—was possible only because a $3 million grant from the state made the project cost-effective.

Access to incentives. Some ESPCs and UESCs may not allow private developers to capture federal tax incentives because Internal Revenue Service rules stipulate that only owners of the projects, or those meeting certain standards, are eligible to claim key tax expenditures. According to Army officials, the Army had structured ESPCs to allow private developers to capture federal incentives by owning the embedded renewable energy projects, but it stopped doing so after a 2012 Office of Management and Budget memorandum required government ownership of such renewable energy projects to avoid obligating the full cost of the project when the contract is signed.

DOD officials told us that they believed that developing renewable energy projects with private developers requires appropriate agreements that balance the interests of the federal government with the developers' interests. To do this, DOD typically negotiates land use and other agreements with private developers. These agreements can be complex. Some agreements may address ownership of the assets of the project.
For example, one agreement we reviewed immediately assigned ownership of the project to the Army, whereas some other agreements assigned initial ownership of the project to the private developer, with provisions to potentially transfer ownership to the Army after a specific period of time.

In addition to using alternative financing mechanisms, DOD used traditional financing methods, such as up-front appropriated funds, to develop some projects. According to DOD guidance, appropriations can be an important source of funding for energy projects. In fiscal year 2014, DOD obligated about $99 million for 130 renewable energy projects. According to a DOD report, DOD generally uses appropriated funds for small-scale projects but in some cases has used them to develop projects over 1 megawatt. Unlike projects developed using alternative financing mechanisms, projects developed using appropriated funding are generally owned by DOD and built on DOD land and, as such, do not require the negotiation of financing and land use agreements. DOD officials identified several sources of up-front appropriated funds for renewable energy projects over 1 megawatt. For example, officials identified funds made available through annual military construction appropriations as one potential source. Another key source of funding officials identified within the military construction account is the Energy Conservation Investment Program. This program has historically received annual appropriations to fund energy conservation and renewable energy, among other things. According to DOD guidance, the amount of annual awards made depends on funding and DOD priorities, among other things. In fiscal year 2015, $160 million was provided to the program—$150 million for projects and $10 million for planning and design. Proposals for Energy Conservation Investment Program projects undergo a multistep selection process, beginning with DOD guidance outlining its priorities.
DOD components, including the military departments, then develop military construction proposals and cost analyses based on this guidance. Similarly, DOD officials noted that the department can also fund renewable energy projects with funds provided through annual operation and maintenance appropriations (subject to certain limitations). DOD officials also cited other funding that Congress may periodically provide, such as funding appropriated through the American Recovery and Reinvestment Act of 2009.

DOD used various approaches to analyze the financial costs and benefits of the 17 renewable energy projects we reviewed and determined that they were generally cost-effective. However, the project documentation DOD developed for the officials responsible for approving these projects did not always clearly identify the value of land used for the projects and, in turn, the compensation the department received for the land. In addition, key differences in DOD's analyses and documentation for projects incorporating long-term PPAs raise questions about the information available to approving officials about projects' estimated costs and benefits.

DOD used various approaches to determine that, of the 17 projects we reviewed, 12 were cost-effective in producing electricity. DOD conducts business case analyses of potential renewable energy projects to determine whether they meet DOD's policy of encouraging investment in cost-effective renewable energy sources. In general, to do these analyses, DOD officials told us that DOD compares the estimated cost of the electricity from these projects over each project's life or its contract term with the estimated cost of continuing to purchase electricity from existing suppliers. If the estimated cost of purchasing electricity from a project is equal to or lower than the cost of continuing to purchase electricity from existing suppliers, DOD determines that the project is cost-effective, according to these officials.
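The comparison these officials describe amounts to a straightforward tally of total electricity costs over the contract term. The sketch below illustrates that logic; every price, quantity, and rate is a hypothetical assumption for illustration, not a figure from any actual DOD project.

```python
# Illustrative sketch of the cost-effectiveness test described above:
# compare the total cost of buying a fixed quantity of electricity from
# a project (e.g., under a PPA) with the cost of buying the same
# quantity from the existing supplier over the contract term.
# All numbers are hypothetical assumptions.

def total_cost(price_per_kwh, annual_kwh, years, escalation_rate):
    """Total cost of electricity when the unit price escalates annually."""
    return sum(price_per_kwh * (1 + escalation_rate) ** y * annual_kwh
               for y in range(years))

YEARS = 25                 # assumed long-term PPA term
ANNUAL_KWH = 10_000_000    # assumed annual purchase quantity

# Project (PPA) price: fixed at $0.12/kWh, no escalation in this sketch.
ppa_cost = total_cost(0.12, ANNUAL_KWH, YEARS, 0.0)

# Existing supplier: $0.11/kWh today, assumed to escalate 2% per year.
supplier_cost = total_cost(0.11, ANNUAL_KWH, YEARS, 0.02)

# Cost-effective if project electricity costs no more than the supplier's.
cost_effective = ppa_cost <= supplier_cost
savings = supplier_cost - ppa_cost
print(f"PPA total:      ${ppa_cost:,.0f}")
print(f"Supplier total: ${supplier_cost:,.0f}")
print(f"Cost-effective: {cost_effective} (estimated savings ${savings:,.0f})")
```

Note that a project can pass this test with zero savings, which matches the report's observation that DOD set the cost-effectiveness threshold for energy cost savings at zero.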
Figure 1 shows the locations, technologies, and other information about the 17 projects in our sample. Because of the differences in the ways these 17 selected projects were financed, DOD officials told us that they used various approaches to estimate electricity costs in their analyses. Specifically:

For 9 of the projects—including 7 projects developed using long-term PPAs and 1 using a short-term PPA (where DOD agreed to purchase specified quantities of electricity from a supplier at specified prices), as well as 1 project developed using an ESPC—DOD estimated the total cost of purchasing electricity from each project by using the developer's proposed prices for and amount of electricity specified in the contract. DOD then compared this estimate to the cost of purchasing the same amount of electricity from its existing supplier at the prices it estimated the supplier would charge over each year of the term of the contract.

For each of the 2 projects developed using GSA areawide contracts—where DOD is granting only the use of its land for the project and will continue to purchase electricity under its existing arrangement with its supplier—DOD officials told us that because there would be no change in its electricity costs, DOD did not undertake a detailed analysis to compare the cost of the project with the cost of continuing to purchase electricity from its existing supplier.

For the project developed using a UESC—where DOD would immediately own the project and obtain the electricity generated from the project—DOD compared the amount it would pay for electricity from the project over each year of the 10-year contract term to its estimate of the cost of purchasing the same amount of electricity from its existing supplier at the prices it estimated the supplier would charge during each year of the same 10-year period.
For the 2 projects funded through up-front appropriations, DOD developed life cycle cost estimates—that is, estimates of the overall costs of developing, constructing, operating, maintaining, and ultimately disposing of these projects—as well as estimates of the amount of electricity that would be produced over each year of the projects' lifetimes. DOD then compared these estimates to the cost of purchasing the same amount of electricity from its existing suppliers at the prices it estimated the suppliers would charge over the lifetimes of the projects.

For the 3 projects financed using enhanced use leases, DOD did not take steps to evaluate the cost-effectiveness of the electricity purchases, since these projects were not designed to provide cost savings from purchasing electricity. Instead, DOD examined whether the leases for these projects provided compensation at least equal to the estimate of fair market value for the land used.

For the 14 projects for which DOD evaluated cost-effectiveness, 12 projects were determined to be cost-effective based on electricity prices, and 2 projects were determined to be not cost-effective based solely on electricity prices, but DOD pursued them for other reasons. Specifically, DOD pursued a project at Fort Campbell, Kentucky, because, according to Army officials, while the project would not be cost-effective over the 10-year term of the contract, it would be cost-effective over the estimated 25-year lifetime of the project. DOD pursued a project at Marine Corps Air Station Miramar, California, because, according to a Marine Corps project document, it contributed to DOD's renewable energy goals and energy security objective, which are discussed later in this report.

DOD's business case analyses for the 12 projects it determined to be cost-effective showed a range of estimated cost savings. In some cases, DOD identified instances of relatively high expected cost savings.
For example, DOD estimated the project serving Navy and Marine Corps sites in Hawaii—Joint Base Pearl Harbor-Hickam, Marine Corps Base Hawaii at Kaneohe Bay, and Camp Smith—would provide about $75 million in cost savings over 25 years. Other projects were expected to provide modest energy cost savings. For example, DOD estimated less than $100,000 in cost savings over 20 years for a project at Fort Drum, New York. Some projects were deemed cost-effective even if they provided no cost savings because DOD established the threshold for cost-effectiveness for energy cost savings at zero—that is, it considered projects to be cost-effective as long as electricity from the projects would not cost more than electricity from the existing supplier. For example, the Army designed the GSA areawide contracts at Fort Benning, Georgia, and Fort Huachuca, Arizona, to cost the same as continuing to purchase electricity from existing suppliers, thereby meeting the minimum cost-effectiveness threshold.

The project documentation DOD developed for the officials responsible for approving the 14 of the 17 projects in our review that involved private developers and land use agreements was not always clear about the value of the land used and the compensation DOD received for granting such use. For these projects, DOD used the following three types of land agreements, and the compensation received varied widely. Specifically:

For the 6 projects that used leases—which require the government to obtain at least fair market value for the leased land—the agreements were structured to obtain cash or in-kind payments that DOD believed met this requirement.
For the 3 projects that used easements to grant the use of DOD land—which have less specific requirements regarding compensation—the levels of financial compensation varied: DOD received $1 for the easement provided for the project at Edwards Air Force Base, California, and for the other 2 projects the documentation indicated that DOD would obtain other benefits without specifying their financial value.

For the other 5 projects that used access licenses or permits to grant the use of DOD land—which do not require compensation—DOD obtained no financial compensation.

The project documentation DOD prepared for approving officials for these 14 projects differed in how it presented information about the value of the land used and the compensation DOD received. Specifically:

For the 6 projects in our sample involving leases, DOD's project documentation presented information about the value of the land and the compensation the department received in return for granting the lease, but the documentation for 2 of the 6 projects did not provide a clear comparison of these land values and compensation. For example, the documentation for a project at Nellis Air Force Base, Nevada, included information about the estimated market value of the land but did not clearly explain how the in-kind compensation DOD received for the land compared with that value. Approving officials agreed to receive in-kind compensation, including an electric substation and two lines to distribute electricity on the base. However, the project documentation did not explain how DOD estimated the value of the substation and additional distribution lines and how that value compared with the market value of the land.

For the 8 projects in our sample involving other types of agreements, such as easements and access licenses or permits, project documentation did not always include information about the value of the land and the compensation DOD received.
In particular, none of the project documentation for the 8 projects where land was granted using land use agreements other than leases included a discussion of how the value of the land compared with the compensation DOD received. For example, the documentation for the project that provided about 120 acres of land at Naval Air Weapons Station China Lake, California, using an access license and a long-term PPA did not discuss the value of the land or compare it with the value of any compensation. Similarly, the documentation for the project that provided over 150 acres of land at Fort Huachuca, Arizona, using an easement and a GSA areawide contract did not provide a comparison of the fair market value of the land with an estimate of the compensation DOD received in return.

DOD has guidance for presenting land values in project documentation; however, the guidance does not address all types of alternative financing mechanisms currently in use. Because the 2012 Office of the Secretary of Defense policy memorandum on alternative financing mechanisms does not apply to all types of alternative financing mechanisms, it is not certain that projects to which it does not apply are obtaining the fair market value for land, either in kind or in cash, required by 10 U.S.C. § 2667. For example, the guidance does not apply to 7 of the 14 selected projects involving alternative financing mechanisms and land use agreements that we examined. Under Standards for Internal Control in the Federal Government, agencies are to clearly document internal controls, and the documentation is to appear in management directives, administrative policies, or operating manuals. While DOD has guidance for some alternative financing mechanisms used to work with private developers, the guidance does not clearly apply to all alternative financing mechanisms.
Without modifying its guidance for presenting land values in project documentation to apply to the full range of alternative financing mechanisms it has used, DOD does not have reasonable assurance that project documentation for approving officials will be consistent or complete for projects using these kinds of financing mechanisms. In addition, the guidance does not direct project documentation to include a comparison of the value of the land used and the compensation DOD receives for it. Our 2009 cost-estimating guide states that one basic characteristic of a credible cost estimate is the recognition of excluded costs: any excluded costs should be disclosed and given a rationale. By clarifying the guidance to direct all project documentation for alternatively financed projects involving land use agreements to include the value of the land, the compensation DOD would receive for it, and how the value of the land compares with the value of the compensation, DOD approving officials would have more information for understanding the financial costs and benefits of a project. This information can be particularly important for approving officials for projects like Fort Huachuca and other GSA areawide contracts, where DOD provides the use of its land but obtains no energy cost savings because the cost of purchasing electricity remains the same.

Key differences in how DOD conducts business case analyses for renewable energy projects incorporating long-term PPAs—those with terms of up to 30 years—and how it documents these analyses raise questions about the information available to approving officials about projects' estimated costs and benefits. First, differences in the assumptions DOD used to estimate electricity prices from existing suppliers could affect DOD's conclusions about projects' estimated cost savings. Second, DOD examined but did not consistently document the sensitivity of its estimates for some projects to changes in these assumptions.
Third, DOD’s project documentation was not always clear or consistent about how compensation for the use of its land was reflected in its analyses of whether electricity produced by the projects was cost-effective. For the seven projects in our sample involving long-term PPAs, DOD used different sources for the assumptions when it developed its estimates of the cost of continuing to purchase electricity from existing suppliers, and these differences raise questions about the estimated costs and benefits of these projects. Specifically, in developing its estimates of the costs of continuing to purchase energy from existing suppliers, DOD used different sources for assumptions, such as how existing suppliers’ electricity prices may change in the future—known as escalation rates. Escalation rates are a key assumption in these estimates because if the actual escalation rate turns out to be lower than the rate DOD assumed in its analysis, its estimates of electricity prices in future years from existing suppliers would be overstated and make renewable electricity appear more cost-effective than it actually would be. Accordingly, any cost savings associated with purchasing electricity from the project instead of from existing suppliers would have been also overstated. Conversely, if the actual escalation rate turns out to be higher than the rate DOD assumed in its analysis, the estimated electricity prices in future years from existing sources would be understated and make renewable electricity appear less cost-effective than it actually would be. Eleven of the 17 projects we reviewed required DOD to use escalation rates for electricity prices to estimate cost savings. DOD used assumptions in National Institute of Standards and Technology’s guidance for the 4 projects that involved financial mechanisms other than long-term PPAs. 
However, for 6 of the 7 remaining projects that required the use of escalation rates and involved long-term PPAs, DOD relied on assumptions from sources other than the National Institute of Standards and Technology's guidance. GAO's 2009 cost-estimating guide highlights the importance of obtaining valid data when preparing credible cost estimates and the need for consistency in how cost estimates are structured. DOD has not issued guidance for preparing cost estimates for projects involving all the financing mechanisms the department uses. For projects relying on up-front appropriated funds, DOD has issued guidance that calls for the use of assumptions stipulated in guidance from the Federal Energy Management Program and the National Institute of Standards and Technology—including assumptions about the price of electricity from existing suppliers and escalation rates. However, according to DOD and Federal Energy Management Program officials, neither DOD nor the Federal Energy Management Program has issued guidance for such assumptions for projects that involve long-term PPAs. In the absence of guidance specific to projects involving long-term PPAs, DOD generally undertook special studies to develop assumptions for the analyses we examined, which means that the sources for the assumptions used for long-term PPAs may not be the same. According to DOD officials, they undertook these studies because DOD guidance did not specify the source for escalation rates to use for projects involving long-term PPAs, and DOD wanted to obtain input on developing reasonable estimates to use in its analyses. Differences in the sources for the assumptions DOD used for escalation rates to estimate the costs of renewable energy projects involving the 7 long-term PPAs in our sample raise questions about the credibility of the estimated costs of these projects.
For example, in reviewing the analyses of the projects involving long-term PPAs at Naval Air Weapons Station China Lake, California, and Marine Corps Air Station Miramar, California, we found that DOD used a higher escalation rate than the rate in the National Institute of Standards and Technology's guidance. DOD officials told us that they used the higher rate because industry representatives said the rate in the National Institute of Standards and Technology's guidance was too low. Using the higher escalation rate made the price of electricity purchased from the renewable energy project appear more competitive with the estimated price of electricity from existing suppliers than if DOD had used the rate in the National Institute of Standards and Technology's guidance. The higher assumptions, in turn, made the estimated cost savings appear higher. Questions about these projects' estimated benefits, in turn, raise questions about the information DOD officials relied on when approving these projects. In contrast, 5 other projects in our sample that used an escalation rate followed DOD guidance to use assumptions developed by the National Institute of Standards and Technology. Without guidance for long-term PPAs that identifies the preferred source for assumptions for escalation rates, there is a risk that DOD's estimates of cost savings could incorporate an escalation rate that is too high or too low, and DOD does not have a consistent basis for estimating the cost savings of projects developed using different financing mechanisms. If DOD developed guidance for renewable energy projects involving long-term PPAs that calls for consistent sources for assumptions for escalation rates, DOD officials charged with approving projects would have greater assurance that they had credible cost estimates on which to base their decisions and more consistency across projects developed using varied financing mechanisms.
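The leverage an escalation-rate assumption has over an estimated savings figure can be illustrated with a small sensitivity sweep. All prices and quantities below are hypothetical assumptions; the point is only that the same fixed-price PPA can look cost-effective or not depending solely on the rate assumed for the existing supplier's price.

```python
# Illustrative sensitivity of an estimated cost savings figure to the
# assumed escalation rate for the existing supplier's electricity price.
# All prices and quantities are hypothetical assumptions.

def supplier_total(base_price, annual_kwh, years, esc):
    """Total cost of supplier electricity with an annual price escalation rate."""
    return sum(base_price * (1 + esc) ** y * annual_kwh for y in range(years))

YEARS, ANNUAL_KWH = 25, 10_000_000
PPA_TOTAL = 0.12 * ANNUAL_KWH * YEARS  # fixed-price PPA over 25 years

# Estimated savings under each assumed escalation rate for the
# existing supplier's price (base price $0.11/kWh today).
estimated_savings = {}
for esc in (0.00, 0.01, 0.02, 0.03):
    estimated_savings[esc] = supplier_total(0.11, ANNUAL_KWH, YEARS, esc) - PPA_TOTAL

for esc, savings in estimated_savings.items():
    verdict = "cost-effective" if savings >= 0 else "not cost-effective"
    print(f"assumed escalation {esc:4.0%}: savings ${savings:>13,.0f} -> {verdict}")
```

In this sketch the verdict flips between a 0 percent and a 1 percent assumed escalation rate, which is the kind of reversal a documented sensitivity analysis would surface for approving officials.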
Project documentation for the seven projects in our sample that used long-term PPAs did not always include a discussion of how sensitive DOD's estimates of cost and cost-effectiveness were to changes in key assumptions. Recognizing that changes in key assumptions could affect these estimates, DOD examined a range of potential values for key assumptions used to develop cost estimates for some projects to determine how sensitive the estimates were to changes in these assumptions. These sensitivity analyses generally identified the escalation rate for electricity from existing suppliers as a key uncertainty affecting a project's estimated cost savings, given the difficulties inherent in predicting electricity prices sometimes decades into the future. However, DOD did not consistently describe the sensitivity analyses it conducted in the project documentation provided to approving officials for three of the seven projects that we examined involving long-term PPAs. For two projects—Davis-Monthan Air Force Base, Arizona, and Marine Corps Air Station Miramar, California—DOD did not include descriptions of the sensitivity analyses that had been conducted. For a third project—the project at Fort Drum, New York—DOD included a description of the sensitivity analysis in the project documentation but did not explain that relatively small changes in its estimates of future electricity prices from the existing source could reverse the estimated cost savings from purchasing project electricity into a loss. DOD's guidance for business case analyses states that a well-documented sensitivity analysis allows approving officials to understand how much confidence they should have in an analysis's conclusions—in this case, whether the project will be cost-effective in the future, that is, the credibility of the cost savings estimate.
In that regard, DOD guidance is consistent with Office of Management and Budget guidance and our 2009 cost-estimating guide, which identifies the characteristics of a high-quality—that is, reliable—cost estimate. Such an estimate would be credible, well-documented, accurate, and comprehensive, and documenting the estimate, which includes describing the sensitivity analysis, is among the 12 steps in our cost-estimating guide that, if followed correctly, should result in reliable and valid cost estimates that agency management can use for making informed decisions. However, DOD did not always include a description of the sensitivity analyses it conducted in the project documentation provided to approving officials. One reason for this appears to be that DOD’s guidance for projects involving long-term PPAs does not specify how to describe sensitivity analyses in project documentation. Without clarifying in guidance how to describe sensitivity analyses in project documentation, DOD does not have reasonable assurance that DOD staff will consistently document such analyses for projects involving long-term PPAs to show whether changes in key assumptions would affect the conclusion that a project was cost-effective, and that approving officials know how much confidence to have in the cost savings estimate. Project documentation for the seven projects in our sample involving long-term PPAs did not fully reflect all costs to DOD, often excluding the value of DOD land used by the project. DOD guidance on business case analyses calls for cost estimates to be complete, that is, to reflect the full cost of the resources used. 
However, for six of the seven projects incorporating long-term PPAs, project documentation did not reflect all costs, either because DOD did not obtain compensation for the land used or because DOD effectively returned compensation received for the land to the developer, thereby excluding this compensation from the cost of electricity from the project when estimating the cost-effectiveness of the project. Specifically, for the four projects that involved long-term PPAs and used an instrument other than a lease, such as an access license or permit, in project documentation DOD did not include the valuation of the land in its cost estimate or obtain financial compensation for the use of its land. Because DOD was not obtaining financial compensation for the land, the estimated electricity costs for these projects did not reflect the value of DOD land used, helping to make the cost of electricity from the projects appear more advantageous than that from existing suppliers. For these four projects, the discussion about the land used differed in the project documentation. For example, the documentation for a project at Navy and Marine Corps sites in Hawaii clearly stated that the value of the land was not considered when estimating cost savings, whereas the project documentation for the project at Naval Air Weapons Station China Lake, California, did not discuss the value of the land in the cost savings estimate. Even for the three projects where DOD received compensation for the use of its land, information in project documentation did not reflect a consistent approach for treating the compensation—which in the case of leases is required to be at least equal to the estimated fair market value of the land—in the cost savings estimate. For two of the projects—the projects at Davis-Monthan Air Force Base, Arizona, and Fort Detrick, Maryland—DOD used the compensation it was to receive for the use of land as a credit to payments it would have made for electricity.
This approach had the effect of giving back to the developer the full compensation that had been owed to DOD for the land to reduce the amount DOD owed the developer for electricity. DOD then used the reduced amounts as the costs of electricity from the projects to compare with the costs of purchasing electricity from the existing supplier to determine whether the projects were cost-effective. This approach significantly affects the estimated financial costs of projects, helping to make projects’ electricity appear more financially cost-effective. For example, the Army is committing 67 acres valued at an annual rent of more than $400,000 over a 26-year lease to the Fort Detrick project. Including the fair market rental value of the land would raise the electricity prices of the project and, as a result, significantly reduce the estimated cost savings for the project—by about 70 percent—compared with the analysis presented in project documentation, where the value of the land was effectively excluded, according to information provided by Army officials. In contrast, for the project at Fort Drum, New York—where DOD obtained compensation equal to fair market value—DOD simply relied on the total cost of purchasing electricity as stipulated in the contract—without reducing this amount by the compensation owed to DOD for use of its land—to compare with the costs of purchasing electricity from the existing supplier, resulting in more accurate estimated cost savings. DOD does not have guidance for long-term PPAs that specifies that DOD cost estimates are to reflect all costs, including the value of land, to ensure that DOD analyses consistently treat and document the value of land in the estimated cost of electricity. The 2012 policy memorandum calls for these projects to generally utilize leases and for project documentation to include a statement of the fair market value of land in land use agreements as well as a business case analysis of the electricity purchased.
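The mechanics of including or excluding land value can be shown with simple arithmetic. The annual rent ($400,000) and 26-year term come from the Fort Detrick example above; the baseline savings figure is hypothetical, chosen only so that the resulting reduction is roughly consistent with the approximately 70 percent reduction Army officials described.

```python
# Illustrative comparison of the two accounting treatments described in
# the report. Only the rent and lease term come from the Fort Detrick
# example; the baseline savings figure is hypothetical.

annual_rent = 400_000                 # fair market rental value ($/year)
term_years = 26                       # length of the lease
land_value = annual_rent * term_years # total value of the land commitment

# Hypothetical savings estimate when rent is credited back to the
# developer, effectively excluding the land value from the analysis.
savings_excluding_land = 15_000_000
savings_including_land = savings_excluding_land - land_value

reduction = 1 - savings_including_land / savings_excluding_land
print(f"land commitment over the term:    ${land_value:,}")
print(f"savings with land value excluded: ${savings_excluding_land:,}")
print(f"savings with land value included: ${savings_including_land:,}")
print(f"reduction in estimated savings:   {reduction:.0%}")
```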
However, this policy memorandum does not specify how to present information on how the value of the land or any compensation owed to DOD should be considered when developing analyses of the cost-effectiveness of projects. In particular, this document does not specify how to reflect the value of land used for projects for which DOD was not compensated. The document also does not specify whether the determination of cost-effectiveness of projects should reflect the total costs for purchasing electricity or whether it is allowable to reduce this amount by treating compensation provided to DOD for granting the use of its land as a credit toward future electricity purchases. Some DOD officials we interviewed did not think obtaining compensation for land involving PPAs benefits the government because such payments would simply increase the price of electricity from a project and make the project look less cost-effective. For projects involving long-term PPAs—where DOD is both buying electricity from the project and providing the use of DOD land on which a developer will install, operate, maintain, and own the project—these officials believed all costs associated with the project would be recovered through payments made by DOD for the electricity produced by the project. As such, DOD officials told us that any compensation provided by the developer for use of DOD land provides no net financial benefit for DOD since it would result in higher DOD payments to the developer. According to these officials, the primary financial benefit of these projects is obtained through energy cost savings. However, not providing information about the full costs of DOD contributions, both in terms of electricity purchases and the value of the land and any compensation, can make electricity from the projects appear more cost-effective than purchasing electricity from existing suppliers.
Our 2009 cost-estimating guide states that one basic characteristic of a credible cost estimate is the recognition of all associated costs and that any excluded costs should be disclosed along with a rationale. Without clarifying guidance on how documentation should present information on all costs of a project, including the value of the land and compensation received for it and in turn how that value and compensation affect the estimated costs and benefits of purchasing electricity from projects involving PPAs, DOD officials approving such projects may lack credible information about costs for those projects. As DOD pursues larger renewable projects on its land, the amount of land used may be larger, more valuable, and committed for longer periods of time and unavailable for other purposes—making this land an increasingly significant project resource. Some of the 17 projects we reviewed advanced DOD’s energy goals and energy security objective, but project documentation was not always clear about how each project was expected to (1) contribute to the department’s production and consumption goals or (2) advance the department’s energy security objective or estimate the value of energy security provided. According to DOD project documentation and the DOD officials we interviewed, all 17 of the renewable energy projects we reviewed contributed to DOD’s renewable energy production goal, and 9 of these projects contributed to DOD’s consumption goal (see table 1). According to information provided by DOD, all of the projects claimed that the energy they produced counted toward DOD’s renewable energy production goal because DOD reporting guidance calls for crediting all renewable energy projects on DOD land as contributing to this goal.
However, according to DOD project documentation and officials, 8 of the 17 projects did not contribute to DOD’s consumption goal because the military services did not retain or replace the renewable energy credits associated with the project. Under the Energy Policy Act of 2005, DOD has to retain ownership of these credits to claim the energy produced by these projects toward its energy consumption goal or purchase credits to replace them, but the ownership of these credits is often negotiated as part of the contract to develop the project, according to military department officials. These officials told us that in some locations renewable energy credits can be valuable. In some cases, developers directly use them to meet state requirements. In other cases, developers may be able to sell them to others. In either case, developers retaining these credits can typically offer lower prices for electricity, according to the officials. The military department officials noted that, because the price of renewable energy credits can vary widely across different parts of the country, it is sometimes possible to purchase replacement credits elsewhere in the country at a lower price and allow private developers to retain the credits where a project is developed. Project documentation was not always clear or did not provide information about which of DOD’s energy goals a project was contributing to or important aspects of how that contribution toward goals was supported. For example, the documentation for the project at Camp Lejeune, North Carolina submitted to officials did not reflect that it would not contribute to the consumption goal. In interviews about this project, Navy officials told us that the project did not contribute to the consumption goal because the developer would retain renewable energy credits associated with the project. However, this information was not reflected in project documentation submitted to approving officials. 
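The trade-off the military department officials describe—ceding renewable energy credits (RECs) to the developer in exchange for a lower PPA price, then buying cheaper replacement credits elsewhere to preserve the contribution to the consumption goal—can be sketched with hypothetical numbers. None of the prices below come from any project in our sample; they only illustrate why ceding credits can be the cheaper path overall.

```python
# Hypothetical illustration of the REC trade-off described by military
# department officials. All prices are illustrative.

mwh_per_year = 50_000

ppa_price_keeping_recs = 98.0   # $/MWh if DOD retains the project's RECs
ppa_price_ceding_recs = 94.0    # $/MWh if the developer keeps them
replacement_rec_price = 2.0     # $/MWh for cheaper RECs bought elsewhere

cost_keep = ppa_price_keeping_recs * mwh_per_year
cost_cede = (ppa_price_ceding_recs + replacement_rec_price) * mwh_per_year

print(f"retain project RECs:         ${cost_keep:,.0f}/year")
print(f"cede RECs, buy replacements: ${cost_cede:,.0f}/year")
```

Under these assumptions the energy still counts toward the consumption goal in either case; only the total annual cost differs.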
In other cases, project documentation did reflect to which goals a project would contribute but did not reflect important aspects of how that contribution toward goals was supported. For example, project documentation for the renewable energy project at Davis-Monthan Air Force Base, Arizona, reflects that the project is expected to contribute to DOD’s consumption goal. Project documentation did indicate that the developer retained the renewable energy credits associated with this project. However, it did not explain that the Air Force would have to purchase renewable energy credits to claim the energy the project produces toward its consumption goal. Thus approving officials did not have access to all relevant information about the project and its contributions toward the energy goals. Standards for Internal Control in the Federal Government states that information should be recorded and communicated to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities. Without information in project documentation about the extent to which an individual project contributes toward DOD’s production and consumption goals, it is not clear that approving officials had access to all relevant information about the project before approving it. Further, federal standards for internal control state that internal control and all transactions and other significant events need to be clearly documented; such documentation should be complete and available for inspection; and that documentation is to appear in management directives, administrative policies, or operating manuals. However, DOD’s guidance does not direct that all project documentation should identify the extent to which an individual project will contribute toward the department’s energy goals. 
Without DOD clarifying in guidance that projects should specify if they are contributing to DOD’s energy goals (i.e., production and consumption), approving officials may approve the development of renewable energy projects without fully understanding the projects’ potential costs and benefits. In particular, DOD officials may unknowingly approve projects that contribute only to DOD’s production goal, thereby rendering its land unavailable for other projects that could have contributed to both its production and consumption goals. The views of DOD officials and documentation for projects in our sample reflected a wide range of perspectives on energy security, but we found that only 2 of the projects were specifically designed to provide power to the installations in the event of a disruption of the commercial grid without additional investments. DOD officials told us that they believed all 17 of the projects in our sample provided an energy security benefit because the officials defined energy security broadly to encompass the diversification of fuel sources, among other things. However, this view was not consistently reflected in the documentation for the 17 projects in our sample. Specifically, of the 17 projects, the documentation for 5 projects either did not identify energy security as a project benefit or stated that the project would not provide an energy security benefit. For example, for a project at Navy and Marine Corps sites in Hawaii, documentation stated that the project would not incorporate energy security features because to do so would be cost prohibitive. In contrast, the documentation for the other 12 projects identified a wide range of potential energy security benefits but did not use consistent definitions of energy security or consistently identify the need for additional investment. 
For 5 of the 12 projects, the documentation either did not clarify the specific energy security benefit or identified energy security benefits more broadly, such as promoting the use of nonfossil fuels. For example, documentation for the project at Naval Air Weapons Station China Lake, California, identified that the project would reduce reliance on electricity produced by natural gas, a fossil fuel; replace energy purchased from other suppliers; and be located on the installation as the energy security and independence benefits. For the remaining 7 projects, the documentation noted that the projects had the potential of providing power in the event of a commercial grid outage—a narrower definition of energy security benefits. However, we found that only 2 of these projects had the capability to provide electricity to the installation in the event of an outage of the commercial grid without additional steps. Specifically, documentation for a project at Fort Drum, New York, stated that the project would provide access to on-site electricity generation for all of the installation’s energy needs in the event of a grid outage. In addition, Marine Corps officials told us that the project at Marine Corps Logistics Base Albany, Georgia, would provide electricity to the maintenance center—the critical facility on the installation—during a grid outage. The other 5 projects would require additional steps and investments, such as the installation of batteries or other energy storage equipment and the integration of improvements to the electricity delivery and control systems on the installation before they would be able to deliver electricity during a grid outage. For example, documentation for the project at Fort Benning, Georgia, noted that additional infrastructure would be needed to enable use of the energy produced by the project during a grid outage and estimated that this infrastructure would cost an additional $30 million to $40 million. 
Similarly, documentation for the project at Camp Lejeune, North Carolina, stated that the Department of the Navy would be investing up to $48 million more to achieve the project’s energy security benefits. One project did not provide any information about the additional investment required to provide electricity during a grid outage. Documentation for a project at Marine Corps Air Station Miramar, California, identified an energy security benefit of providing power during an outage of the commercial grid but did not clearly specify what additional investments were required or provide estimates of the costs of those investments. Navy officials told us that since the approval of the project, the Navy has developed a proposal for about $18 million in upgrades that will integrate this project as well as other emergency energy sources to enable it to provide this capability, but these improvements were not included in the project documentation that we examined. Under federal standards for internal control, information should be recorded and communicated to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities. However, the military departments and services did not consistently record in project documentation the type of energy security benefit projects would provide and whether any such benefit would be immediately available or would require additional investments and, if additional investment was required, provide a detailed estimate of those investments. Without specifying this information, project documentation did not convey a full understanding of the projects’ potential costs and benefits specific to energy security to approving officials. Table 2 describes the extent to which project documentation identified energy security as a benefit and whether additional investment would be needed to achieve this benefit for the 17 projects we reviewed. 
Moreover, DOD did not consistently estimate or document the value of the energy security benefits associated with the 17 projects we reviewed. For example, the project at Fort Huachuca, Arizona, granted an easement to a private developer to use DOD land in exchange for energy security benefits but did not provide an estimated value for this benefit in documentation for the project. DOD officials we interviewed told us that they estimated the value of the energy security benefit as the developers’ full cost of the project—$46 million. However, it was not clear from the project documentation why the Army valued the energy security benefits as equal to the entire cost of the project. In contrast, for the Navy project at Camp Lejeune, North Carolina, documentation contained the Navy’s estimate of the value of the energy security benefit as the government’s projected cost of alternatively obtaining the same amount of electricity capacity with diesel generators plus the developer’s cost of providing project studies, site preparation, and connection infrastructure, which totaled about $23 million. As we mentioned earlier, under federal standards for internal control, information should be recorded and communicated to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities. However, DOD used different approaches to estimate the value of the energy security benefit of providing assured access to power during a grid outage and did not consistently record the approach used in project documentation. Without a consistent approach to estimating the value of the energy security benefits and a description of the approach used to estimate that value in project documentation, approving officials may not have reasonable assurance about the value of projects’ energy security benefits. 
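The two valuation approaches contrasted above can be sketched side by side. The $46 million figure is the Fort Huachuca estimate cited in the report; the generator inputs in the second approach are hypothetical placeholders, included only to show the structure of the Camp Lejeune-style calculation. The point is that, absent guidance, the two methods can yield very different values for a similar benefit.

```python
# Sketch of the two energy security valuation approaches described in the
# report. Generator inputs are hypothetical.

def value_as_project_cost(total_project_cost):
    """Fort Huachuca approach: value the benefit at the developer's
    full cost of building the project."""
    return total_project_cost

def value_as_backup_alternative(generator_cost_per_kw, capacity_kw,
                                ancillary_costs):
    """Camp Lejeune approach: value the benefit at the cost of obtaining
    the same capacity from backup diesel generators, plus the developer's
    cost of studies, site preparation, and connection infrastructure."""
    return generator_cost_per_kw * capacity_kw + ancillary_costs

# Fort Huachuca figure from the report; generator inputs are hypothetical.
print(f"${value_as_project_cost(46_000_000):,}")
print(f"${value_as_backup_alternative(generator_cost_per_kw=800, capacity_kw=20_000, ancillary_costs=5_000_000):,}")
```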
The primary reason for the lack of consistency and completeness in project documentation concerning projects’ contributions to DOD’s energy security objective and the value of the energy security benefits provided is that DOD has not issued guidance on how to document projects’ contributions to its energy security objective. Available guidance does not directly apply to estimating and documenting projects’ energy security benefits. While 10 U.S.C. § 2924 points toward a narrower definition of energy security, specifically the ability to provide power during a disruption of the commercial grid, DOD’s directive that calls for improving energy security, among other things, does not specify how to identify the type of energy security provided by projects or how to otherwise document these contributions. In addition, we were not able to identify any other guidance that directs the military departments and services on how information about the need for additional investment to obtain an energy security benefit should be presented in project documentation. DOD officials we interviewed were also not aware of any guidance on how to value the energy security provided by renewable energy projects. Finally, DOD officials were not able to identify specific documented guidance on valuing energy security that applies to projects relying on energy sources that are intermittent—such as solar sources that vary throughout the day and are unavailable at night. As mentioned earlier, under federal standards for internal control, agencies are to clearly document internal controls and the documentation is to appear in management directives, administrative policies, or operating manuals. In the absence of such specific guidance, DOD officials took different approaches to estimating the value. Specifically, with regard to the project at Fort Huachuca, Arizona, Army officials estimated that the energy security value was equal to the cost of the renewable energy project.
In contrast, for the project at Camp Lejeune, North Carolina, Navy officials estimated that the value was equal to the cost of obtaining the comparable amount of capacity from a standard technology for providing backup power supplies, in this case backup diesel generators—a technology that can produce specified amounts of energy whenever called upon. It is inherently difficult to estimate the value of energy security. However, it is not clear that either of the two approaches they used—namely, equal to the total cost of the project or equal to the cost of obtaining diesel generators of an equal capacity to produce electricity—is valid for estimating the value of energy security provided by the renewable energy projects in our sample. Officials we interviewed from all three military departments stated that it was difficult to develop such estimates without guidance. For example, Marine Corps and Navy officials discussing the project at Air Ground Combat Center Twentynine Palms, California, told us that they were wary of estimating the value of energy security without specific guidance from DOD on how to estimate such value for renewable energy projects because they were concerned that their valuation would be critiqued. Further, approving Army officials told us that they had an option in the request for proposals for the project at Fort Detrick, Maryland, to consider energy security benefits but did not know how to evaluate them, and thus they did not consider them in the proposals they reviewed. Without guidance for estimating and documenting the contributions of renewable energy projects to DOD’s energy security objective, approving officials may continue to see inconsistent and incomplete project documentation and may approve the development of renewable energy projects without fully understanding the projects’ potential costs and benefits specific to energy security. 
By emphasizing larger projects and working with private developers, DOD is making strides toward various federal renewable energy goals and its own energy security objective. As DOD has worked more frequently with private developers using alternative financing mechanisms to further its renewable energy goals and energy security objective, its guidance for analyzing the financial costs and benefits of these projects appears to have lagged, particularly for projects involving long-term PPAs for which DOD grants the use of its land. DOD has guidance for presenting land values in project documentation, but the guidance does not discuss all types of alternative funding mechanisms currently in use. As a result, the project documentation DOD prepared for approving officials differed in how it presented information about the value of the land used and the compensation DOD received for the use of its land. Without modifying its guidance for presenting land values in project documentation to apply to the range of alternative financing mechanisms it has used, particularly long-term PPAs, DOD may not have reasonable assurance that project documentation for approving officials is consistent or complete. If DOD clarifies the guidance to direct all project documentation for alternatively financed projects involving land use agreements to include the value of the land, the compensation DOD would receive for it, and how the value of the land compared with the value of the compensation, DOD approving officials would have more information for understanding the financial costs and benefits of a project. Further, for projects involving long-term PPAs, DOD’s guidance provides few specific details for conducting its business case analyses of these projects’ costs and benefits, in particular, the key assumptions that DOD departments, services, and installations use for escalation rates. 
Differences in the sources DOD used as the basis for assumptions about escalation rates raise questions about the credibility of the estimated costs of projects provided to approving officials. Developing guidance that calls for drawing upon consistent sources for assumptions for escalation rates would provide DOD officials charged with approving renewable energy projects involving long-term PPAs more assurance that they had credible cost estimates on which to base these decisions. In addition, although DOD’s guidance for business case analyses states that a well-documented sensitivity analysis allows approving officials to understand how much confidence they should have in an analysis’s conclusions, DOD’s guidance for renewable energy projects does not specify how to describe sensitivity analyses in project documentation. Without clarifying its guidance on how to describe sensitivity analyses in project documentation, DOD may not have reasonable assurance that it will consistently document such analyses for projects involving long-term PPAs to show whether changes in key assumptions would affect the conclusions that projects were cost-effective. Moreover, DOD does not have guidance for long-term PPAs that specifies that cost estimates reflect all costs, including the value of land that DOD forgoes the use of for renewable energy projects, to ensure that DOD analyses consistently treat and document the value of land in the estimated cost of electricity. Without DOD clarifying its guidance on how documentation should present information on all costs of a project, including the value of the land and compensation received for it and in turn how that value and compensation affect the estimated costs and benefits of purchasing electricity from projects involving PPAs, DOD officials approving such projects may lack credible information about costs for those projects.
Finally, limited guidance regarding how to prepare documentation for renewable energy projects has resulted in project documentation that is not always clear as to which projects are contributing toward DOD energy goals and its energy security objective. Without information in project documentation about the extent to which an individual project contributes toward DOD’s production and consumption goals, approving officials may not have access to all relevant information about the project when making decisions before approving it. Regarding energy security, DOD’s project documentation did not always clearly define the energy security benefits associated with projects and whether additional investment would be required to obtain these benefits. If project documentation does not specify the type of energy security benefit projects would provide and whether any such benefit would be immediately available or would require additional investments and, if additional investment was required, provide a detailed estimate of those investments, approving officials may not fully understand the projects’ potential costs and benefits specific to energy security. In addition, lack of guidance on how to value energy security provided by renewable energy projects such as those we reviewed has resulted in inconsistent approaches to estimating the value of the energy security benefits associated with each project. Without a consistent approach to estimating the value of the energy security benefits and a description of the approach used in project documentation, approving officials cannot have reasonable assurance about the value of projects’ energy security benefits. 
We are recommending that the Secretary of Defense direct the Assistant Secretary of Defense for Energy, Installations and Environment and the Secretaries of the Army, Navy, and Air Force to take the following eight actions: To improve DOD’s analyses of the financial costs and benefits of renewable energy projects, modify guidance for presenting land values in project documentation to apply to the range of alternative financing mechanisms DOD has used and clarify the guidance to direct all project documentation for alternatively financed projects involving land use agreements to include the value of the land, the compensation DOD would receive for it, and how the value of the land compared with the value of the compensation. To improve DOD’s analyses of the financial costs and benefits of renewable energy projects involving long-term PPAs on its land, revise guidance to develop consistent sources for assumptions for escalation; clarify how to describe sensitivity analyses in project documentation; and clarify how project documentation should present information on all costs of a project, including the value of the land and compensation received for it and in turn how that value and compensation would affect the estimated costs and benefits of purchasing electricity from the project (e.g., whether compensation could be used to reduce electricity costs for the project when estimating cost-effectiveness). 
To improve the information available to approving officials on projects’ contributions to DOD’s renewable energy goals and energy security objective and to help ensure the consistency and completeness of project documentation, develop guidance to clarify that projects should specify their contribution to DOD’s energy production and consumption goals; clarify the type of energy security benefit that projects will provide and state whether any such benefit is immediately available or would require additional investments and, for projects that would require additional investment, provide a detailed estimate of those investments; and clarify that a consistent approach is to be taken to estimate the value of the energy security benefit of providing assured access to power during a grid outage and that a description of this approach is provided in project documentation. We provided a draft of this report to DOD for review and comment. In written comments, reprinted in appendix III, DOD concurred with all of our recommendations. In addition, DOD provided technical comments, which we incorporated as appropriate. We are providing copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Secretary of Energy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Brian J. Lepore at (202) 512-4523 or leporeb@gao.gov or Frank Rusco at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IV. 
The objectives of our review were to examine (1) the Department of Defense’s (DOD) approach for developing renewable energy projects with a generating capacity greater than 1 megawatt, (2) DOD’s approach for analyzing the financial costs and benefits of selected renewable energy projects contracted for or funded from 2010 through 2015, and (3) the extent to which selected projects addressed DOD’s renewable energy goals and energy security objective. To address these questions, we examined 17 renewable energy projects built with a generating capacity greater than 1 megawatt on military installations in the United States with funding or contracts awarded from 2010 through 2015. We identified possible projects for examination from lists of approved but not necessarily operational projects and lists of operational projects the military departments provided. Including approved projects that were not necessarily operational enabled us to review more recent projects that are more revealing of DOD’s current efforts and emphasis on larger, alternatively financed projects. We selected projects that reflected a range of military departments and services, funding mechanisms, and renewable energy technologies. Because this was a nonprobability sample, our findings are not generalizable to other DOD renewable energy projects but provide illustrative examples of how DOD develops projects, analyzes costs and benefits, and addresses its goals and objective with such projects. For a complete listing of the projects in our sample, see appendix II.
To examine DOD’s approach for developing renewable energy projects with a generating capacity greater than 1 megawatt, we reviewed applicable laws, DOD guidance for developing renewable energy projects, and DOD’s annual reporting on energy management, and interviewed officials with the Office of the Secretary of Defense and the military departments and services who were knowledgeable about DOD’s development of such projects, including our sample of 17 projects. To examine how DOD analyzed the financial costs and benefits of selected renewable energy projects, we reviewed DOD’s guidance as well as Federal Energy Management Program and National Institute of Standards and Technology guidance for assessing cost-effectiveness of projects and examined whether DOD followed this guidance. We focused on the approaches DOD used to calculate the costs of various sources of energy and estimate cost savings derived from the project electricity, the source of assumptions for the analyses, any compensation from developers for the land used for the project, assessments of uncertainties with its long-term estimates, and the information conveyed in project documentation to approving officials about any government payments or compensation stipulated in project agreements. We reviewed the relevant project documentation for the selected projects, including business case analyses of cost savings and, for alternatively financed projects, the project contracts with developers and any associated agreements to allow developers temporary use of land for the project. We also interviewed key officials with the Office of the Secretary of Defense; the military departments and services; individual installations with knowledge of specific projects; and the Department of Energy, which provides federal agencies information and support when examining energy projects and related matters.
Project documentation DOD provided us was not always clear about all aspects of the estimation process or the source of assumptions; moreover, DOD could not provide documentation for the business case analysis done for 1 of the 17 projects we examined, and we do not report on the estimation process for that project. Based on our interviews to confirm DOD’s estimation process described in project documentation, we determined that the DOD information was reliable for the purposes of examining how DOD determined the costs and benefits of these projects. To examine the extent to which DOD addressed its renewable energy goals and energy security objective through the projects in our sample, we reviewed DOD guidance on renewable energy and energy security. We also reviewed project documentation prepared for project approval, as well as contracts and land use agreements to determine the extent to which renewable energy goal contributions and energy security benefits were identified in project documentation. In addition, to ensure that we reliably identified and understood the contributions to the renewable energy goals and the energy security objective for the projects, we interviewed DOD officials about each project. Based on our comparison of project documentation and interview responses, we determined that the DOD information was reliable for the purposes of examining the extent to which DOD addressed its renewable energy goals and energy security objective through selected renewable energy projects. We conducted this performance audit from March 2015 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: DOD Renewable Energy Projects in GAO Sample

Land use agreements refer to the types of ways that DOD provided the developer temporary use of land for a renewable energy project. These agreements include the following:

Leases refer to agreements under which the secretary of a military department may lease land in exchange for the payment of a cash or in-kind consideration in an amount that is not less than the fair market value of the lease interest, as determined by the secretary.

Easements are agreements under which the secretary of a military department may provide an easement for rights-of-way, upon terms that the secretary considers advisable, which might include a cash or in-kind consideration.

Access licenses or permits refer to agreement provisions through which DOD provides contractors access to and use of a site for the purposes of the contract, without compensation.

For some projects, where DOD owns the generating system on its own land, providing developers land to use for the project is not applicable. We are identifying the agreement for the site of the generating system and not necessarily for other lands such as for transmission lines, unless otherwise noted.

In addition to the contacts named above, Jon Ludwigson (Assistant Director), Laura Durland (Assistant Director), Tracy Barnes, Emily Biskup, Lorraine Ettaro, Emily Gerken, Timothy Guinane, Terry Hanford, Alberto Leff, Alison O’Neill, Jodie Sandel, Kiki Theodoropoulos, and Michael Willems made key contributions to this report.

Defense Infrastructure: Energy Conservation Investment Program Needs Improved Reporting, Measurement, and Guidance. GAO-16-162. Washington, D.C.: January 29, 2016.

Defense Infrastructure: Improvement Needed in Energy Reporting and Security Funding at Installations with Limited Connectivity. GAO-16-164. Washington, D.C.: January 27, 2016.

Defense Infrastructure: DOD Efforts Regarding Net Zero Goals. GAO-16-153R. Washington, D.C.: January 12, 2016.
Defense Infrastructure: Improvements in DOD Reporting and Cybersecurity Implementation Needed to Enhance Utility Resilience Planning. GAO-15-749. Washington, D.C.: July 23, 2015.

Energy Savings Performance Contracts: Additional Actions Needed to Improve Federal Oversight. GAO-15-432. Washington, D.C.: June 17, 2015.

Electricity Generation Projects: Additional Data Could Improve Understanding of the Effectiveness of Tax Expenditures. GAO-15-302. Washington, D.C.: April 28, 2015.

High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015.

Climate Change Adaptation: DOD Can Improve Infrastructure Planning and Processes to Better Account for Potential Impacts. GAO-14-446. Washington, D.C.: May 30, 2014.

Clear Air Force Station: Air Force Reviewed Costs and Benefits of Several Options before Deciding to Close the Power Plant. GAO-14-550. Washington, D.C.: May 12, 2014.

Climate Change: Energy Infrastructure Risks and Adaptation Efforts. GAO-14-74. Washington, D.C.: January 31, 2014.

Defense Infrastructure: Improved Guidance Needed for Estimating Alternatively Financed Project Liabilities. GAO-13-337. Washington, D.C.: April 18, 2013.

Renewable Energy Project Financing: Improved Guidance and Information Sharing Needed for DOD Project-Level Officials. GAO-12-401. Washington, D.C.: April 4, 2012.

Renewable Energy: Federal Agencies Implement Hundreds of Initiatives. GAO-12-260. Washington, D.C.: February 27, 2012.

Defense Infrastructure: DOD Did Not Fully Address the Supplemental Reporting Requirements in Its Energy Management Report. GAO-12-336R. Washington, D.C.: January 31, 2012.

Defense Infrastructure: The Enhanced Use Lease Program Requires Management Attention. GAO-11-574. Washington, D.C.: June 30, 2011.

Electricity Grid Modernization: Progress Being Made on Cybersecurity Guidelines, but Key Challenges Remain to be Addressed. GAO-11-117. Washington, D.C.: January 12, 2011.

Defense Infrastructure: Department of Defense’s Energy Supplemental Report.
GAO-10-988R. Washington, D.C.: September 29, 2010.

Defense Infrastructure: Department of Defense Renewable Energy Initiatives. GAO-10-681R. Washington, D.C.: April 26, 2010.

Defense Infrastructure: DOD Needs to Take Actions to Address Challenges in Meeting Federal Renewable Energy Goals. GAO-10-104. Washington, D.C.: December 18, 2009.

Defense Critical Infrastructure: Actions Needed to Improve the Identification and Management of Electrical Power Risks and Vulnerabilities to DOD Critical Assets. GAO-10-147. Washington, D.C.: October 23, 2009.

Energy Savings: Performance Contracts Offer Benefits, but Vigilance Is Needed to Protect Government Interests. GAO-05-340. Washington, D.C.: June 22, 2005.

Capital Financing: Partnerships and Energy Savings Performance Contracts Raise Budgeting and Monitoring Concerns. GAO-05-55. Washington, D.C.: December 16, 2004.
By law and executive order, DOD is to pursue goals for the production and consumption of renewable energy. Also, DOD policy calls for investing in cost-effective renewable energy and improving energy security—addressing risks such as disruption of electricity grids serving military installations. The Joint Explanatory Statement for the National Defense Authorization Act for Fiscal Year 2015 included a provision for GAO to examine how DOD determines the costs and benefits of a sample of renewable energy projects. This report examines (1) DOD's approach for developing renewable energy projects with a generating capacity greater than 1 megawatt, (2) DOD's approach for analyzing the financial costs and benefits of selected projects, and (3) the extent to which these projects addressed DOD's renewable energy goals and energy security objective. GAO examined a nongeneralizable sample of 17 projects that reflect a mix of military departments and services, funding mechanisms, and technologies. GAO also examined legal authorities, project documentation, and DOD guidance, and interviewed DOD officials. The Department of Defense (DOD) has emphasized working with private developers using a variety of alternative financing mechanisms—that is, agreements with private developers to pay back the costs of the projects over time—to develop renewable energy projects greater than 1 megawatt. According to DOD officials, DOD works with private developers because doing so gives DOD several advantages. For example, private developers have access to tax incentives that can significantly lower the overall costs of developing projects compared to what those costs would be if DOD developed the projects on its own. DOD used various approaches to analyze the financial costs and benefits of the 17 renewable energy projects GAO examined, but project documentation was not always clear or complete. 
In particular, project documentation did not always clearly identify the value of land used and compare that to any compensation DOD received. Specifically, for 8 projects, DOD received little or no financial compensation for the use of its land, but the documentation did not clearly compare the value for granting use of DOD land to the value of what DOD received for it. As a result, DOD contributed potentially valuable land—in some cases, over 100 acres—for the development of a project without including this as a cost in project documentation. GAO's 2009 cost-estimating guide states that one basic characteristic of a credible cost estimate is the recognition of excluded costs, so any excluded costs should be disclosed and a rationale provided. However, DOD guidance does not specify that project documentation should include a comparison of the value of land and any compensation received. By clarifying its guidance to call for project documentation to include a comparison of land values and any compensation it would receive, DOD would have greater assurance that its officials have credible information about projects' financial costs and benefits before approving them. Some of the 17 projects GAO reviewed advanced DOD's renewable energy goals and energy security objective (e.g., for access to reliable supplies of energy during an outage of the commercial grid), but project documentation was not always clear about how projects did so. For example, officials told GAO they believe that all the projects contributed to DOD's energy security objective, but this view was not reflected in the documentation for the 17 projects. GAO found that only 2 projects would immediately be able to provide electricity to an installation in the event of a grid outage. 
Five other projects would require additional investment, such as the installation of batteries or other energy storage, before they would be able to deliver electricity during an outage, and project documentation did not always reflect this information. Under federal standards for internal control, agencies are to record and communicate information to management and others who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities. Without clarifying its guidance to call for project documentation to include information about projects' contributions to DOD's energy security objective and any additional investment needed to do so, DOD officials may not have a full understanding of all relevant information when approving renewable energy projects. GAO is making eight recommendations, including that DOD should clarify guidance to call for project documentation to include (1) a comparison of the value of the land used and the compensation DOD is to receive for it and (2) information on projects' contributions toward DOD's energy security objective. DOD fully concurred with GAO's recommendations.
Influenza is associated with an average of more than 200,000 hospitalizations and 36,000 deaths each year in the United States. Most people who get the flu recover completely in 1 to 2 weeks, but some develop serious and life-threatening medical complications, such as pneumonia. People who are aged 65 and older, people of any age with chronic medical conditions, children younger than 2 years, and pregnant women are more likely to get severe complications from influenza than other people. For the 2004-2005 flu season, CDC initially recommended in May 2004 that about 185 million Americans—about 85 million in high-risk groups and over 100 million in other target groups—receive the vaccine, which is the primary method for preventing influenza. Groups at high risk for flu-related complications included people aged 65 years or older; residents of nursing homes and other chronic-care facilities; people with chronic conditions such as asthma and diabetes; children and adolescents aged 6 months to 18 years who are receiving long-term aspirin therapy; pregnant women; and children aged 6 to 23 months. Other target groups identified in the May 2004 recommendations included persons aged 50 to 64 years and people who can transmit influenza to those at high risk, such as health care workers, employees of nursing homes, chronic-care facilities, and assisted living facilities, and household contacts of and those who provide home care to high-risk individuals. Not everyone in these high-risk and target groups, however, receives a vaccination each year. For example, based on the 2002 National Health Interview Survey and other sources, CDC estimates that only about 44 percent of individuals at high risk and about 20 percent of individuals in the other target groups were vaccinated. It takes about 2 weeks after vaccination for antibodies to develop in the body and provide protection against influenza virus infection.
CDC recommends October through November as the best time to get vaccinated because the flu season often starts in late November to December and peaks between late December and early March. However, if influenza activity peaks late, vaccination in December or later can still be beneficial. Producing sufficient quantities of influenza vaccine is a complex process that involves growing viruses in millions of fertilized chicken eggs. This process, which requires several steps, generally takes at least 6 to 8 months from January through August each year, so vaccine manufacturers must predict demand and decide on the number of doses to produce well before the onset of the flu season. Each year’s vaccine is made up of three different strains of influenza viruses, and, typically, each year one or two of the strains is changed to better protect against the strains that are likely to be circulating during the coming flu season. The Food and Drug Administration (FDA) and its advisory committee decide which strains to include based on CDC surveillance data, and FDA also licenses and regulates the manufacturers that produce the vaccine for distribution in the United States. In a typical year, manufacturers make flu vaccine available before the optimal fall season for administering flu vaccine. For the 2003-2004 flu season, two manufacturers—one with production facilities in the United States and one with production facilities in the United Kingdom—produced about 95 percent of the vaccine for the United States. A third U.S. manufacturer produces a flu vaccine that is given by nasal spray and is only approved for healthy persons aged 5 through 49 years. This nasal spray vaccine is not recommended for individuals at high risk for flu-related complications. According to CDC, this manufacturer produced about 4 million doses of the nasal spray vaccine for the 2003-2004 season. Flu vaccine production and distribution are largely private-sector responsibilities.
Like other pharmaceutical products, flu vaccine is sold to thousands of purchasers by manufacturers, numerous medical supply distributors, and other resellers such as pharmacies. These purchasers provide flu vaccinations at physicians’ offices, public health clinics, nursing homes, and at nonmedical locations such as workplaces and various retail outlets. Millions of individuals receive flu vaccinations through mass immunization campaigns in these nonmedical settings, where organizations such as visiting nurse agencies under contract administer the vaccine. In a typical year, most influenza vaccine distribution and administration are accomplished within the private sector, with relatively small amounts of vaccine purchased and distributed by CDC or by state and local health departments. For the 2004-2005 season, CDC had estimated that about 100 million doses of flu vaccine would be available for distribution through this network. On August 26, 2004, one major manufacturer announced that a small quantity of its flu vaccine did not meet sterility specifications and that distribution of its vaccine would be delayed until after further tests were completed. On October 5, 2004, this manufacturer announced that the regulatory body in the United Kingdom, the Medicines and Healthcare Products Regulatory Agency (MHRA), had temporarily suspended the company’s license to manufacture flu vaccine in its facility in Liverpool, England. The manufacturer stated that this action prevented the company from releasing any vaccine for the 2004-2005 flu season—effectively reducing the anticipated U.S. supply by nearly half. This sudden disruption of the supply set off the chain of events the nation has experienced in the past 6 weeks, and has focused national attention on the flu vaccine supply and distribution system. Ensuring an adequate and timely supply of vaccine is a difficult task. It has become even more difficult because there are few manufacturers.
As we are witnessing this year, problems at one or more manufacturers can significantly upset the traditional fall delivery of influenza vaccine. These problems, in turn, can create variability in who has ready access to the vaccine. Matching flu vaccine supply and demand is a challenge because the available supply and demand for vaccine can vary from month to month and year to year, as the following examples illustrate. In 2000-2001, when a substantial proportion of flu vaccine was distributed much later than usual due to manufacturing difficulties, temporary shortages during the prime period for vaccinations were followed by decreased demand as additional vaccine became available later in the year. Despite efforts by CDC and others to encourage people to seek flu vaccinations later in the season, providers still reported a drop in demand in December. The light flu season in 2000-2001, which had relatively low influenza mortality, probably also contributed to the lack of interest. As a result of the waning demand that year, manufacturers and distributors reported having more vaccine than they could sell. In addition, some physicians’ offices, employee health clinics, and other organizations that administered flu vaccinations reported having unused doses in December and later. For the 2002-2003 flu season, according to CDC officials, vaccine manufacturers produced about 95 million doses of vaccine, of which about 83 million doses were used and about 12 million doses went unused. For the 2003-2004 flu season, shortages of vaccine were attributed to an earlier than expected and more severe flu season and to higher than normal demand, likely resulting from media coverage of pediatric deaths associated with influenza. According to CDC officials, this increased demand occurred in a year in which manufacturers had produced about the same number of doses used in the previous season—about 87 million doses total—and that supply was not adequate to meet the demand. 
If production problems delay or disrupt the availability of vaccine in a given year, the timing for an individual provider to obtain flu vaccine may depend on which manufacturer’s vaccine it ordered. This happened in the 2000-2001 season, and there are reports of similar problems this season after one manufacturer that had previously stated it expected to supply 46 million to 48 million doses announced that it would not deliver any flu vaccine to the U.S. market. Those who ordered from this manufacturer did not receive their expected vaccine—a different situation than those who ordered from the other manufacturer, which reported sending its vaccine on schedule beginning in August and September. As a result, one provider could have held vaccination clinics in early October that would be available to anyone who wanted a flu vaccination, while another provider may not yet have had any vaccine for its high-risk patients. Shortages of flu vaccine can result in temporary spikes in the price of vaccine. When vaccine supply is limited relative to public demand for flu vaccinations, distributors and others who have supplies of the vaccine have the ability—and the economic incentive—to sell their supplies to the highest bidders rather than filling the lower priced orders they had already received. When there was a delay causing a temporary shortage of vaccine in 2000, those who purchased vaccine that fall—because their earlier orders had been canceled, reduced, or delayed, or because they simply ordered later—found they paid much higher prices. For example, one physician’s practice ordered flu vaccine from a supplier in April 2000 at $2.87 per dose. When none of that vaccine had arrived by November 1, the practice placed three smaller orders in November with a different supplier at the escalating prices of $8.80, $10.80, and $12.80 per dose. On December 1, the practice ordered more vaccine from a third supplier at $10.80 per dose. 
The four more expensive orders were delivered immediately, before any vaccine had been received from the original April order. With the severely reduced vaccine supply this year, opportunities exist for vendors who have vaccine to significantly inflate the price of available supplies. CDC is collecting information on allegations of such price increases and is providing information to respective state attorneys general. To date, CDC officials report forwarding over 100 reports of alleged price gouging received from 33 states. Following the 2000-2001 flu season, HHS undertook several initiatives to address supply and demand of flu vaccine and to protect high-risk individuals from flu-related complications when vaccine is in short supply. Actions taken include the following:

Extending the optimal period for getting a flu vaccination until the end of November, to encourage more people to get vaccinations later in the season.

Expanding the target population to include children aged 6 through 23 months.

Including the flu vaccine in the Vaccines for Children (VFC) stockpile to help improve flu vaccine supply. For the 2004-2005 flu season, CDC had originally contracted for a stockpile of approximately 4.5 million doses of flu vaccine through its VFC authority—of which 2 million doses were ordered from the manufacturer whose license was temporarily suspended and therefore will not be available. CDC officials said the remaining 2.5 million doses intended for the stockpile will be apportioned as they become available.

Taking steps to identify additional sources of influenza vaccine from foreign manufacturers that, once approved for safe use, could help increase the flu vaccine supply in the United States.

Our work has also found continuing obstacles to delivering flu vaccine to high-risk individuals in a time of short supply.
During the fall 2000 vaccine shortage, for example, targeting limited doses to high-risk individuals was problematic because all types of providers served at least some high-risk individuals. Some physicians and public health officials were upset when their local grocery stores were offering flu vaccinations to everyone when they, the health care providers, were unable to obtain vaccine for their high-risk patients. Many physicians reported that they felt they did not receive priority for vaccine delivery, even though about two-thirds of seniors—one of the largest high-risk groups—generally get their flu vaccinations in medical offices. For the 2004-2005 flu season, despite early indications that one manufacturer was having production difficulties, CDC published guidance in September 2004 stating that it did not envision any need for tiered vaccination recommendations or prioritization of vaccine for those at higher risk of flu-related complications. Following the suspension of one manufacturer’s license and the announcement that it would not supply any vaccine to the U.S. market this season, CDC revised its recommendations and took steps to mitigate the vaccine shortage. Although HHS has limited authority to control flu vaccine distribution, upon learning that nearly half of the nation’s expected flu vaccine supply was in jeopardy, it took steps to direct the available vaccine so that providers could obtain doses for their high-risk patients. In particular, CDC officials have worked with the remaining major manufacturer, as well as state and local health departments, to assess needs, prioritize customers, and make plans to distribute the remaining vaccine. CDC also convened its Advisory Committee on Immunization Practices (ACIP) to reassess and revise the recommended vaccination priorities for the flu season.
The revised priority groups for the 2004-2005 flu vaccine include the estimated 85 million people in high-risk groups, but they do not include many of the other target groups. In addition to high-risk individuals, the revised priority groups include an estimated 7 million health care workers and an estimated 6 million household contacts of children aged 6 months or younger, for a total population of about 98 million in the revised priority groups. While CDC can recommend and encourage providers to immunize high-risk patients first, it does not have direct control over the distribution of vaccine (other than the generally small amount that is distributed through public health departments); thus, CDC cannot ensure that its priorities will be implemented. As these actions play out, more time is needed to gauge the success of CDC’s efforts to mitigate the current flu vaccine shortage. Despite the efforts by CDC and others, many high-risk individuals appear to be experiencing problems getting a flu vaccination. Media across the country are reporting that some seniors are waiting hours for flu vaccinations and others are so frustrated they are traveling to Canada or Mexico to get vaccinated. There are other media reports of anxious seniors unable to get vaccinated in a timely fashion. How many high-risk individuals ultimately get vaccinated against influenza this season remains to be seen. We are beginning new work to analyze this year’s vaccine shortage and the federal response. Ensuring an adequate and timely supply of vaccine to protect high-risk individuals from influenza and flu-related complications remains a challenge. The limited number of manufacturers and the manufacturing problems experienced in recent years illustrate the fragility of vaccine production. The abrupt loss of nearly half of the nation’s vaccine supply has further highlighted the potential inequities that can result from the current vaccine distribution system.
Under this system, some providers can be left with little immediate recourse for meeting the needs of those most at risk. CDC is responding by working with the remaining major flu vaccine manufacturer and state and local public health agencies to better target high-risk populations. Nonetheless, with this flu season, there are reports of long lines, people crossing international boundaries to obtain their flu vaccinations, and anxious seniors unable to obtain a vaccination on a timely basis. Whatever the outcome of this flu season, ensuring that vaccine can be made available as expeditiously as possible to those who need it most in times of shortage remains a challenge. We shared the facts contained in this statement with CDC officials. They informed us they had no comments. This concludes my statement. I would be happy to answer any questions the Chairmen or other Members of the Subcommittees may have. For further information about this testimony, please contact Janet Heinrich at (202) 512-7119. Jennifer Major, Terry Saiki, Stan Stenersen, and Kim Yamane also made key contributions to this statement.

Infectious Disease Preparedness: Federal Challenges in Responding to Influenza Outbreaks. GAO-04-1100T. Washington, D.C.: September 28, 2004.

SARS Outbreak: Improvements to Public Health Capacity Are Needed for Responding to Bioterrorism and Emerging Infectious Diseases. GAO-03-769T. Washington, D.C.: May 7, 2003.

Infectious Disease Outbreaks: Bioterrorism Preparedness Efforts Have Improved Public Health Response Capacity, but Gaps Remain. GAO-03-654T. Washington, D.C.: April 9, 2003.

Flu Vaccine: Steps Are Needed to Better Prepare for Possible Future Shortages. GAO-01-786T. Washington, D.C.: May 30, 2001.

Flu Vaccine: Supply Problems Heighten Need to Ensure Access for High-Risk People. GAO-01-624. Washington, D.C.: May 15, 2001.

This is a work of the U.S. government and is not subject to copyright protection in the United States.
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Influenza is associated with an average of 36,000 deaths and more than 200,000 hospitalizations each year in the United States. Persons who are aged 65 and older, people with chronic medical conditions, children younger than 2 years, and pregnant women are more likely to get severe complications from influenza than other people. The best way to prevent influenza is to be vaccinated each fall. In early October 2004, one major manufacturer of flu vaccine for the United States announced that its facility's license had been temporarily suspended and it would not be releasing any vaccine for the 2004-2005 flu season. Because this manufacturer was expected to produce roughly one-half of the U.S. flu vaccine supply, the shortage resulting from its announcement has led to concern about the availability of flu vaccine, especially for those at high risk for flu-related complications. GAO was asked to discuss issues related to the supply, demand, and distribution of vaccine for this flu season in the context of the current shortage. GAO based this testimony on products it has issued since May 2001, as well as work it conducted to update key information. The current vaccine shortage demonstrates the challenges to ensuring an adequate and timely flu vaccine supply. Only three manufacturers produce flu vaccine for the U.S. market, and the potential for future manufacturing problems such as those experienced both this year and, to a lesser degree, in previous years is still present. When shortages occur, their effect can be exacerbated by the existing distribution system. Under this system, health providers and vaccine distributors generally order a particular manufacturer's vaccine and have limited recourse, even for meeting the needs of high-risk persons, if that manufacturer's production is adversely affected. By contrast, providers who purchased vaccine from a different manufacturer might receive more of their order and be able to vaccinate their high-risk patients.
The current situation also reflects another concern: the nation lacks a systematic approach for ensuring that seniors and others at high risk for flu-related complications receive flu vaccine when it is in short supply. Once this year's shortage became apparent, the Centers for Disease Control and Prevention (CDC) took a number of steps to influence distribution patterns to help providers get some vaccine for their high-risk patients. These steps are still playing themselves out, and it will take more time to assess how well they will work. Problems have not been totally averted, however, as there have been media reports of long lines to obtain limited doses of vaccine and of high-risk individuals unable to find a flu vaccination in a timely fashion.
GAO’s work helps the Congress hold agencies accountable for delivering positive results in an economical, efficient, effective, ethical, and equitable manner. I would like to highlight just a few of our recent efforts to assist the Congress in identifying and addressing areas for continued or additional oversight: Identifying pressing oversight issues for the Congress: On November 17, 2006, I provided three sets of recommendations for consideration as part of the agenda of the 110th Congress. The first set of recommendations suggested targets for near-term oversight, such as the need to reduce the tax gap—the difference between the amounts taxpayers pay voluntarily and on time and what they should pay under the law. The second proposed policies and programs in need of fundamental reform and reengineering, such as reforming Medicare and Medicaid to improve their integrity and sustainability. The third listed governance issues that need to be addressed, such as the need for budget controls and legislative process revisions in light of current deficits and our long-range fiscal imbalance. The proposals, which synthesized GAO’s institutional knowledge and special expertise, point to both the breadth and the depth of the issues facing the Congress. Appendix I provides a complete list of the 36 recommendations in our letter. Identifying high-risk areas: We provide updates to our list of government programs and operations that we identify as “high-risk” at the start of each new Congress to help in setting congressional oversight agendas. These reports, which have been produced since the early 1990s, have brought a much-needed oversight focus to a targeted list of major challenges that are impeding effective government and costing the government billions of dollars each year. They help the Congress and the executive branch carry out their responsibilities while improving the government’s performance and enhancing its accountability. 
In recent years, we have also identified several high-risk areas to focus attention on the need for broad-based transformations to address major challenges of economy, efficiency, effectiveness, relevance, and relative priority. In fact, our focus on high-risk challenges contributed to the Congress enacting a series of governmentwide reforms to strengthen financial management; improve information technology practices; instill a more effective, credible, and results-oriented government; and address critical human capital challenges. Further, our high-risk program has helped sustain attention from members of the Congress who are responsible for oversight and from executive branch officials who are accountable for performance. This Committee has a particular interest in a number of areas on our latest high-risk list, including, for example, implementation and transformation of the Department of Homeland Security (DHS), protecting the federal government’s information systems, establishing appropriate and effective information-sharing mechanisms to improve homeland security, and Department of Defense (DOD) supply chain management. In part because of the oversight and legislative efforts of the Congress, of the 47 areas that have appeared on our high-risk list since 1990, 18 improved enough to be removed from the list. Such leadership can be invaluable in identifying and putting in place the kinds of change needed to address these often long-standing problems. In our recent January 2007 High-Risk Series update, we added three new high-risk areas: (1) financing the nation’s transportation system, (2) ensuring the effective protection of technologies critical to U.S. national security interests, and (3) transforming federal oversight of food safety. But we also reported that progress had been made in all existing high-risk areas, and that progress was sufficient in two areas for us to remove the high-risk designation: (1) U.S. 
Postal Service transformation efforts and long-term outlook, and (2) HUD single-family mortgage insurance and rental housing assistance programs. This Committee has provided valuable leadership to efforts to gain needed improvements in high-risk areas. In this regard, as one example, I want to acknowledge the key commitment and contribution of this Committee in passing postal reform legislation last December. This action was one of the primary reasons we felt that we could take the Postal Service’s transformation and long-term outlook off our high-risk list in January. As I have been testifying on the need for comprehensive postal reform since 2001, I believe that the recently passed legislation will provide opportunities to build a sound foundation for modernizing the Postal Service, reassessing the service standards required by the American people, and ensuring continued affordable universal postal services for the future. Our work related to areas we have designated as high-risk has also had a financial impact. In fiscal year 2006 alone, actions by both the Congress and the executive branch in response to GAO’s recommendations resulted in approximately $22 billion in financial benefits. Appendix II lists the current high-risk areas. Identifying systemic federal financial management challenges: As I testified yesterday, for the 10th consecutive year, GAO was unable to express an opinion on the federal government’s financial statements due to the government’s inability to demonstrate the reliability of significant portions of the financial statements. 
Federal agencies will need to overcome three major impediments to our ability to render an opinion on the federal government’s financial statements: (1) resolving serious weaknesses in DOD’s business operations, including pervasive, complex, long-standing, and deeply rooted financial management weaknesses; (2) adequately accounting for and reconciling intragovernmental activity and balances; and (3) developing adequate systems, controls, and procedures to ensure that the consolidated financial statements are consistent with the underlying audited agency financial statements, balanced, and in conformity with generally accepted accounting principles. In testimony earlier this month, I outlined the principal challenges and ideas on how to move toward fully realizing world-class financial management in the federal government. Additionally, I have suggested to the Congress that it may be time to consider further revisions to the current federal financial reporting model. Such an effort could address the kind of information that is most relevant and useful for a sovereign nation; the role of the balance sheet in federal government reporting; the reporting of items that are unique to the federal government, such as social insurance commitments and the power to tax; and the need for additional fiscal sustainability, intergenerational equity, and performance reporting. Addressing governmentwide acquisition and contracting issues: Acquisition issues are heavily represented on GAO’s list of government high-risk areas, and in the 21st century, the government needs to reexamine and evaluate both its strategic and tactical approaches to acquisition and contracting matters. GAO has played an important role in describing the current state of government contracting, identifying the challenges agencies face, and recommending specific steps agencies should take to improve their acquisition and contracting outcomes. 
I hosted a forum in July 2006 that brought together experts in the acquisition community from inside and outside the government to share their insights on challenges and opportunities for improving federal acquisition outcomes in an environment of increasing reliance on contractors and severe fiscal constraint. The observations from that forum help frame many of the federal acquisition workforce challenges that the government will have to wrestle with. In addition, the Congress has assigned GAO the responsibility for adjudicating protests of agency procurement decisions. Our bid protest decisions address specific allegations raised by unsuccessful offerors challenging particular procurement actions as contrary to procurement laws and regulations. In carrying out this role, GAO is instrumental not only in resolving the specific cases at hand, but also in helping to focus attention on how various initiatives by both the Congress and the executive branch are being implemented in practice, and we provide the Congress with assurance of enhanced transparency, performance, and accountability in the federal procurement system. Investing in GAO’s forensic investigation capabilities: This Committee actively encouraged and supported the creation within GAO of the additional capacity provided by our new Forensic Audits and Special Investigations (FSI) team in May 2005. This unit integrates the strengths of GAO’s investigative and forensic audit staff, the FraudNet hotline, and our analysts. Since its creation, FSI has performed audits and investigations for numerous congressional committees focused on fraud, waste, and abuse and homeland and national security issues. 
Specifically, for this Committee and the Permanent Subcommittee on Investigations, FSI has delivered testimonies highlighting billions of dollars of delinquent federal taxes owed by government contractors, over $1 billion of potentially fraudulent and improper Hurricane Katrina and Rita individual assistance payments, tens of millions of dollars of waste associated with misuse of premium-class travel at the State Department, and millions more of waste related to improper use of government aircraft at the National Aeronautics and Space Administration. FSI also testified that it was able to smuggle radioactive materials across the northern and southern borders using counterfeit documents. Recently, FSI hired a senior-level expert in procurement fraud, waste, and abuse, giving it the capability to do targeted work in this area. In fact, the first FSI work in this area is being performed at the request of this Committee and relates to allegations of fraud, waste, and abuse by contractors involved in recovery work following Hurricanes Katrina and Rita. GAO’s work helps to identify programs, policies, and practices that are working well, and opportunities to improve their linkages across agencies, across all levels of government, and with nongovernmental partners in order to achieve positive national outcomes. The following are a few examples of our recent efforts to assist the Congress with such insight: Providing a comprehensive framework for congressional oversight of Hurricanes Katrina and Rita: We developed a number of crosscutting and comprehensive reviews of aspects of the preparedness for, response to, and recovery from the 2005 Gulf Coast hurricanes. 
In the immediate aftermath of the storms, staff drawn from across the agency spent time in the hardest hit areas of Louisiana, Mississippi, Alabama, and Texas, collecting information from government officials at the federal, state, and local levels as well as from private organizations assisting with this emergency management effort. We examined how federal funds were used during and after the disaster and identified the rescue, relief, and rebuilding processes that worked well and not so well throughout the effort. We have issued over 40 related reports and testimonies to date, focusing on, among other issues, minimizing fraud, waste, and abuse in disaster assistance; rebuilding the New Orleans hospital care system; and developing the capabilities needed to respond to and recover from future catastrophic disasters. Building on this work, we continue to support your Committee and others through a range of audit and evaluation engagements to examine federal programs that provide rebuilding assistance to the Gulf Coast, including the federal government’s contribution to the rebuilding effort and the role it might play over the long term. We are examining lessons learned from past national emergencies and catastrophic disasters—both at home and abroad—that may prove useful in identifying ways to approach rebuilding. Recommending improved management structures for enhancing performance and ensuring accountability: We have identified a chief operating officer (COO)/chief management officer (CMO) position as one approach for building the necessary leadership and management structure that could help to elevate, integrate, and institutionalize responsibility for key functional management initiatives and provide the continuing, focused attention essential to successfully completing multiyear, high-risk business transformations. 
Such a COO/CMO position could be useful in selected agencies with significant transformation and integration challenges, such as DOD, DHS, and the Office of the Director of National Intelligence (ODNI), and would improve accountability within those agencies and to the Congress for outstanding business challenges. In that regard, I was pleased to see that an amendment creating a Deputy Secretary for Management position at DHS was recently accepted by the Senate as part of the proposed Improving America’s Security Act of 2007, and that a similar position would be established in DOD under other legislation recently introduced in the Senate. As you know, in 2005, we reported that, as currently structured, the roles and responsibilities of the DHS Under Secretary for Management contained some of the characteristics of a COO/CMO, but we suggested that the Congress consider whether a revised organizational arrangement is needed at DHS to fully capture the roles and responsibilities of a COO/CMO position. While a COO/CMO position is highly desirable within DHS and ODNI, I believe it is essential for a successful business transformation effort within DOD. Developing a framework for human capital reform: In recent years, many federal agencies, including DOD, DHS, and GAO, have achieved various legislative flexibilities in the human capital area. Others are seeking such authorities, and a risk exists that the system relating to civil servants will fragment over time. To help prevent such fragmentation and to guide human capital reform efforts, we have proposed a governmentwide framework. A forum that I hosted in 2004 outlined a set of principles, criteria, and processes that establish boundaries and checks while also allowing needed flexibility to manage agency workforces. 
To help build on this framework, we have provided information on the statutory human capital authorities that the Congress has already provided to numerous federal agencies. Given the widespread recognition that a “one size fits all” approach to human capital management is not appropriate for the challenges and demands government faces, we have proposed a phased approach to reform—a “show me” test—that requires agencies to demonstrate institutional readiness before they are allowed to implement major human capital reforms. That is, each agency should demonstrate that it has met certain conditions before it is authorized to undertake significant human capital reforms, such as linking pay to performance. The Congress used this approach in establishing a new performance management system for the Senior Executive Service (SES), which required agencies’ systems to be certified before allowing a higher pay range for SES members. Using a governmentwide framework to advance needed human capital reform should be beneficial as the federal government continues to transform how it classifies, compensates, develops, and motivates its employees to achieve maximum results within available resources and existing authorities. Key national indicators initiative: A set of key, outcome-based national indicators can help to assess the overall position and progress of our nation in key areas, frame strategic issues, support more informed policy choices, and enhance accountability. A cooperative initiative to develop a key national indicator system for the United States emerged after we, in cooperation with the National Academies, convened a forum in February 2003. 
In response to congressional interest in building upon lessons learned from other efforts both around the country and worldwide, we reported in November 2004 on the current state of the practice of developing comprehensive key indicator systems, identifying design features and organizational options for such a system in the United States. We have also helped increase international understanding and use of indicator systems, such as through my participation in the Organisation for Economic Co-operation and Development’s (OECD) First World Forum on Key Indicators in 2004 and through my upcoming participation in OECD’s Second World Forum, Measuring and Fostering the Progress of Societies, in June 2007. As development of a U.S. key national indicator system progresses, we expect to continue to be involved, building upon prior efforts and in response to congressional interests. Finally, in my view such a key national indicator system is needed, and the Congress should strongly consider a public/private partnership in order to help it become a reality. Our products and assistance to the Congress also focus on a wide range of emerging needs and identify and address governance issues that must be addressed to respond to a broad range of 21st Century challenges and opportunities. I would like to highlight just a few of our recent efforts to assist the Congress with foresight. Increasing public understanding of the long-term fiscal challenge: Since 1992, we have published long-term fiscal simulations in response to a bipartisan request from members of the Congress who were concerned about the long-term effects of our nation’s fiscal policy. Our current simulations continue to show ever-larger deficits resulting in a federal debt burden that ultimately spirals out of control. As the Chief Accountability Officer of the United States Government, I continue to call attention to our long-term fiscal challenge and the risks it poses to our nation’s future. 
I mentioned earlier my participation with the Concord Coalition, the Brookings Institution, and the Heritage Foundation in the Fiscal Wake-Up Tour. In our experience, having these people, with quite different policy views on how to address our long-range imbalance, agree on the nature, scale, and importance of the issue—and on the need to sit down and work together on a bipartisan basis and start making tough choices now—resonates with the audiences. I have long believed that the American people can accept difficult decisions as long as they understand why such steps are necessary. The Fiscal Wake-Up Tour has received the active support and involvement of community leaders, local colleges and universities, the media, the business community, and both former and current members of the Congress. We have coordinated town hall meetings in 20 states to date, with more planned in the future. Improving transparency in connection with financial, fiscal, budget, and selected legislative matters: Washington often suffers from both myopia and tunnel vision. This can be especially true in the budget debate, in which we focus on one program at a time, or on the deficit for a single year or the costs over 5 years, without asking about the bigger picture and whether the long term is getting better or worse. Since at its heart the budget challenge is a debate about the allocation of limited resources, the budget process can and should play a key role in helping to address our long-term fiscal challenge and the broader challenge of modernizing government for the 21st century. We are helping to increase the understanding of and focus on the long term in our policy and budget debates. To that end, I have outlined a number of ideas in a draft legislative proposal that we refer to as TAB—Transparency in Accounting and Budgeting. I have been sharing it with selected Members of Congress and others interested in this issue. 
The proposal would serve to increase transparency in financial and budget reporting as well as in the budget and legislative processes to highlight our long-term fiscal challenges; require publication of a summary annual report and periodic fiscal sustainability reports; and require GAO to report annually on selected financial, fiscal, and reporting matters. I am hopeful that this Committee will embrace this proposal and work with other interested members of Congress toward enactment of legislation advancing these important goals. Identifying 21st century challenges: In February 2005 we issued a report titled 21st Century Challenges: Reexamining the Base of the Federal Government, in which we identified challenges our government—and nation—face. The report laid out the case for change and identified a range of challenges and opportunities. It also presented more than 200 illustrative questions that need to be asked and answered. These questions look across major areas of the budget and federal operations, including discretionary and mandatory spending and tax policies and programs. They raise specific issues, such as how intelligence and information on threats can be shared with other levels of government, yet be held secure, and whether our current federal income-based tax system is adequate, equitable, competitive, sustainable, and administrable in an increasingly global economy. I am very pleased to see that this important report, among other things, is being used by various congressional committees as they consider which areas of government need particular attention and reconsideration. Continuing to apply a strategic framework to GAO’s work: We will be issuing products soon to help communicate the strategic framework we are using to guide all of our work, in support of the 110th Congress and in light of the challenges the nation faces. 
Specifically, we will soon issue an update of our strategic plan, which describes our goals and strategies for serving the Congress for fiscal years 2007 through 2012. The broad goals and objectives of our plan have not changed dramatically since our last plan, but events such as the continuing war in Iraq and recent natural disasters account for modifications in emphasis. Appendix III provides a draft summary of GAO’s strategic plan framework for serving the Congress (2007-2012). To assist policymakers and managers, we are also issuing separately a part of the strategic plan that contains detailed descriptions of the key themes and issues framing our strategic plan and their implications for governance. Those themes are listed in the text box below. We will also be issuing a report that brings together in one place the many strategic tools and approaches that we have identified or proposed that the Congress and others can use to help set priorities and move forward in addressing the government’s challenges. Continuing to improve on the critical role we play in supporting the Congress will require the modest enhancements to GAO’s resources and authorities that I proposed in our fiscal year 2008 budget request and discussed in my Senate appropriations hearing. Our fiscal year 2008 budget request seeks the resources necessary to allow us to rebuild and enhance our workforce, knowledge capacity, employee programs, and critical infrastructure. These items are necessary to ensure that we can continue to provide congressional clients with timely, objective, and reliable information on how well government programs and policies are working and, when needed, recommendations for improvement. In the years ahead, our support to the Congress will likely prove even more critical because of the pressures created by our nation’s current and projected budget deficit and growing long-term fiscal imbalance. 
GAO is an invaluable tool for helping the Congress review, reprioritize, and revise existing mandatory and discretionary spending programs and tax policies. Shortly after I was appointed Comptroller General in November 1998, I determined that the agency should undertake a major transformation effort. As a result of that effort, which I led along with many others, GAO has become a more results-oriented, partnerial, and client-focused organization. With your support, we have made strategic investments; realigned the organization; streamlined our business processes; modernized our performance classification, compensation, and reward systems; enhanced our ability to attract, retain, and reward top talent; enhanced the technology and infrastructure supporting our staff and systems; and made other key investments. These transformational efforts have allowed us to model best practices, lead by example, and provide significant support for congressional hearings, while achieving record results, very high client satisfaction ratings, and high employee feedback ratings without significant increases in funding. In fact, despite record results, GAO’s budget has declined by 3 percent in purchasing power from 2003 to 2007, as shown in appendix IV. Transformational change and innovation are by definition challenging and controversial, but at the same time are essential for progress. Our fiscal year 2008 budget request includes funds to regain the momentum needed to achieve our key goals. 
Specifically, our fiscal year 2008 budget request will allow us to address supply and demand imbalances in responding to congressional requests for studies in areas such as health care, disaster assistance, homeland security, the global war on terrorism, energy and natural resources, and forensic auditing; address our increasing bid protest workload; be more competitive in the labor markets where we compete for talent; address critical human capital components, such as knowledge capacity building, succession planning, and staff skills and competencies; enhance employee recruitment, retention, and development programs; restore program funding levels and regain our purchasing power; undertake critical initiatives necessary to continuously reengineer processes aimed at increasing our productivity and effectiveness and addressing identified management challenges; and pursue deferred and pending critical structural and infrastructure maintenance and improvements. In my recent testimony, I noted that we would be seeking to increase GAO’s staffing level from 3,159 up to 3,750 over the next 6 years in order to address critical needs including supply and demand imbalances, high-risk areas, 21st century challenges questions, technology assessments, and other areas in need of fundamental reform. Furthermore, we plan to establish a presence in Iraq beginning later this fiscal year to provide additional oversight of issues deemed important to the Congress, subject to receiving support from the State Department and approval of our supplemental budget request. In addition to providing the resources we need to support the Congress, we will also be seeking enactment of a set of statutory provisions that would enhance our ability to provide the Congress the information and analysis it needs to discharge its constitutional responsibilities. 
Among other things, we will seek to modernize authority for the Comptroller General and his/her authorized representatives to administer oaths in performance of the work of the office. To keep the Congress apprised of difficulties we have interviewing agency personnel and obtaining agency views on matters related to ongoing mission work, we will suggest new reporting requirements. When agencies or other entities fail to respond to requests by the Comptroller General to have personnel provide information under oath, make personnel available for interviews, or provide written answers to questions, the Comptroller General would report to the Congress as soon as practicable and also include such information in the annual report to the Congress. These reporting requirements would be a supplement to existing GAO statutory authorities. GAO has authority to audit and access the records of elements of the Intelligence Community. Nevertheless, over the years, the Justice Department has questioned our authority in the area. In that regard, the Congress is considering S.82, The Intelligence Community Audit Act of 2007, sponsored by Senators Akaka and Lautenberg. S.82 would reaffirm GAO’s existing statutory authority to audit and evaluate financial transactions, programs, and activities of the Intelligence Community. The success of the Intelligence Community is obviously of enormous importance to the nation, and it commands significant budget resources. I believe that there are many areas in which GAO can support the intelligence committees in their oversight roles and, by extension, the Congress and the Intelligence Community. For example, we could review human capital management, including pay for performance systems; information technology architectures and systems; acquisition and contract management; information-sharing processes, procedures, and results; and Intelligence Community transformation efforts, metrics, and progress. 
I would add that while GAO personnel with appropriate clearances and accesses have responsibly reviewed programs that deal with technical sources and methods of intelligence collection, I am confident that there are very few cases in which our review of systems, processes, and their applications would require access to sensitive intelligence sources and methods or names of individuals. In regard to GAO’s human capital flexibilities, among other provisions, we are proposing a flexibility that allows us to better approximate market rates for certain professional positions by increasing our maximum pay for other than the SES and Senior Level from GS-15, step 10, to Executive Level III. This authority has already been granted to selected other federal agencies, including DOD. Additionally, under our revised and contemporary merit pay system, certain portions of an employee’s merit increase, below applicable market-based pay caps, are not permanent. Since this may affect an employee’s high three for retirement purposes, another key provision of the bill would enable these nonpermanent payments to be included in the retirement calculation for all GAO employees, except senior executives and senior-level personnel. We are also seeking enactment of legislation to establish a Board of Contract Appeals at GAO to adjudicate contract claims involving contracts awarded by legislative branch agencies. GAO has performed this function on an ad hoc basis over the years for appeals of claims from decisions of the Architect of the Capitol on contracts that it awards. Recently we have agreed to handle claims arising under Government Printing Office contracts. 
The legislative proposal would promote efficiency and predictability in the resolution of contractor and agency claims by consolidating such work in an established and experienced adjudicative component of GAO, and would permit GAO to recover its costs of providing such adjudicative services from legislative branch users of those services. Finally, we have identified a number of legislative mandates that either are no longer meeting their intended purposes or should be performed by an entity other than GAO. We are working with the cognizant entities and the appropriate authorization and oversight committees to discuss the potential impact of legislative relief for these issues. I appreciate your support for our efforts to provide the best professional products and services to the Congress. GAO, of course, is not alone in helping the Congress. For example, the inspectors general of the various agencies and departments are essential partners in carrying out congressional oversight. In addition, the Congressional Research Service and Congressional Budget Office have important roles to play. However, GAO is uniquely positioned to provide the Congress with the timely, objective, reliable, and original research information it needs to discharge its constitutional responsibilities, especially in connection with oversight matters. We look forward to continuing to work with you on near-term oversight, fundamental review of the base of government, and approaches to this century’s governance challenges and opportunities. This concludes my prepared statement. I would be happy to respond to any questions the members of the Committee may have.

1. Reduce the Tax Gap
2. Address Governmentwide Acquisition and Contracting Issues
3. Transform the Business Operations of the Department of Defense, Including Addressing All Related “High-Risk” Areas
4. Ensure the Effective Integration and Transformation of the Department of Homeland Security
5. Enhance Information Sharing, Accelerate Transformation, and Improve Oversight Related to the Nation’s Intelligence Agencies
6. Enhance Border Security and Enforcement of Existing Immigration Laws
7. Ensure the Safety and Security of All Modes of Transportation and the Adequacy of
8. Strengthen Efforts to Prevent the Proliferation of Nuclear, Chemical, and Biological Weapons and Their Delivery Systems (Missiles)
9. Ensure a Successful Transformation of the Nuclear Weapons Complex
10. Enhance Computer Security and Deter Identity Theft
11. Ensure a Cost-Effective and Reliable 2010 Census
12. Transform the Postal Service’s Business Model
13. Ensure Fair Value Collection of Oil Royalties Produced from Federal Lands
14. Ensure the Effectiveness and Coordination of U.S. International
15. Review the Effectiveness of Strategies to Ensure Workplace Safety

Policies and Programs That Are in Need of Fundamental Reform and Reengineering
1. Review U.S. and Coalition Efforts to Stabilize and Rebuild Iraq and Afghanistan
2. Ensure a Strategic and Integrated Approach to Prepare for, Respond to, Recover, and
3. Reform the Tax Code, Including Reviewing the Performance of Tax Preferences
4. Reform Medicare and Medicaid to Improve Their Integrity and Sustainability
5. Ensure the Adequacy of National Energy Supplies and Related Infrastructure
6. Reform Immigration Policy to Ensure Equity and Economic Competitiveness
7. Assess Overall Military Readiness, Transformation Efforts, and Existing Plans to Assure the Sustainability of the All-Volunteer Force
8. Assure the Quality and Competitiveness of the U.S. Education System
9. Strengthen Retirement Security Through Reforming Social Security, Increasing Pension Saving, and Promoting Financial Literacy
10. Examine the Costs, Benefits, and Risks of Key Environmental Issues
11. Reform Federal Housing Programs and Related Financing and Regulatory
12. Ensure the Integrity and Equity of Existing Farm Programs
13.
Addressing Challenges In Broad-Based Transformations Strategic Human Capital Management Managing Federal Real Property Protecting the Federal Government’s Information Systems and the Nation’s Critical Implementing and Transforming the Department of Homeland Security Establishing Appropriate And Effective Information-Sharing Mechanisms to Improve DOD Approach to Business Transformation DOD Personnel Security Clearance Program FAA Air Traffic Control Modernization Financing the Nation’s Transportation System(New) Ensuring the Effective Protection of Technologies Critical to U.S. National Security Interests(New) Transforming Federal Oversight of Food Safety(New) This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Committee sought GAO's views on the role GAO has played in assisting congressional oversight and the authorities and resources GAO needs to further improve its assistance to the Congress. Today's testimony discusses some of the ways that GAO has helped "set the table" for this Committee, the Congress, the executive branch, and the nation to engage in a constructive and informed dialogue about the challenges and opportunities our nation is facing in the 21st century. It also discusses the authority and resources GAO will need to address the critical oversight and other needs of the Congress. GAO is a key tool for the Congress as it works to improve economy, efficiency, effectiveness, equity, and ethics within the federal government. To better meet the needs of the Congress, GAO has transformed itself to provide a range of key oversight, insight, and foresight services while "leading by example" in transforming how government should do business. GAO's oversight work has traditionally focused on ensuring government entities are spending funds as intended by the Congress and complying with applicable laws and regulations, while guarding against fraud, waste, abuse, and mismanagement. For example, since the early 1990s, GAO has updated its list of government programs and operations across government that it identifies as "high risk." It has contributed to the Congress enacting a series of governmentwide reforms and achieving tens of billions of dollars in financial benefits. Last November, GAO issued recommendations for oversight in the 110th Congress ranging from Iraq, to food safety, to the tax gap. GAO work also provides important insight into what programs, policies, and operations are working well; best practices to be shared and benchmarked; how agencies can improve the linkages across the silos of government; and how different levels of government and their nongovernmental partners can be better aligned to achieve important outcomes for the nation. 
For example, GAO developed a number of crosscutting and comprehensive reviews of the preparedness for, response to, and recovery from the 2005 Gulf Coast hurricanes. GAO has issued over 40 related reports and testimonies, and in work for this Committee and others GAO is examining lessons learned from past national emergencies and catastrophic disasters--both at home and abroad--that may prove useful in identifying ways to approach rebuilding. Finally, GAO's work can provide the Congress with foresight by highlighting the long-term implications of today's decisions and identifying key trends and emerging challenges facing our nation before they reach crisis proportions. As the Chief Accountability Officer of the United States Government, the Comptroller General continues to call attention to the nation's long-term fiscal challenge and the risks it poses to our nation's future. Continuously improving on the critical role GAO plays in supporting the Congress will require enhancements to GAO's resources and authorities. GAO's fiscal year 2008 budget request seeks resources to allow it to rebuild and enhance its workforce, knowledge capacity, employee programs, and infrastructure. GAO will be proposing changes to its authority, such as the ability to administer oaths in conducting its work, relief from certain mandated reviews, additional human capital flexibilities, and the creation of a Board of Contract Appeals at GAO. Finally, the Comptroller General has noted that GAO should be increased in size over the next 6 years to address the current and anticipated needs of the Congress.
In the past, USDA had several persistent weaknesses in internal control and in accounting and financial reporting that contributed to the OIG’s inability to render an opinion on the department’s consolidated financial statements. The OIG reported, among other things, that USDA was unable to: provide sufficient, competent evidential matter to support numerous material line items on its financial statements including accounts receivable, fund balance with the Department of the Treasury (Treasury), and property, plant, and equipment; and estimate and reestimate loan subsidy costs for its net credit program receivables, rendering it unable to implement the Federal Credit Reform Act of 1990 and related accounting standards. The OIG also identified internal control weaknesses over USDA’s security controls for information technology and financial management systems that do not always process and report departmentwide financial information accurately. Further, the OIG reported that many USDA financial management systems are not fully integrated with other USDA systems. These are some of the factors that required extraordinary effort to derive reliable financial information. Further, we reported in December 2001 that USDA had not yet fully implemented certain key provisions of the Debt Collection Improvement Act (DCIA) of 1996. I will now elaborate on USDA’s progress in correcting these problems and what challenges still remain. USDA has taken actions over the last several years to improve its financial management and to address the weaknesses identified by its OIG and us. For example, in fiscal year 2000, Food and Nutrition Service was, for the first time, able to estimate its gross accounts receivable and related estimate of uncollectible amounts resulting from over-issued benefits in its Food Stamp Program. 
Further, for the first time since credit reform reporting requirements were implemented in 1994, USDA’s lending agencies were able to estimate and reestimate loan subsidy costs for the department’s net credit program receivables, which totaled about $74 billion as of September 30, 2001. Because of USDA’s achievement in this area, along with that of other key lending agencies, this item was no longer a factor contributing to our disclaimer of opinion on the financial statements of the U.S. government. The OIG also noted that USDA made significant progress during fiscal year 2002 in reconciling its Fund Balance accounts with Treasury’s accounts, thus enabling the OIG to validate this line item on USDA’s fiscal year 2002 financial statements. However, the OIG continued to report this area as a material internal control weakness in fiscal year 2002 due to continuing deficiencies in USDA’s reconciliation processes. For example, USDA had a large backlog of unreconciled items that needed to be researched and resolved. As a result, USDA adjusted its records to agree with the Treasury without reconciling the differences. Over $180 million (net) of year-end adjustments were not supported by transaction-level details. Further, USDA will need to continue its actions in addressing weaknesses in its financial management information systems. In its fiscal year 2002 audit report, the OIG stated that USDA made significant improvements in its overall financial management, such as implementation of a departmentwide standard accounting system, the Foundation Financial Information System (FFIS). At the same time, USDA must fundamentally improve its underlying internal controls, financial management systems, and operations to allow for the routine production of accurate, relevant, and timely data to support program management and accountability. 
Specifically, the Federal Financial Management Improvement Act (FFMIA) of 1996 requires agencies to institute financial management systems that substantially comply with federal financial systems requirements, applicable federal accounting standards, and the federal government’s Standard General Ledger (SGL). Every year since FFMIA was enacted, the OIG has reported that USDA’s systems did not substantially comply with the act’s requirements. The OIG reported that the lack of compliance stems from USDA’s many disparate accounting systems that are not integrated; material internal control weaknesses; and, as explained earlier, the inability to prepare auditable financial statements on a routine basis. For example, USDA and its agencies operate at least 80 program and administrative systems that support financial management. The longstanding problems associated with these legacy systems were caused, primarily, by the absence of corporate level oversight and planning when these systems were initially developed and upgraded. USDA needs to continue to address the problems with its legacy systems to improve integration of the financial management architecture, timely reconcile its property system with the general ledger, and correct inconsistencies in its accounting processes. Additionally, the OIG continued to report that USDA’s systems are not designed to provide the reliable and timely cost information required to comply with Statement of Federal Financial Accounting Standards No. 4, Managerial Cost Accounting Concepts and Standards. Specifically, the OIG’s review of user fees disclosed that two USDA agencies were not including the full costs of their user fee programs when determining fees and thus, were not recovering the full costs of performing services for their individual programs. 
Under the President’s Management Agenda for improved financial management performance, agencies are expected to improve the timeliness, enhance the usefulness, and ensure the reliability of financial information. The expected result is integrated financial and performance management systems that routinely produce information that is (1) timely, to measure and effect performance immediately, (2) useful, to make more informed operational and investing decisions, and (3) reliable, to ensure consistent and comparable trend analysis over time and to facilitate better performance measurement and decision making. This result is key to successfully achieving the goals set out by the Congress in the Chief Financial Officers Act and other federal financial management reform legislation. In addition, the Joint Financial Management Improvement Program (JFMIP) Principals have defined success measures for financial management performance that go far beyond an unqualified audit opinion on financial statements and include measures such as financial management systems that routinely provide timely, reliable, and useful financial information and no material internal control weaknesses or material noncompliance with laws and regulations and FFMIA requirements. They also significantly accelerated financial statement reporting to improve timeliness for decision making and to discourage costly efforts designed to obtain unqualified opinions on financial statements without addressing underlying systems challenges. The OIG reported that the Office of the Chief Financial Officer has developed plans to review USDA’s legacy systems, and consolidate and update the systems to meet present accounting standards and management needs. Further, USDA’s September 30, 2002, FFMIA Remediation Plan discussed a number of remedial actions that the department expects to complete by the end of fiscal year 2006. Another financial management challenge for USDA is federal nontax delinquent debt collection. 
USDA reported holding $6.9 billion of federal nontax debt that was delinquent more than 180 days as of September 30, 2002. The Debt Collection Improvement Act of 1996 (DCIA) gave federal agencies a full array of tools to collect such delinquent debt. Among other things, DCIA provides (1) a requirement for federal agencies to refer eligible debts delinquent more than 180 days to the Department of the Treasury for collection action, and (2) authorization for agencies to administratively garnish the wages of delinquent debtors. In December 2001, we reported that two USDA agencies, Rural Development's Rural Housing Service (RHS) and the Farm Service Agency (FSA), had failed to make DCIA a priority since its enactment in 1996. Specifically, RHS had not implemented an effective and complete process to refer debts to Treasury mainly because of systems limitations, debt reporting problems, and lack of regulations needed to refer losses resulting from claims paid under its guaranteed single family housing loan program. FSA lacked effective procedures and controls to identify and promptly refer eligible delinquent debts to Treasury. Moreover, USDA had not utilized administrative wage garnishment to collect delinquent nontax debts. Consequently, opportunities for maximizing the collection of delinquent nontax debts as contemplated by DCIA were being missed. USDA officials made a commitment in December 2001 to substantially improve the department's implementation of DCIA by December 2002. In November 2002, we testified that USDA had made progress in addressing previously identified problems. For example, RHS began referring all reported eligible debt to Treasury. Further, FSA had developed an action plan to improve its process and controls for identifying and referring eligible debts to Treasury. However, at the date of our testimony, challenges remained that would require sustained commitment and priority from top management.
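The basic DCIA referral test, under which debts delinquent more than 180 days become eligible for referral to Treasury, can be expressed as a simple date filter. This is an illustrative sketch, not USDA's actual process; the record fields, identifiers, and dates are hypothetical.

```python
from datetime import date

# Illustrative sketch of the DCIA rule described above: nontax debts
# delinquent more than 180 days are eligible for referral to Treasury
# for collection. All record fields and values are hypothetical.
def eligible_for_referral(delinquent_since: date, as_of: date) -> bool:
    """Return True if a debt has been delinquent more than 180 days."""
    return (as_of - delinquent_since).days > 180

debts = [
    {"id": "A-1", "delinquent_since": date(2002, 1, 15)},
    {"id": "B-2", "delinquent_since": date(2002, 8, 1)},
]
as_of = date(2002, 9, 30)
referable = [d["id"] for d in debts
             if eligible_for_referral(d["delinquent_since"], as_of)]
# Only debt A-1 has been delinquent long enough to be referred.
```

The difficulty USDA faced was not this eligibility test itself but building the systems, regulations, and controls to apply it completely and promptly across its agencies.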
For example, RHS still had to complete regulations to refer losses related to its guaranteed single family housing loans to Treasury and an automated process for such referrals, and FSA needed to complete actions to ensure that all of its eligible debt is promptly referred to Treasury. In addition, USDA needed to complete regulations that are required to implement administrative wage garnishment departmentwide and get all of its component agencies to begin using this debt collection tool to the fullest extent practicable. The OIG reported material noncompliance with the DCIA in its fiscal year 2002 financial statement audit report, reiterating the need for sustained commitment and priority by top management. Now I would like to discuss the progress that the Forest Service has made toward achieving financial accountability and remaining challenges. An area of particular concern within USDA continues to be the Forest Service. Historically, the Forest Service's financial management systems have not generated timely and accurate financial information for its annual audit and for effectively managing operations, monitoring revenue and spending levels, and making informed decisions about future funding needs for its program. In addition, the Forest Service has had long-standing material weaknesses with regard to its two major assets—fund balance with Treasury and property, plant, and equipment. In 1999, we first designated financial management at the Forest Service to be "high risk" on the basis of serious financial and accounting weaknesses that had been identified, but not corrected, in the agency's financial statements for a number of years. The Forest Service received its first-ever unqualified opinion on its fiscal year 2002 financial statements, which represents noteworthy progress from prior years when the OIG was unable to express an opinion.
To achieve its unqualified opinion, the Forest Service’s top management dedicated considerable resources and focused staff efforts to address accounting and reporting deficiencies that had prevented a favorable opinion in the past. For example, during fiscal year 2002 the Forest Service formed a reconciliation strike team to resolve long-standing real and personal property accounting deficiencies. The property, plant, and equipment reconciliation team analyzed transaction data to identify inaccurate records and reconciled the general ledger to its supporting detailed records. In addition, the strike team, in cooperation with the USDA Office of the Chief Financial Officer, the USDA OIG, and consultants, worked to ensure that property documentation supported property records, inventories were complete, and property was valued correctly. Further, the team worked with USDA on modifications and enhancements to certain property feeder systems. Because the Forest Service property comprises 80 percent of the $4.2 billion line item on USDA’s financial statements, the OIG was able to validate this number for its fiscal year 2002 opinion. However, material deficiencies in the controls related to the accurate recording of property, plant, and equipment transactions remain. For example, the financial statement auditor reported instances in which recorded amounts did not agree with supporting documentation and inappropriate payroll expenses were included in property values instead of being recorded as expenses, resulting in an overstatement of property and an understatement of expenses. Further, the Forest Service did not have effective controls over the initial recording of acquisition costs, in-service date, and useful life of property items. 
Because the Forest Service did not require reviews of data input for property transactions by a supervisor, another independent person, or by automated system edit checks within property systems, certain property items were not recorded properly. While the Forest Service made significant progress in fiscal year 2002 to reconcile its fund balance with Treasury accounts, the financial statement auditor noted significant control deficiencies in its reconciliation processes. For example, the Forest Service needs to research a large backlog of unreconciled items and take corrective actions. In order to bring the Forest Service’s fund balance with Treasury accounts into balance with Treasury records as of September 30, 2002, the Forest Service recorded an adjustment of $107 million. Although the Forest Service reached an important milestone by attaining a clean audit opinion on its financial statements, it has not yet proven it can sustain this outcome, and it has not reached the end goal, as envisioned by the President’s Management Agenda for improved financial management and the JFMIP Principals, of routinely having timely, accurate, and useful financial information. The Forest Service continues to commit considerable resources to correcting its financial management weaknesses; however, much work remains. In our January 2003 high-risk update, we again designated financial management at the Forest Service as “high risk” on the basis of its serious internal control weaknesses. In closing, Mr. Chairman, I want to emphasize that USDA has made significant progress in addressing its major challenges related to financial management and continues to do so. At the same time, before USDA is able to sustain financial accountability and produce relevant, reliable, and timely information to effectively manage the department, it and its component agencies, particularly the Forest Service, must resolve some very difficult issues. This concludes my statement. 
I would be happy to answer any questions you or other members of the subcommittee may have. For information about this statement, please contact McCoy Williams, Director, Financial Management and Assurance, at (202) 512-6906, or Alana Stanfield, Assistant Director, at (202) 512-3197. You may also reach them by e-mail at williamsm1@gao.gov or stanfielda@gao.gov. Individuals who made key contributions to this testimony include Lisa Crye and Jeff Isaacs.
In January, we issued our Performance and Accountability Series on management challenges and program risks at major agencies, including the U.S. Department of Agriculture (USDA). The report for USDA focused on a number of major management challenges, including enhancing financial management, and continued the high risk designation for Forest Service financial management. For many years, USDA struggled to improve its financial management activities, but inadequate accounting systems and related procedures and controls hampered its ability to get a clean opinion on its financial statements. After eight consecutive disclaimers of opinion, USDA's Office of Inspector General issued an unqualified opinion on USDA's fiscal year 2002 financial statements and reported that significant progress had been made in improving overall financial management. For each of USDA's agencies that prepared separate financial statements for fiscal year 2002, the audit opinions were also positive. Specifically, unqualified audit opinions were issued on the financial statements of the Forest Service, Federal Crop Insurance Corporation/Risk Management Agency, Commodity Credit Corporation, the Rural Development mission area, and the Rural Telephone Bank. While we consider these clean opinions a positive step, some of these could not have been rendered without extraordinary efforts by the department and its auditors. Achieving financial accountability will require more than heroic efforts to obtain year-end numbers for financial statement purposes. Without reliable financial systems and sound internal controls, it is not possible to have sound data on a timely basis for decision making. Before USDA can achieve and sustain financial accountability, and thus be in a position to have reliable system-generated data as needed, it and its component agencies, particularly the Forest Service, must address a number of serious problems that USDA's Office of the Inspector General (OIG) or we have reported. 
In the past, USDA had several persistent weaknesses in internal control and in accounting and financial reporting that contributed to the OIG's inability to render an opinion on the department's consolidated financial statements. The OIG reported, among other things, that USDA was unable to provide sufficient, competent evidential matter to support numerous material line items on its financial statements, including accounts receivable, fund balance with the Department of the Treasury, and property, plant, and equipment. The OIG also reported that USDA was unable to estimate and reestimate loan subsidy costs for its net credit program receivables, rendering it unable to implement the Federal Credit Reform Act of 1990 and related accounting standards. USDA has taken actions over the last several years to improve its financial management and to address the weaknesses identified by its OIG and us. For example, in fiscal year 2000, Food and Nutrition Service was, for the first time, able to estimate its gross accounts receivable and related estimate of uncollectible amounts resulting from over-issued benefits in its Food Stamp Program. Further, for the first time since credit reform reporting requirements were implemented in 1994, USDA's lending agencies were able to estimate and reestimate loan subsidy costs for the department's net credit program receivables, which totaled about $74 billion as of September 30, 2001. Because of USDA's achievement in this area, along with that of other key lending agencies, this item was no longer a factor contributing to our disclaimer of opinion on the financial statements of the U.S. government.
IRS estimated, for tax year 2001, that $11 billion of the tax gap could be attributed to individual taxpayers who misreport income from capital assets, such as securities and other assets owned for investment or personal purposes. Specific to securities transactions, we estimated, based on IRS data and examination of case files, that for the same year, 38 percent of individual taxpayers misreported their capital gains or losses. To help prevent some taxpayer misreporting, brokers must, under the new requirements, report the adjusted cost basis for certain securities on a revised Form 1099-B, "Proceeds From Broker and Barter Exchange Transactions." For certain securities, brokers must begin collecting these data on January 1, 2011, and report them to IRS in 2012. Brokers must begin collecting information on additional securities, beginning on January 1, 2012. Generally, a taxpayer's gain or loss from a securities sale is the difference between the gross proceeds from the sale and the original purchase price, or cost basis, net of any fees or commissions. However, to determine any gains or losses from securities sales, the taxpayer must determine if and how the original cost basis of the securities must be adjusted to reflect certain events, such as stock splits. For years, brokers have been required to report information on Form 1099-B, such as descriptions of securities sold, sales date, and gross proceeds. However, the law changed what information is reported and who reports it. Prior to the law's effective date, the taxpayer was responsible for calculating cost basis and reporting it to IRS on their tax return. Now, brokers will be responsible for reporting cost basis information to taxpayers and IRS on the Form 1099-B. The Form 1099-B is due to taxpayers on February 15, and to IRS on February 28 for paper returns and March 31 for electronic returns, for the prior calendar year's security sales.
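The gain-or-loss arithmetic described above can be illustrated with a short sketch. The numbers are hypothetical, and for simplicity the example ignores fees and commissions; it shows only how one basis-adjusting event, a forward stock split, changes the per-share basis before the gain is computed.

```python
def adjusted_basis_per_share(total_cost: float, shares_bought: int,
                             split_ratio: int = 1) -> float:
    """Per-share cost basis after a forward stock split.

    A 2-for-1 split doubles the share count, so the per-share basis
    is halved; the total basis of the position is unchanged.
    """
    return total_cost / (shares_bought * split_ratio)

def capital_gain(gross_proceeds: float, shares_sold: int,
                 basis_per_share: float) -> float:
    """Gain (negative for a loss): gross proceeds minus adjusted basis sold."""
    return gross_proceeds - shares_sold * basis_per_share

# Hypothetical example: 100 shares bought for $1,000, then a 2-for-1
# split; 50 of the resulting 200 shares are sold for $400.
basis = adjusted_basis_per_share(1000.0, 100, split_ratio=2)  # 5.0 per share
gain = capital_gain(400.0, 50, basis)                         # 150.0
```

Under the new requirements, tracking the adjusted basis through events like the split above becomes the broker's responsibility rather than the taxpayer's.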
Additional changes resulting from the law are described in table 4, appendix II. Transaction settlement reporting is expected to help IRS identify and prevent the underreporting of business income. Under the new requirements, all merchant transactions completed beginning on January 1, 2011, in which either a payment card or a third-party payment network is used as the form of payment, must be reported by payment settlement entities (PSEs) on the new Form 1099-K, "Merchant Card and Third Party Network Payments." Information reporting on merchants—businesses that accept payment cards or payment from a third-party settlement organization for goods and services—is new for the transaction settlement industry. A payment card is a card-based payment, such as a credit card, debit card, or prepaid telephone card, which is accepted by a group of unrelated merchants. For example, a gift card for a shopping mall is a payment card because it is accepted as payment at a network of unrelated stores; however, a gift card for a specific store is not a payment card because it is only accepted by the store that issued it. A third-party payment network accepts various forms of payment from a customer to settle transactions with merchants who are unrelated to the network. Examples of third-party payment networks include PayPal, certain toll road automated payment systems, and certain shared service organizations (such as certain accounts payable services). A PSE—a bank or other organization that processes transactions and makes payments to the merchant accepting the payment card or the third-party settlement organization that makes payment to the merchant—is responsible for reporting payment card and third-party network transactions annually to IRS and to the merchant, on the Form 1099-K.
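At its core, the PSE's annual reporting duty is an aggregation of each merchant's settled transactions by calendar year. The sketch below illustrates that aggregation only; the merchant identifiers and amounts are hypothetical, and an actual Form 1099-K filing involves many more data elements.

```python
from collections import defaultdict
from datetime import date

# Hypothetical settlement records: (merchant id, settlement date, gross amount).
transactions = [
    ("M-100", date(2011, 3, 2), 120.00),
    ("M-100", date(2011, 7, 9), 80.00),
    ("M-200", date(2011, 5, 1), 49.95),
]

# Sum each merchant's gross transaction amounts per calendar year; this
# yearly total is the kind of figure a PSE would report on Form 1099-K.
totals = defaultdict(float)
for merchant, settled_on, amount in transactions:
    totals[(merchant, settled_on.year)] += amount
```

Note that the totals here are gross sums; as discussed next, the requirements direct PSEs to report gross amounts without netting out credits, fees, refunds, or other deductions.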
The new requirements direct PSEs to report the gross amount of reportable payment transactions, which is the total dollar amount of aggregate transactions for each merchant, for each calendar year, without regard to adjustments for credits, cash equivalents, discounts, fees, refunds, or other deductions. In some cases, more than one PSE may be involved in a single transaction, in which case the PSE that actually makes payment to the merchant is responsible for filing the Form 1099-K. When a customer (cardholder) purchases goods or services from a merchant using a payment card, the merchant submits the transaction to the PSE for approval. The PSE submits a request through the card network, such as Visa or Mastercard, to the bank or other entity that issued the card (issuer). The issuer checks the customer’s account to determine if the customer is able to cover the cost of the transaction. If so, the issuer bills or debits the customer’s account for the amount of the transaction. Figure 1 shows this process for a typical credit or debit card transaction, two commonly used types of payment cards. Third-party payment network transactions are similar to credit and debit card transactions in that the third-party network facilitates transactions between unrelated merchants and customers. Third-party payment networks have widely varying business models, and can encompass many different types of payment situations that are not easily generalized, according to IRS officials and industry representatives. Typically, a customer pays the third-party settlement organization for a transaction with an agreed upon form of payment, which may include a payment card, and the third-party settlement organization settles the transaction with the merchant. One example of a third-party network is certain toll collection networks. Some states that operate toll roads contract with a third-party settlement organization to bill customers for road usage. 
The third-party settlement organization provides a system that allows the toll facility to record the passage of a vehicle with a transmitter inside. The third-party settlement organization periodically bills customers’ accounts and makes payments to the state to settle the toll transactions. IRS initiated the IRDM program in 2009 in part to implement the two new information reporting requirements, but more generally to increase voluntary compliance by expanding and maximizing its ability to use existing and future information returns and establishing a new business information matching program. Formerly, IRS had only matched information returns to individuals’ and sole proprietors’ tax returns. Under IRDM, IRS plans to build several new information technology (IT) systems and enhance some existing systems as well as implement numerous organizational and process changes. Specifically, IRS plans for IRDM to house a new process to use information returns to identify individual and business tax returns that are likely sources of revenue and that are overlooked by the current individual tax return matching system. IRDM implementation involves many IRS groups and offices, and is led by the Small Business/Self Employed (SB/SE) division and Modernization and Information Technology Services (MITS). The Research Analysis and Statistics (RAS) division, the Office of Chief Counsel, and the Tax Forms and Publications group also have important roles in IRDM implementation. For example, RAS is working with IRDM on a research plan to assess the effectiveness of the program. IRDM capabilities will be implemented in stages, beginning in 2012. IRS developed a series of plans to implement the IRDM program, which will be used to implement the new cost basis and transaction settlement reporting requirements. IRDM plans cover program scope, management structures, information technology system development, communications with stakeholders, and other aspects of IRDM implementation. 
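The matching concept at the heart of IRDM, comparing third-party information-return totals against amounts reported on the corresponding tax returns and flagging large shortfalls, can be sketched in a few lines. The identifiers, amounts, and the $100 tolerance below are all hypothetical illustrations, not IRS data or parameters.

```python
# Hypothetical sketch of information-return matching. For each taxpayer,
# compare the total that third parties reported (e.g., on Forms 1099)
# with the amount the taxpayer reported, and flag large shortfalls.
def flag_underreporters(third_party_totals: dict, reported_amounts: dict,
                        tolerance: float = 100.0) -> list:
    flagged = []
    for tin, info_total in third_party_totals.items():
        reported = reported_amounts.get(tin, 0.0)
        if info_total - reported > tolerance:  # tolerance is an assumption
            flagged.append(tin)
    return flagged

third_party = {"111-11-1111": 52_000.0, "222-22-2222": 18_500.0}
reported = {"111-11-1111": 52_000.0, "222-22-2222": 12_000.0}
cases = flag_underreporters(third_party, reported)
# Only the second taxpayer shows a shortfall above the tolerance.
```

The hard part of IRDM is not this comparison itself but extending it from individual returns to business returns, which requires new IT systems, case-selection processes, and organizational changes.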
We found that IRDM implementation plans generally are consistent with criteria for effective program planning and implementation listed in our prior reports and IRS guidance. For example, these criteria call for a leadership structure, an internal communication strategy, staffing and training provisions, a review process, risk management, and alignment with the agency strategic plan. IRDM has a leadership structure, headed by an executive steering committee at the highest level, with authority over the IRDM Governance Board, whose functions include program management and coordination. It has a stakeholder management and outreach plan that specifies communication strategies, as well as a detailed staffing and organizational development plan to implement document matching for business taxpayers. IRS’s plans include provisions to review and assess the program for continuous improvement, such as a requirement to document lessons learned at the end of each significant project phase. Furthermore, IRDM has a plan that assesses and provides for the management of program risks and a plan that provides for analysis of related technology system interdependencies.

IRS has begun implementing several of these plans, to various degrees. It is too early in the implementation process to comprehensively assess whether IRDM has followed all of its plans or achieved outcomes and whether these efforts will be effective.

Regarding the schedule of IRDM implementation, we found that IRS has met most time lines established in the program implementation plans, with two notable exceptions: the release of final regulations for cost basis and transaction settlement reporting, to be discussed later in this report, and certain software development milestones.
Specifically, MITS did not meet milestone dates for the development and testing of the software expected to enhance IRS’s ability to select potential individual taxpayer cases due to a procurement delay for the associated hardware, which delayed testing by 1 month. This software is also expected to aid in the development of a new IT system to select business taxpayer cases for review. In response to the delays, IRS officials said they have reprioritized work and, as of May 2011, officials said they do not expect the delays to affect the program’s progress.

The IRDM Strategic Roadmap is the foundational plan for IRDM that describes the program’s scope, desired outcome, implementation phases, and time line. IRS guidance and our prior work state that comprehensive plans for implementing a new program should link with the agency’s strategic plan and align with its core processes and agencywide objectives. The Strategic Roadmap is aligned with IRS’s Strategic Plan, which guides and sets goals for IRS’s work at a high level. For example, the Strategic Plan establishes a goal of enforcing the law to ensure that everyone meets their obligation to pay taxes, which, according to the Strategic Roadmap, IRDM intends to support by using third-party information reporting to increase voluntary compliance and treat noncompliance.

However, the Strategic Roadmap and other IRDM plans do not document coordination with some significant recent and ongoing servicewide initiatives, such as Workforce of Tomorrow and the Nonfiler Strategy. IRS officials said they met with the initiatives’ team members to coordinate, but did not document that coordination occurred or whether or how this coordination ensured that IRDM and other servicewide initiatives were consistent and would work well together.
We did not find any aspects of IRDM plans that conflict with Workforce of Tomorrow or the Nonfiler Strategy, but documenting that IRDM plans are coordinated with servicewide initiatives would be consistent with internal control standards and could facilitate oversight, help prevent duplicative efforts, and foster a common understanding of program plans and activities.

For example, IRDM has a workforce plan for staffing a new organization to work business taxpayer cases identified by the new document matching process. The plan addresses hiring, training, and leadership, but does not show coordination with the servicewide Workforce of Tomorrow Task Force and its specific recommendations to improve IRS’s overall recruiting methods, hiring strategies, and leadership development. The Workforce of Tomorrow report notes that better coordination of leadership development efforts across IRS could lead to more consistent application of talent management tools and more effective use of processes and data for servicewide decision making.

IRDM is also planning new processes for identifying businesses that do not file tax returns, including an incipient business Automated Substitute for Return program. An IRDM plan recommends combining the planned business Automated Substitute for Return program with a related enforcement program for business nonfilers. This plan does not show coordination with IRS’s servicewide Nonfiler Strategy or discuss the Nonfiler Strategy’s potential effect on IRDM functions. IRS’s Nonfiler Strategy noted that a lack of coordination in nonfiler work results in ineffective resource allocation. IRS provided us with a document that officials stated was used to inform the Strategic Roadmap; it cites how IRDM will make some accommodations for nonfiler programs, but it does not mention or discuss coordination with the Nonfiler Strategy.
IRDM plans could demonstrate coordination with Workforce of Tomorrow and the Nonfiler Strategy by describing IRDM’s relationship with and its effect on these initiatives. After we discussed the issue with IRDM management in January 2011, officials said they are working to document this coordination in an updated version of the Strategic Roadmap, but they had not done so as of May 2011.

SB/SE and MITS, the primary IRS divisions involved in IRDM implementation, each estimated their share of IRDM program costs for IRS’s budget. SB/SE’s first budget request for IRDM, about $36 million, was made for fiscal year 2012. Officials expect that annual funding will increase as the program becomes fully operational and then remain steady for as long as IRDM continues to operate. SB/SE worked with the IRS Chief Financial Officer’s (CFO) office to develop its budget request. SB/SE calculated staffing needs based on the number and types of cases it anticipated, then used a calculator developed by the CFO’s office to determine the cost of the staff, including salaries, benefits, training, facilities, and other direct and indirect costs. MITS’s work on IRDM was funded at $23 million during fiscal year 2010, and IRS plans for funding to continue at this level through fiscal year 2016, yielding a total cost of about $166 million for fiscal years 2009-2016. MITS developed an initial cost estimate in 2008 to formulate its budget request. Total costs for IRDM since the program’s inception are shown in table 1.

According to best practices established by the GAO Cost Estimating and Assessment Guide, a cost estimate should be comprehensive, well documented, accurate, and credible. However, MITS’s IRDM cost estimate does not fully meet these four best practices (for a description of the best practices and the extent to which MITS met each characteristic, see app. III). For example, the estimate substantially meets the best practices for a comprehensive cost estimate.
The estimate covers most life-cycle costs, is supported by a document that defines the work needed to accomplish the program’s objectives and relates cost and schedule to deliverables, and provides technical descriptions for each project phase. However, although it defines assumptions and estimating standards, also referred to as ground rules, the cost estimate does not cite a rationale for the assumptions and only considers the impact of risks on a portion of the estimate.

The estimate partially meets best practices for a well documented cost estimate. It provides technical descriptions for each project phase and documents a management briefing, but it does not contain many details about the underlying data used to develop the estimate. MITS used a computer model to calculate the cost estimate, but the formulas built into this model and the resulting calculations are not shown. Thus, it would not be possible for another cost analyst outside IRS to use available documentation to recreate the estimate without access to this computer model. Moreover, although IRS provided documentation of its general cost estimation methodology, the methodology used to develop this cost estimate was not provided at a meaningful level of detail.

The estimate partially meets best practices for an accurate cost estimate. The model used to calculate the estimate was developed using data from other comparable projects, which provides insight into actual costs on similar programs. However, inflation was not included. According to IRS officials, inflation is not applied to cost estimates because it is factored in automatically during the budget process; if inflation were included in the cost estimate, it would be double-counted in the budget. Applying inflation is nonetheless an important step in creating a cost estimate, and it is a best practice for inflation to be included and documented when creating cost estimates.
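Inflation adjustment amounts to expressing all costs in a common base year. A minimal sketch of the idea follows; the index values and base year are hypothetical, not figures drawn from IRS or the budget process:

```python
# Hypothetical price-index factors (base year 2009 = 1.000); illustrative only.
PRICE_INDEX = {2009: 1.000, 2010: 1.016, 2011: 1.048}

def to_constant_dollars(nominal_cost, year, base_year=2009):
    """Convert a nominal-dollar cost to constant base-year dollars."""
    return nominal_cost * PRICE_INDEX[base_year] / PRICE_INDEX[year]

# A $23 million cost incurred in 2011, expressed in 2009 dollars:
cost_2009 = to_constant_dollars(23_000_000, 2011)
```

Without a conversion of this kind, costs incurred in different years cannot be meaningfully summed or compared.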
Cost data must be expressed in like terms, which requires the transformation of historical or actual cost data into constant dollars. Additionally, the cost estimate does not explain variances between planned and actual costs because the estimate was developed before the program started; there were no actual IRDM cost data available. A comparison between the original estimate and actual costs would allow estimators to see how well they are estimating and how the program is changing over time. In 2008, the Treasury Inspector General for Tax Administration (TIGTA) recommended that IRS provide similar information.

The estimate minimally meets best practices for a credible cost estimate. It contains a risk analysis, but the analysis addresses risks on only a small portion of the overall costs, and how the risk analysis was done is not clearly documented. IRS performed a cross-check on the estimate by using an alternative estimation method to see if it produced similar results. Specifically, IRS did one cross-check by comparing the estimate to an expected ratio of operations costs and nonrecurring costs. However, there was no evidence that other cross-checks were performed. Further cross-checks using different calculation methods could enhance the estimate’s reliability if they showed that different methods produce similar results. In addition, the cost estimate does not contain a sensitivity analysis, which would examine the effects of changing assumptions and estimating procedures and therefore highlight elements that are cost sensitive. IRS officials said they typically do not perform a sensitivity analysis unless the program has reached its preliminary design phase, which was not the case when they estimated IRDM costs. Furthermore, although the IRS group that did the cost estimate was independent from the IRDM program office, IRS did not obtain an independent cost estimate conducted by an outside group to validate it.
According to officials, due to limited resources, IRS generally does an additional independent cost estimate only for its largest programs and does not do one for a cost estimate prepared at the start of a program. Therefore, because IRDM is not a large program, according to officials, and because its cost estimate was done before the program started, an additional independent cost estimate was not done. Although we recognize that it would be challenging for IRS to do an independent cost estimate for each project because IRS lacks the resources to do so, it is a best practice to do an independent cost estimate because it would provide an unbiased test of whether the original cost estimate is reasonable. IRS officials said that, because their cost estimation procedures became more robust after this cost estimate was prepared in 2008, a revised cost estimate would follow best practices to a greater extent. Officials also said that they could more accurately estimate costs now that they know more about the IRDM program, and that they are considering revising the estimate but may not do so due to limited resources. If IRS revises the IRDM cost estimate, following best practices from our cost estimating guide could enhance its reliability.

IRDM did not use substantiated volume projections for the new Form 1099-K in some of its budget and risk management decisions because official projections were not available when those decisions were made. Making decisions without substantiated projections puts IRS at risk of misallocating resources. To support sound decisions, the source or method for obtaining data supporting decisions should be documented. IRS research standards say that data must be validated, any limitations must be disclosed, and documentation must be made available.
More specifically, IRS and industry guidance establish that estimates used in project planning should have a sound basis and documentation to instill confidence that any plans based on estimates are capable of supporting project objectives. IRS produced three different projections of the number of Forms 1099-K expected to be filed annually. One of these—the projection a contractor developed in 2010 to assess the capacity of MITS’s Filing Information Returns Electronically (FIRE) system—was developed without consulting RAS, which produces form volume projections that IRS considers reliable. This projection, and the 125 million projection SB/SE developed in 2006, also lack documentation of the assumptions and methods used to develop them. Table 2 describes the three Form 1099-K volume projections and the decisions that were based on them.

The 125 million projection was used in part to calculate SB/SE’s fiscal year 2012 request for about $36 million and 415 full-time equivalent staff: it factored into staffing calculations, such as the number of employees needed to screen potential cases and respond to discrepancies between Forms 1099-K and related business tax returns. Other data also factored into these budget and staffing calculations. In addition, the 60 million projection was used to make decisions about MITS information technology needs. The supported preliminary Form 1099-K projection produced by RAS is less than half of the projection used to inform SB/SE’s staffing calculations. IRDM officials were unable to provide documentation of the methodology used to develop SB/SE’s 125 million projection, but did provide us some of the assumptions. The FIRE Capacity Study does not provide sufficient methodology or documentation to support its findings, including its Form 1099-K projection.
For example, the study says that the assessment team obtained future volume projections by holding meetings and exchanging e-mails, but it does not explain how those projections were calculated or the basis of the information. IRDM identified the potential for new information returns to strain FIRE’s capacity as a program risk. The contractor’s capacity study, which IRS intended to address this risk, cannot reliably do so without substantiated data inputs. RAS is responsible for producing reliable form volume projections for IRS decision making, but RAS had not yet produced an official Form 1099-K projection at the time of the formation of the fiscal year 2012 IRDM budget request and the FIRE study’s release. RAS officials were not involved in developing the projection used in the FIRE Capacity Study. Consulting RAS when using Form 1099-K projections in decision making could enhance the reliability of those projections. Since we identified the issue, officials said they plan to reassess whether the FIRE system can handle incoming information returns using RAS’s preliminary projection, but they had not done so as of May 2011.

Prior to preparing the proposed regulations for cost basis and transaction settlement reporting, IRS counsel met in person and via phone with industry stakeholders to gain an understanding of issues facing the industries. Treasury officials, who worked with IRS and ultimately approve the regulations, also met with industry stakeholders. Additionally, prior to publishing proposed regulations, IRS posted notices in the Internal Revenue Bulletin to solicit responses to questions and comments on, among other things, the definitions of key terms. Representatives from the four cost basis and transaction settlement industry groups we interviewed said IRS was responsive to their concerns and that its initial outreach and information gathering efforts were good.
In addition to direct communication with industry groups, IRS also relied on the Information Reporting Program Advisory Committee (IRPAC), whose members include tax professionals and industry representatives, for input. Once each of the two proposed regulations was published, IRS conducted a public hearing and officials communicated with industry through the public comment letter process. As evidenced in lessons learned from a prior IRS implementation effort, this early engagement of external stakeholders is important in the development of the compliance and operational functions for new tax legislation.

According to IRS officials, because of unanticipated complexities of the cost basis and transaction settlement industries, IRS counsel did not meet its target dates for issuing final regulations for either reporting requirement, as shown in figure 2. Final regulations on cost basis reporting were issued in October 2010, and for transaction settlement reporting in August 2010. Both laws establish January 1, 2011, as the effective date for data collection to begin, over 2 years after the laws’ enactment in 2008. Reporting data are due to IRS in 2012 for both laws. Although IRS missed its target dates by about a year, the turnaround for finalizing regulations was relatively fast, according to IRS counsel, especially when compared with other information reporting rulemaking. One cost basis group acknowledged the short time between the enactment and the effective date of the laws. IRS officials said that the rulemakings did not meet deadlines because the cost basis and transaction settlement industries were more complex than they anticipated and learning them required more time than expected. Furthermore, according to IRS counsel, IRS does not have complete control over the timing of the issuance of regulations because they must be approved by the Department of the Treasury, which sets priorities for when regulations are issued.
The cost basis and transaction settlement reporting regulations were given priority, having been listed in Treasury’s 2009-2010 Priority Guidance Plan. However, Treasury counsel said the rulemakings posed unique challenges, such as learning new systems and becoming familiar with the industries affected by the regulations. Another Treasury official said that their review process for these regulations was relatively fast given the complexities.

After the final regulations were issued, the cost basis and transaction settlement industries had, respectively, 2 ½ months and 4 ½ months before data collection was to begin. According to IRPAC and representatives from both industries, the timing of final regulations left the industries with a short implementation time. Three cost basis groups said that, while the legislation was under development, they asked congressional staff for 18 months to implement any information systems or other changes needed to comply with final regulations; third-party payment networks said they requested a year. A senior IRDM official said companies could have started systems development before regulations were final. Although some cost basis and transaction settlement industry members used proposed regulations to guide their initial implementation, IRPAC representatives said companies had to make some assumptions about what would be in the final regulations, which increases costs.

The short implementation time may affect the quality of data sent to IRS. One cost basis industry group told us that small firms may not be ready to comply with the regulations and, as a result, taxpayers and IRS may receive inaccurate data on the Form 1099-B from those firms.
Although cost basis industry representatives believe it is too soon to tell which data quality issues will be most pressing, they pointed out that there may be significant inconsistencies in gifted and inherited securities because calculation methods are unclear and systems were not fully prepared for implementation. If these securities are transferred to other brokers, data quality issues may follow, resulting in long-term consequences for securities gifted or inherited in 2011 but sold in later years. A transaction settlement industry group identified several issues as potential data quality challenges, including that the industry does not identify merchants based on Taxpayer Identification Numbers (TIN). According to third-party payment network representatives, it is too soon to tell how data may be affected by the short implementation time.

After IRS’s issuance of final regulations, industry stakeholders sought clarification of certain issues. IRS did not provide additional written guidance or participate in outreach events until after the effective dates of the regulations. IRS officials told us that the timing of the additional guidance resulted from a lengthy review process, which included IRPAC’s review of FAQs for the transaction settlement regulations. Regarding outreach, IRDM officials told us that IRDM planned to begin outreach once final regulations were issued so that messages would be based on stable information. Continuous engagement of external stakeholders is important to ensure compliance with new tax legislation. Because IRS did not release clarifying guidance or continue outreach until after the effective dates of the laws, industry groups experienced a gap in communication from IRS which, according to industry representatives, could affect implementation.
Four industry groups told us after the final regulations were issued that they were awaiting additional information, including clarification on certain reporting responsibilities, which could affect their implementation of the laws. For example, one cost basis group pointed out that taxpayer confusion associated with reporting wash sales may cause a large volume of corrected Forms 1099-B during the year following implementation. IRS released a Frequently Asked Questions (FAQ) document for cost basis reporting on its Web site in March 2011. For transaction settlement reporting, as of May 4, 2011, IRS had not released additional written guidance since issuing the final regulations. IRS counsel said some transaction settlement companies and cost basis entities have contacted them about technical details of implementation, such as filling out forms, and that IRS has spoken with them. Additionally, outreach events that will cover both laws, such as speaking at events for tax professionals, began in February 2011 and, as of April 2011, were scheduled through November 2011. IRPAC and two industry groups we spoke with said they are not always aware of IRS’s plans for issuing guidance or beginning additional outreach.

The transaction settlement industry’s implementation also could be affected by the gap in guidance and outreach after the regulations were issued. For example, the definition in the regulations for “third-party payment network” is broad, according to representatives of several third-party payment network companies. The definition could lead some companies to question whether they will need to file a Form 1099-K, according to the companies. IRS counsel acknowledged that the applicability of the definition depends on a company’s specific business model and said the regulations could not address all possible examples of third-party payment networks. IRS counsel said they plan to post FAQs on their Web page and to do letter rulings on request.
Third-party payment network representatives we contacted told us they were unaware of IRS’s plans.

In addition to IRS counsel’s communication with reporting entities, IRDM established a team and a plan for stakeholder outreach. IRDM hired an employee shortly after the laws’ effective dates to lead the communication team, and IRDM participated in its first external outreach event at the end of February 2011. Earlier action by the IRDM outreach team might have helped to bridge the communication gaps between IRS and the cost basis and transaction settlement industries. Earlier outreach might also have helped IRS raise awareness among companies, such as certain third-party payment networks, that may not be aware that they will be required to report.

The IRDM Stakeholder Management and Communication Plan provides a potentially useful framework to analyze stakeholders’ concerns and to prescribe appropriate IRDM responses. For example, the plan describes a methodology for analyzing the potential effect of IRDM regulations on stakeholder groups and the degree of influence of each stakeholder. The IRDM team is to analyze stakeholder concerns and ideas, summarize trends, and develop strategies for specific groups. The plans also emphasize the need to gauge the effectiveness of IRDM communications. This framework, if followed, could be a useful tool to help identify and assess stakeholder needs.

IRS already has a Web page on cost basis, transaction settlement, and other new information reporting requirements. The page contains copies of the information returns and regulations for both laws, cost basis FAQs, and, for transaction settlement stakeholders, instructions for using IRS’s TIN Matching Program. The page does not contain prospective information about upcoming guidance or outreach or, for other information reporting laws, upcoming rulemaking actions.
The Department of Transportation has a Web page that contains information about the status of significant rulemakings, including scheduled milestones, actual dates that milestones were met, and explanations for any delays. The page is a public version of more detailed internal tracking of rulemaking milestones and schedules, which helps department officials determine if a rule is on or behind schedule, based on target dates. A representative from a cost basis industry group referred us to a similar Web page run by the Financial Industry Regulatory Authority, which also contains outreach information on securities regulations.

Additional Web-based information from IRS, such as information about upcoming events or IRS’s approach to letter rulings, could benefit industry stakeholders. IRS could use the Transportation or Financial Industry Regulatory Authority pages as a guide for enhancing its Web-based information on regulations and guidance, and could also include outreach information. Such information could be especially helpful for the cost basis industry as IRS begins a new rulemaking for additional securities for which cost basis information must be collected beginning in 2013. Representatives from the cost basis and transaction settlement industries said such a Web page, if kept up to date, would aid in their implementation of the laws.

Officials at IRS told us their ability to provide projected issuance dates for regulations is limited by the uncertainties in Treasury’s review process. An official in Treasury’s Office of Tax Policy agreed that their review process, which could result in significant revisions, makes it challenging to post projected release dates that are useful and accurate. According to the official, Treasury does not have an internal system for tracking rulemaking. However, Treasury and IRS officials could work together to provide projected release dates to the public.
Posting other information, such as upcoming outreach events and the release of informal guidance, such as FAQs, would also be beneficial.

IRS released draft versions of the new Form 1099-K and the revised Form 1099-B for tax year 2011 when it released the proposed regulations in late 2009 for each law; however, IRS did not release draft instructions for either form because, according to officials, they were not complete at that time. IRS solicited comments on the forms during the rulemaking process and continued communication afterwards with industry groups as new drafts were created. IRS has since posted final instructions for both forms, and officials told us they are taking comments on the instructions through August 2011. IRPAC representatives said they were unable to adequately comment on the draft forms without seeing definitions and other explanations typically included in instructions, and cost basis and transaction settlement industry stakeholders also emphasized the need for instructions to help in their implementation of the laws. Not having instructions available when draft forms were issued left industry stakeholders with some key unanswered questions, whose answers may affect their system development efforts and ultimately the data reported to taxpayers and IRS.

For example, some transaction settlement representatives asked IRS why the Form 1099-K requests the gross amount of “payments” rather than the gross amount of “reportable payment transactions” as required in the regulations. For the transaction settlement industry, there is a difference between a payment and a transaction that could affect the dollar amount reported. Specifically, the transaction amount of a purchase will almost always be greater than the payment actually received by a merchant, due to fees charged by the PSE, card issuers, or other entities facilitating the transaction.
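The distinction between gross transaction amounts and net payments can be illustrated with a short sketch; the transaction amounts and the fee rate are hypothetical:

```python
# Illustrative only: the Form 1099-K gross amount is the sum of transaction
# amounts, with no reduction for fees, refunds, or other adjustments.
transactions = [100.00, 250.00, 75.50]  # hypothetical card transactions
fee_rate = 0.025                        # hypothetical 2.5 percent processing fee

gross_reportable = sum(transactions)                      # amount the PSE reports
net_paid_to_merchant = gross_reportable * (1 - fee_rate)  # amount the merchant receives

# gross_reportable is 425.50, while net_paid_to_merchant is about 414.86
```

Under this reading, the reportable figure exceeds what the merchant actually received, which is why the industry pressed IRS to clarify which of the two the form requests.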
The draft instructions explained what was meant by the term “payments.” If transaction settlement groups had viewed the draft instructions with the draft forms, their concerns might have been addressed earlier and they could have proceeded with greater confidence in designing their data collection processes. IRS officials acknowledged that some comments made on the forms could have been avoided if the instructions had been available. According to IRS officials, releasing draft instructions with draft forms is usually not done because instructions are typically not complete by the time forms go out for comment. However, IRS officials said they have released draft instructions with forms on occasion and recognize the value in doing so.

IRDM’s plans to use the new cost basis and transaction settlement reporting data rely upon new IT systems that are expected to automatically match information returns to tax returns. The plans also provide for a new organization and new workflows for business taxpayer compliance staff. The specific plans for electronically processing the new information return data were nearly complete, as of May 2011, according to a senior IRDM official. The initial round of IT enhancements is to be operational in 2012, utilizing tax year 2011 data, and over 400 full-time equivalent staff have been requested in IRS’s fiscal year 2012 budget to, among other things, transcribe new business tax return information and reconcile returns. Additional IT enhancements are planned for subsequent years. Eventually, all current and future information return data will go through the IT systems created for IRDM. (For additional details on planned implementation time frames, see app. IV, table 6.)

The two existing programs that will be affected by IRDM are IRS’s Automated Underreporter program (AUR) and nonfiler programs. The existing AUR matches data on information returns to income reported by individual taxpayers only.
A notable planned AUR improvement is the development of technology to match data from the Form 1099-K to business tax returns. The existing IRS nonfiler programs work individual taxpayer and business nonfiler cases. IRS recently implemented a project to modernize its business nonfiler compliance program, and IRDM is developing plans to use and work with that project, according to a senior IRDM official. In particular, IRDM is assessing the feasibility of establishing a business version of the Automated Substitute for Return program, which IRS uses to estimate taxes owed and submit a return on behalf of individual nonfilers. A summary of the planned IRDM improvements is shown in table 3.

The IRDM IT systems are also intended to overcome several limitations in IRS’s existing matching program, which will allow for better use of data, including Form 1099-B data. For example, IRDM is planning to update rules—criteria for selecting cases—based on prior case results and other data. These rules are important for IRS to target the cases with potential tax assessments. With the existing system, rules are difficult to update. Because this will be the first time IRS includes businesses in the document matching program, IRS must establish rules for businesses. IRDM is conducting research to establish an initial rule set for tax year 2011, according to a senior IRDM official. As IRS gains information on business cases, the rules are to be refined. Eventually, according to the senior official, IRS would like to use industry data on the usage of payment cards to profile and segment business tax returns for appropriate treatment. IRS also plans to develop new technology to help manage individual and business cases and, eventually, to contact business taxpayers automatically through notices.
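The basic matching-and-selection idea behind such rules can be sketched as follows; the record layouts, the TINs, and the 10 percent threshold are hypothetical and are not IRS's actual selection criteria:

```python
# Hypothetical sketch: compare Form 1099-K totals to gross receipts reported
# on business tax returns, and flag large discrepancies or missing returns.

forms_1099k = {          # TIN -> total gross amount reported by PSEs
    "12-3456789": 500_000,
    "98-7654321": 120_000,
}
tax_returns = {          # TIN -> gross receipts reported on the business return
    "12-3456789": 310_000,
    "98-7654321": 118_000,
}

THRESHOLD = 0.10  # flag if reported receipts fall short of the 1099-K total by >10%

def select_cases(forms, returns, threshold=THRESHOLD):
    flagged = []
    for tin, info_total in forms.items():
        reported = returns.get(tin)
        if reported is None:           # possible nonfiler
            flagged.append((tin, "no return filed"))
        elif reported < info_total * (1 - threshold):
            flagged.append((tin, "underreported receipts"))
    return flagged

cases = select_cases(forms_1099k, tax_returns)
# -> [("12-3456789", "underreported receipts")]
```

In practice, the selection rules would weigh many more factors, such as prior case results and expected tax assessment per case, which is what the IRDM research described above is intended to inform.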
Additionally, IRDM is intended to enable monthly updates and storage of 10 years' worth of information return data, thereby modernizing the existing reliance on files that cannot be updated frequently. IRS expects to accomplish this by using the Integrated Production Model (IPM) database to house the data that feed the matching processes. IPM is designed to serve as a central repository for compliance data. It includes taxpayer data from databases known as Master Files, which contain taxpayer and business account information. In addition to the new matching technology, IRDM's planned changes will facilitate the use of data among compliance staff. Appendix IV, figure 3 shows an overview of the planned state for information return processing once IRDM is fully implemented. As of May 2011, IRS was developing some details of the plans to use the new data. For example, IRDM officials were determining how certain business taxpayer cases will be sent to, and worked in, IRS's Large Business and International division. IRS intends for the individual AUR program to benefit from IRDM, but potential resource limitations could affect the individual AUR program. IRDM was developed under the assumption that the program cannot harm the operations and production of the current individual AUR program, but officials acknowledge some risks exist. For example, IRDM plans acknowledge a risk of personnel gaps in the individual AUR program if a large number of those staff are hired into the business matching program. IRDM plans also suggest that if funding is not received for fiscal year 2011, staff from the individual program may be diverted from their current work to help in the business matching organization. IRDM considers the risk of not receiving 2011 funding to have a low probability of occurring and, if it does occur, IRDM predicts a moderate impact on schedule. 
As of May 2011, according to a senior IRDM official, IRS does not plan to realign individual AUR staff during fiscal year 2011, but a lack of funding will impact the number of test cases IRS can complete. IRS’s effective use of the new information return data to promote compliance, particularly in initial years and for business filers, will rely heavily on research to design the matching program, set initial case selection criteria, and to ensure that data feeding the IT systems are accurate. To design the data matching program, IRS is evaluating filing patterns of taxpayers and information return filers to determine when, and how often, matching can be performed, according to a senior IRDM official. To develop initial rules for selecting businesses to contact when the document matching program identifies discrepancies between Forms 1099-K and business tax returns, IRDM has conducted, and continues to conduct, research on how to best identify revenue-producing business taxpayer cases. Specifically, IRS completed a manual review of documents already filed by small corporations to estimate the volume, amount, and potential tax revenue that may be collected by contacting taxpayers about unreported income. After contacting taxpayers about income discrepancies, 21 percent of the cases resulted in a tax assessment. IRS is doing a follow-up study that will provide, among other things, additional information on business case tax revenue, taxpayer response rates to notices, and hours spent per case. This research will help establish a skeletal set of case selection criteria for 2011 data, according to a senior IRDM official. The results of this, and other research, will support additional details in IRDM’s planned use of the new data. When the new data arrive in 2012, IRDM plans call for data quality testing, prior to matching, on 2011 Form 1099-K data. Data quality testing could identify potential reporting errors which industry groups are concerned about. 
The testing, and mitigating adjustments based on any errors found, will be key to ensuring the long-term ability of IRDM's IT systems to identify productive cases. At the end of each IRDM IT project milestone, IRDM produces a lessons learned document, in accordance with IRS's Enterprise Lifecycle Guidance, which requires a lessons learned report at the end of each life-cycle phase. Lessons learned can be useful tools for an organization to identify areas of improvement as well as ways to make those improvements. The IRDM lessons identified at the end of Milestone 2 detail eight problem areas and ways to prevent them in the future. However, IRDM does not include a plan for accountability, such as assignment of implementation responsibility and periodic review of the lessons learned to ensure the improvements are implemented. For four of the Milestone 2 lessons, IRDM documented some actions to take to address each issue. IRDM did not document the individuals or offices responsible for implementing corrective measures or otherwise following up on any of the documented lessons learned. For example, in response to challenges associated with assigning subject matter experts, the IRDM lessons learned document states that the program should keep resource reassignment to a minimum; however, there is no designation of who is accountable for implementation or of a time frame for when this solution will be followed up on. IRS officials said they intend to follow up on lessons learned within the next milestone, and that each program office is responsible for ensuring that cited improvements are implemented. Without documentation of responsibilities and follow-up on lessons learned, program officials risk missing opportunities for improvement. IRDM planning documents list 31 preliminary performance measures for the program. 
IRDM has not yet committed to a final set of performance measures because, according to IRDM officials, they are determining how they will use the new information. Four of the measures are finalized. According to an IRDM plan, they expect to have some more finalized by August 2011, and others finalized by December 2011. A prior assessment we did of program implementation at IRS emphasizes the importance of developing evaluation plans prior to full project implementation in order to ensure that the data necessary for evaluation are collected in an efficient and timely way. Developing a written plan, including tasks to be completed, is an important step in assuring that necessary systems and resources are available for timely data collection. Although IRDM has identified dates on which to begin collecting performance measure data, officials did not provide a plan to develop and finalize the measures. If measures are not developed early, program managers run the risk that the necessary data for evaluation cannot be collected, which could limit the potential for meaningful performance management. Although developing measures early is important to most effectively utilize performance data, we recognize that measures may evolve over time and that the process to develop the measures may be challenging. The preliminary IRDM performance measures demonstrate two attributes of effective performance measures as identified in our prior work. For example, successful performance measures are linked with the agency’s goals and mission. The IRDM measures are linked to an IRDM strategy and outcome, as well as to IRS goals. Successful performance measures should also be designed, where appropriate, to meet a numerical goal and have an office or individual accountable for meeting that goal. Almost all of the IRDM measures are quantifiable and IRDM plans assign each measure to an organization that will be responsible for collecting and analyzing data, such as RAS. 
IRDM has not fully documented its preliminary performance measures, making it difficult to determine whether the measures meet other attributes of successful performance measures. IRS could further leverage IRDM performance measures by incorporating additional key attributes of successful performance measurement into IRDM plans. For example, the current list of measures does not contain definitions for each measure. One proposed performance measure is "taxpayer satisfaction for the Business Master File system," but no details are provided on how taxpayer satisfaction will be gauged or used. It is unclear from this description what data IRS will be assessing and how the data will be interpreted. IRDM plans should clearly state the name of each measure and include a description that is consistent with the methodology that will be used to calculate it. IRDM planning documents also do not explain how the preliminary measures were developed. Well-designed evaluation plans should be properly documented and consider the kind of information to be acquired, the sources of information, the methods to be used for sampling from data sources and for collecting information, the timing and frequency of information collection, and the basis for comparing outcomes. IRDM has a framework and process for defining performance measures and for describing their scope, data sources, methodology, and data reliability. IRDM has implemented some elements of this framework for some of the preliminary measures. For example, six of the performance measures have documentation that includes methodology. However, IRDM does not identify how baseline data for any of the measures will be collected. In addition to measuring the outcomes of IRDM, performance data are needed to contribute to IRS's planned efforts to measure whether cost basis and transaction settlement reporting increases revenue and voluntary compliance, and therefore decreases the tax gap. 
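The documentation elements discussed above (a definition, a methodology, a data source, and a baseline) could be captured in a simple record. The sketch below is purely illustrative; the field names are our own and do not reflect IRS's actual framework.

```python
from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    """Hypothetical record of the documentation a measure should carry."""
    name: str
    definition: str    # what the measure means and how it is interpreted
    methodology: str   # how the measure is calculated
    data_source: str   # where the data come from
    baseline: str = "" # how/when baseline data will be collected

    def fully_documented(self) -> bool:
        # A measure with a name but empty supporting fields is the gap
        # the report describes: named, but not fully documented.
        return all([self.definition, self.methodology,
                    self.data_source, self.baseline])

# The "taxpayer satisfaction" measure as described: name only, no details.
undocumented = PerformanceMeasure(
    "Taxpayer satisfaction for the Business Master File system", "", "", "")
```

A reviewer checking `undocumented.fully_documented()` would see it fail until the definition, methodology, data source, and baseline plan are filled in.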
As of May 2011, IRDM officials have identified one preliminary performance measure to capture the effect of the legislation on revenue, and one preliminary measure of voluntary compliance. According to IRS officials, it will be challenging to isolate the effects of the legislation on both revenue and voluntary compliance and they have not yet determined how this will be done. In particular, as of December 2010, they noted the challenges of taking into account other factors that are not easily measured. For example, when attempting to measure the effect of the legislation on voluntary compliance, it may be difficult for researchers to account for a taxpayer who, for example, fails to accurately report capital gains from non-securities investments in an effort to offset reporting the capital gains identified on the new information returns. IRS officials have said RAS is working on how to capture changes in compliance behavior in response to the new information reporting requirements. The two new information returns have the potential to improve taxpayer compliance. The new IRDM program could enhance IRS’s ability to use these and other information returns and more precisely target resources for compliance, thereby reducing the tax gap. Opportunities for improvement exist in IRDM that could help the program achieve these goals. For example, documenting IRDM coordination with related servicewide projects can help prevent inefficiencies and duplicated efforts. In addition, reliable cost estimates can help ensure that funding levels match the program’s needs. Moreover, clearly documenting the assumptions and methodology for data used to inform planning decisions, such as form volume projections, can support reliable decision making. IRS and the cost basis and transaction settlement industries had just over 2 years to implement the reporting requirements, which made timely communication from IRS critical. 
Incomplete information about the regulations, forms, and guidance for the new requirements could adversely affect the quality of data provided by the industries and undermine efforts to identify noncompliance. IRS made a noteworthy effort to communicate with industry. However, IRS could adopt additional communication approaches. Performance management provides a means to evaluate program outcomes, identify improvement opportunities, and maintain accountability. Lessons learned, which are identified at the end of each IRDM milestone, provide ongoing opportunities to enhance the program. It is important that IRS document its plans to follow up on these lessons so that improvements are implemented. Further, to the extent possible, IRS should ensure that its performance measures for IRDM have the attributes of effective measures and that procedures to collect data are timely developed. A plan to establish and implement IRDM’s performance measures would allow IRDM to move forward with fully documenting the methodology and data sources needed to measure the impact of the IRDM program. To improve implementation of cost basis and transaction settlement reporting, we recommend that the Commissioner of Internal Revenue take the following seven actions: 1. Document in IRDM plans any coordination between IRDM and the Workforce of Tomorrow and Nonfiler Strategy projects. IRS should develop procedures or requirements to incorporate in IRDM planning documents the integration between IRDM and any other servicewide projects which could affect IRDM. 2. For future updates to MITS’s IRDM cost estimate, ensure that the revised estimate is developed in a manner that reflects the four characteristics of a reliable estimate discussed in this report. 3. Clearly document the assumptions and rationale for Form 1099 volume projections used in resource planning decisions, and consult with RAS when developing projections. 4. 
Work with Treasury to share with the public its plans and expected release dates for IRDM regulations and formal guidance. IRS could consider including information similar to what is posted on the Department of Transportation's or the Financial Industry Regulatory Authority Web sites. IRS should also include other pertinent information regarding IRDM implementation, such as upcoming informal guidance, including FAQs, upcoming outreach, and a description of the letter ruling process. 5. For future releases of new or significantly revised forms, whenever possible, release draft instructions to facilitate the most useful comments. 6. Document a plan to assign responsibility and establish a procedure to follow up on the lessons learned identified after each milestone phase. 7. Develop a plan to establish and implement IRDM performance measures. The plan should include documentation of the process and rationale for developing and using IRDM performance measures, including information such as the methodology, data sources, and targets, in order to establish that the performance measures have the necessary attributes of effective measures. We provided a draft of this report to the Commissioner of Internal Revenue for his review and comment. We received written comments from the Deputy Commissioner for Services and Enforcement, which are reprinted in appendix V. IRS also provided us with technical comments, which we incorporated into the report as appropriate. IRS said it has taken actions consistent with our recommendations to improve its implementation plans. Of our seven recommendations, IRS explicitly agreed with three; without explicit agreement, described steps it is taking to address two; agreed in principle with another; and neither agreed nor disagreed with a final recommendation. IRS explicitly agreed with our recommendations regarding its cost estimate, form volume projections, and lessons learned. 
In agreement with the recommendation to ensure that a revised MITS IRDM cost estimate reflects GAO’s four characteristics of a reliable estimate, IRS said it intends to update the estimate in a manner consistent with the GAO Cost Estimating and Assessment Guide. In response to our recommendation to clearly document the assumptions and rationale for Form 1099 volume projections, IRS agreed that additional documentation for the 125 million projection would have been helpful. IRS said that RAS will provide updated estimates for use in decision making, and that it will continue to consult with RAS when developing and documenting projections. IRS also agreed to assign responsibility and establish a procedure to follow up on lessons learned. In its response, IRS said it has taken steps to improve lessons learned reports by assigning responsibility and due dates for each lesson, which will facilitate their periodic review. While not directly saying if it agreed with our recommendation on documenting coordination between IRDM and servicewide projects, IRS said it has taken steps to document coordination in the IRDM Strategic Roadmap. Similarly, in response to our performance measurement recommendation, IRS said that it will fine tune its current performance measurement plan by drafting definitions for IRDM’s performance measures and will include methodology, data sources, and targets to ensure all necessary attributes of performance measures are captured. IRS agreed in principle with our recommendation to, whenever possible, release draft instructions of new or significantly revised forms. Recognizing the value of obtaining feedback on draft instructions, IRS said that it strives to release draft instructions as quickly as possible, but needs to release forms early so that software developers and IRS technology specialists can begin programming activities. 
We agree it is not always possible to release draft instructions with new or revised forms, but doing so whenever possible can help stakeholders ensure that the data reported on such forms are appropriate and also help minimize the burden of developing systems to report data to IRS. IRS did not explicitly agree or disagree with our recommendation that it share, with the public, plans and expected release dates for IRDM outreach, regulations, and formal and informal guidance. IRS agreed that continuous engagement of stakeholders is important and highlighted that its Web site contains information reporting guidance, which IRS staff are available to discuss. However, IRS said that it cannot accurately predict release dates for formal guidance published in the Federal Register or Internal Revenue Bulletin. IRS counsel told us this was because Treasury reviews formal guidance, including regulations. IRS also said, as we note in our report, that IRDM guidance projects were listed in an annual Priority Guidance Plan published by IRS and Treasury. However, the plan only lists projects to be completed in the coming year, without more specific projected release dates and, for cost basis and transaction settlement reporting, the plan provides information on regulations but not guidance and outreach. We recognize that predicting release dates is difficult. In a Web-based environment IRS could both note this uncertainty and change estimated dates as necessary. Using the IRS Web site to post expected release dates for outreach, regulations, and guidance would help external stakeholders anticipate IRS actions and plan their implementation of the laws. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, the Chairman of the IRS Oversight Board, and the Director of the Office of Management and Budget. Copies are also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions or wish to discuss the material in this report further, please contact me at (202) 512-9039 or brostekm@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix VI. To address the four objectives of this report, we focused on the Information Reporting and Document Matching (IRDM) program because it is the program responsible for implementing cost basis and transaction settlement reporting. To assess IRS's implementation plans for the new requirements, we compared the Internal Revenue Service's (IRS) plans, such as the IRDM Strategic Roadmap and the IRDM Program Management Plan, to criteria from prior GAO reports, the Internal Revenue Manual, and other sources. When possible, we looked for evidence of IRS following its plans, but we did not broadly evaluate whether these plans and actions are contributing to the program's goals of increasing compliance. Because most components of IRDM were still being developed, we used dates in IRDM planning documents to gauge whether IRS was meeting established time frames. To assess IRS's cost estimates to implement the new requirements, we compared IRS cost estimates and budget plans for the implementation with GAO's cost estimating criteria. 
To determine to what extent the estimate adheres to the characteristics of a high-quality cost estimate, we evaluated the Modernization and Information Technology Services (MITS) division's IRDM life-cycle cost estimate to assess whether it met key characteristics identified in the GAO Cost Estimating and Assessment Guide. Our guide, which is based on extensive research of best practices for estimating program schedules and costs, indicates that a high-quality, valid, and reliable cost estimate should be well documented, comprehensive, accurate, and credible. We analyzed the cost estimating practices used by MITS against these best practices to determine whether the IRDM cost estimate is comprehensive, accurate, well-documented, and credible. We then characterized the extent to which each of these four characteristics of reliable cost estimates was met; that is, we rated each characteristic as either Met, Substantially Met, Partially Met, Minimally Met, or Not Met. To do so, we scored each of the individual key practices associated with cost and scheduling best practices on a scale of 1-5 (Does Not Meet = 1, Minimally Meets = 2, Partially Meets = 3, Substantially Meets = 4, and Meets = 5), and then averaged the individual practice scores associated with a given best practice to determine the overall rating. We shared our cost guide, the criteria against which we evaluated the program's cost estimates, as well as our preliminary findings with program officials. We then discussed our preliminary assessment results with IRDM officials and cost estimators. When warranted, we updated our analyses based on the agency response and additional documentation provided to us. To determine the extent to which IRS has issued timely regulations and guidance and undertaken outreach efforts, we interviewed IRS officials in the Office of Chief Counsel about the rulemaking process for both laws. 
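The practice-scoring approach described above can be sketched in a few lines. This is an illustrative sketch only; the rule used here to map an average score back to a rating label (rounding to the nearest label) is our assumption, not a method stated in the report.

```python
# Rating labels keyed by score, as described in the methodology above.
SCORE_LABELS = {1: "Does Not Meet", 2: "Minimally Meets", 3: "Partially Meets",
                4: "Substantially Meets", 5: "Meets"}

def overall_rating(practice_scores):
    """Average individual key-practice scores for one characteristic and
    map the average to the nearest rating label (illustrative rule)."""
    avg = sum(practice_scores) / len(practice_scores)
    return SCORE_LABELS[round(avg)], avg

# Example: a characteristic whose three key practices scored 3, 4, and 4.
label, avg = overall_rating([3, 4, 4])
```

Under this sketch, scores of 3, 4, and 4 average to about 3.67 and round to the "Substantially Meets" label.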
We analyzed the timing of the regulations and communication from IRS relative to the enactment dates and effective dates of both laws. In order to identify key issues, we examined the comment letters IRS received in response to the proposed regulations for both laws. We also reviewed the Stakeholder Management and Communication Plan, which is a plan developed by IRDM to manage communication with industry and other stakeholders. We met with representatives of the Information Reporting Program Advisory Committee (IRPAC), an IRS advisory group made up of tax professionals, as well as four private industry groups that represent companies that will be required to file information returns under the new cost basis and transaction settlement provisions: the Electronic Transactions Association (ETA), which represents the payment card industry and third-party payment networks; the Securities Industry and Financial Markets Association (SIFMA); the Financial Information Forum (FIF), which represents the financial technology industry; and the Investment Company Institute (ICI), which represents the mutual fund industry. With these groups, we discussed their communications with IRS and possible data quality issues. To examine how IRS will use the new returns to improve compliance, and the possible effects of the implementation, we examined IRS plans depicting the future information technology systems and IRDM business processes for using information returns in compliance efforts, and discussed the plans with IRS officials. To gauge whether IRS plans consider potential data accuracy issues, we compared IRDM plans for using the new data to GAO criteria for controlling data quality. To analyze IRS's plans to assess the implementation process, we reviewed the existing lessons learned documentation. To determine IRS's plans to assess program outcomes, we reviewed the preliminary performance measures found in documents such as the IRDM Program Management Plan and the IRDM Strategic Roadmap. 
To the extent possible, we assessed the preliminary measures against GAO's performance measurement and program evaluation criteria. We also interviewed IRS officials from the Research, Analysis and Statistics (RAS) division to identify efforts made to develop performance measures and measure the outcome of the program. For each objective, we shared our assessment criteria with IRS officials, who agreed with their relevance. We also interviewed IRS officials in the Small Business/Self Employed (SB/SE) division, MITS, and Forms and Publications. We gave IRS officials an interim briefing on some of the findings in this report. Our work was done mainly at IRS Headquarters in Washington, D.C., and its division office in New Carrollton, Maryland, where the officials responsible for implementing the information returns programs were located. To assess the reliability of the cost estimate data that we used to support findings in this report, we reviewed relevant program documentation, such as cost estimation spreadsheets and a report explaining the estimate, to substantiate evidence obtained through interviews with knowledgeable agency officials, where available. We found the data we used to be sufficiently reliable for the purposes of our report. We also made appropriate attribution indicating the sources of the data. We conducted this performance audit from June 2010 through May 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 4 identifies six changes to reporting requirements affecting both brokers and issuers of stock. 
These changes were highlighted in comment letters submitted by industry in response to proposed regulations. Prior to the legislation, brokers were required to provide some information, including gross sales of securities, to the Internal Revenue Service (IRS) on the Form 1099-B. The new legislation requires that, in addition to this information, brokers report adjusted cost basis information and whether a gain or loss is long term or short term. Major changes to reporting requirements can be categorized as either a tracking change or a calculation change. The tracking rules allow brokers to track events affecting the basis amount of a security over the period of ownership and to pass that information among other brokers, IRS, and taxpayers. The calculation rules instruct brokers on which of the various methodologies should be used to calculate basis and when to take into account other tax rules that affect basis. We assessed the Modernization and Information Technology Services (MITS) group's Information Reporting and Document Matching (IRDM) program cost estimate to determine the extent to which it meets best practices established by the GAO Cost Estimating and Assessment Guide. We found that the cost estimate meets one, substantially meets three, partially meets nine, minimally meets four, and does not meet two best practices. Table 5 shows the extent to which the MITS IRDM cost estimate meets these practices. Tax returns are submitted to IRS, and data from the forms are transcribed to the appropriate account master file. A master file contains tax data and related information pertaining to certain forms or taxpayers; the Individual Master File (IMF) contains data on individual taxpayers, and the Business Master File (BMF) contains data on business income taxpayers. Information returns are submitted to IRS through the Web-based Filing Information Returns Electronically system and, eventually, transmitted to the Information Returns Master File (IRMF). This process will be the same as before IRDM was implemented. 
Master file data are then consolidated into the IPM database. Tax and information return data go through the assimilation process, which performs basic checks on the forms to identify errors such as blank boxes on returns or invalid Taxpayer Identification Numbers (TIN). If a TIN is determined to be incorrect, IRS contacts the payer, who must check the TIN against records and attempt to correct the information return, which may include notifying the taxpayer. If the issue is not resolved, the payer must begin backup withholding. Next, correlation (matching) is done to identify discrepancies between the BMF or IMF and IRMF data. Under IRDM, for the first time, IRS will be matching BMF data to the IRMF. The matching results in a first list of potential cases that are grouped as underreporters or nonfilers. For each potential case, a revenue estimate is calculated, and the master files are updated to indicate a mismatch. The potential case list is further refined when statistical software and criteria for selecting the cases with the most revenue potential are applied. IRDM will allow for more frequent updates of these criteria and for information from prior cases to inform the case selection process, which results in a final case list. Certain cases are sent to the Large Business and International division, Examination, or other functions. Cases are batched and given to tax examiners for a manual review, and master files are updated. Based on the review, the taxpayer may receive notices from IRS asking for explanations of discrepancies between income reported on the tax return and the information return. In initial years of IRDM, notices for business taxpayers will be generated by a tax examiner or other staff; eventually those notices will be generated automatically based on a tax examiner's case review. Depending on the taxpayer's response, a case could be resolved. 
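As a rough illustration of the correlation step described above, the sketch below flags underreporter and nonfiler cases from toy records. The record structures, field names, and comparison rule are hypothetical simplifications for illustration; they are not IRS's actual systems or selection criteria.

```python
def correlate(tax_returns, info_returns):
    """Compare income on tax returns against totals from information
    returns (e.g., Forms 1099-K) and flag potential cases.

    tax_returns:  list of {"tin": ..., "income": ...}
    info_returns: list of {"tin": ..., "amount": ...}
    """
    reported = {r["tin"]: r["income"] for r in tax_returns}

    # Sum information-return amounts per TIN.
    totals = {}
    for ir in info_returns:
        totals[ir["tin"]] = totals.get(ir["tin"], 0) + ir["amount"]

    cases = []
    for tin, amount in totals.items():
        if tin not in reported:
            # Information returns exist, but no tax return was filed.
            cases.append({"tin": tin, "type": "nonfiler", "gap": amount})
        elif reported[tin] < amount:
            # Tax return income falls short of third-party reporting.
            cases.append({"tin": tin, "type": "underreporter",
                          "gap": amount - reported[tin]})
    return cases
```

In the real process, the resulting list would then be refined by case-selection criteria and revenue estimates before cases reach tax examiners.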
In addition to the contact named above, Libby Mixon, Assistant Director; Laurel Ball; Mary Coyle; Jennifer Echard; Ioan Ifrim; Donna Miller; Cynthia Saunders; Stacey Steele; A.J. Stephens; and Lindsay Welter made key contributions to this report.
Effective implementation of two 2008 laws by the Internal Revenue Service (IRS) could increase taxpayers' voluntary compliance. Those laws require reporting to IRS and taxpayers of cost basis for sales of certain securities and of transaction settlement information (i.e., merchants' income from payment cards or third party networks). In response to a congressional request, GAO (1) assessed IRS's implementation plans for the laws; (2) determined the extent to which IRS issued timely regulations and guidance and did outreach; (3) examined how IRS will use the new data to improve compliance; and (4) analyzed IRS's plans to assess implementation and measure performance and outcomes. GAO compared IRS's implementation plans to criteria in past GAO work and other sources; interviewed industry groups and agency officials, and reviewed rulemaking documents; examined IRS's plans to use the new data; and compared IRS's measures and evaluation plans to GAO criteria. IRS is implementing cost basis and transaction settlement reporting through the new Information Reporting and Document Matching (IRDM) program in the Small Business/Self Employed (SB/SE) and Modernization and Information Technology Services (MITS) divisions. IRDM plans show several elements of effective program management, but do not document coordination with some related IRS projects such as Workforce of Tomorrow. IRS estimated IRDM costs, but MITS's estimate does not reflect some best practices, such as adjusting for inflation. Also, IRDM did not use substantiated tax form volume projections in some budget and risk decisions. To date, IRS spent about $28 million on IRDM and requested another approximately $82 million. IRS outreach with industry stakeholders was thorough early in the rulemaking process, but IRS missed its target dates for issuing regulations by about 1 year due, according to IRS officials, to time needed to learn the complex industries. 
After IRS released final regulations, industry stakeholders sought clarification of certain issues. IRS did not release additional written guidance until after the regulations' effective dates, which industry stakeholders said may affect their implementation of the new reporting requirements. Although IRS released drafts of the newly required or revised forms, it did not release draft instructions prior to the regulations' effective dates. To use the new data, IRS is developing systems that are expected to improve IRS's existing matching of information returns to individual tax returns and expand matching to business taxpayers. The initial enhancements are to be operational in 2012. IRDM appropriately plans to conduct research and test data quality. IRDM regularly documents lessons learned; however, IRDM has not assigned responsibility or established procedures to use them. IRDM also developed preliminary performance measures to assess the implementation and outcomes, including effects on revenue and compliance. However, IRDM has not documented a plan to finalize the performance measures, such as the methodology to be used. GAO recommends, among other things, that IRS improve cost estimation, form volume projections, stakeholder communication, and performance management. IRS generally agreed with the recommendations, but did not describe plans to release draft form instructions or communicate target guidance release dates, both of which would aid industry implementation.
Under PPACA, health-care marketplaces were intended to provide a single point of access for individuals to enroll in private health plans, apply for income-based subsidies to offset the cost of these plans— which, as noted, are not paid directly to enrollees, but instead are paid to health-insurance issuers—and, as applicable, obtain an eligibility determination or assessment of eligibility for other health-coverage programs. These other programs include Medicaid and the Children’s Health Insurance Program. CMS, a unit of HHS, is responsible for overseeing the establishment of these online marketplaces, and the agency maintains the federal Marketplace. To be eligible to enroll in a “qualified health plan” offered through a marketplace—that is, one providing essential health benefits and meeting other requirements under PPACA—an individual must be a U.S. citizen or national, or otherwise be lawfully present in the United States; reside in the marketplace service area; and not be incarcerated (unless incarcerated while awaiting disposition of charges). To be eligible for Medicaid, individuals must meet federal requirements regarding residency, U.S. citizenship or immigration status, and income limits, as well as any additional state-specific criteria that may apply. When applying for coverage, individuals report family size and the amount of projected income. Based, in part, on that information, the Marketplace will calculate the maximum allowable amount of advance premium tax credit. An applicant can then decide if he or she wants all, some, or none of the estimated credit paid in advance, in the form of payment to the applicant’s insurer that reduces the applicant’s monthly premium payment. Marketplaces are required by PPACA to verify application information to determine eligibility for enrollment and, if applicable, determine eligibility for the income-based subsidies or Medicaid. 
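The advance premium tax credit calculation described above follows a general pattern set out in the tax code: the maximum annual credit is the premium of a benchmark plan less an expected household contribution, which is a sliding-scale percentage of income, and the credit cannot be negative. The sketch below is illustrative only; the function name and the dollar figures in the example are our own assumptions, not actual program values or a description of the Marketplace's system.

```python
def monthly_advance_credit(benchmark_annual_premium, household_income, applicable_pct):
    """Illustrative sketch of the advance premium tax credit calculation.

    The maximum annual credit is the benchmark-plan premium minus the
    household's expected contribution (a percentage of income); the
    credit is floored at zero and paid in advance in monthly increments
    that reduce the applicant's premium payment.
    """
    expected_contribution = household_income * applicable_pct
    annual_credit = max(benchmark_annual_premium - expected_contribution, 0.0)
    return annual_credit / 12  # monthly amount paid to the insurer

# Assumed figures: $4,800/yr benchmark premium, $25,000 income, 6.0%
# applicable percentage: (4800 - 1500) / 12 = $275 per month
credit = monthly_advance_credit(4800, 25000, 0.06)
```

An applicant can elect all, some, or none of this estimated amount in advance, as the report notes; any difference from the final credit is reconciled on the applicant's tax return.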
These verification steps include validating an applicant’s Social Security number, if one is provided; verifying citizenship, status as a U.S. national, or lawful presence by comparison with Social Security Administration or Department of Homeland Security records; and verifying household income and family size by comparison with tax-return data from the Internal Revenue Service, as well as data on Social Security benefits from the Social Security Administration. PPACA requires that consumer-submitted information be verified, and that determinations of eligibility be made, through either an electronic verification system or another method approved by HHS. To implement this verification process, CMS developed the data services hub, which acts as a portal for exchanging information between the federal Marketplace, state-based marketplaces, and Medicaid agencies, among other entities, and CMS’s external partners, including other federal agencies. The Marketplace uses the data services hub in an attempt to verify that applicant information necessary to support an eligibility determination is consistent with external data sources. In February 2016, we issued a report addressing CMS enrollment controls and the agency’s management of enrollment fraud risk for the federal Marketplace. Based on our 2014 undercover testing for qualified health plans and related work, this report included eight recommendations to HHS to strengthen oversight of the federal Marketplace. HHS concurred with our recommendations; however, it is too early to determine whether HHS will fully address the issues we identified. 
Our recommendations addressed issues also relevant to our 2015 testing described in this report, including studying changes to improve eligibility determinations and the data services hub process; tracking the value of subsidies terminated or adjusted for failure to resolve application inconsistencies; implementing procedures for resolving Social Security number inconsistencies; and conducting a comprehensive fraud risk assessment of the potential for fraud in the process for applying for qualified health plans through the federal Marketplace. Our undercover testing for the 2015 coverage year found that the health-care marketplace eligibility determination and enrollment process for qualified health plans remains vulnerable to fraud. As shown in figure 1, the federal Marketplace or selected state marketplaces approved each of our 10 fictitious applications for subsidized qualified health plans. We subsequently paid premiums to put these policies into force. As figure 1 shows, for these 10 applications, we were approved for subsidized coverage—the premium tax credit, paid in advance, and cost-sharing reduction subsidies—for all cases. The monthly amount of the advance premium tax credit for these 10 applicants totaled approximately $2,300 per month, or about $28,000 annually, equal to about 70 percent of total premiums. For 4 of these applications, we used Social Security numbers that could not have been issued by the Social Security Administration. For 4 other applications, we said our fictitious applicants worked at a company—which we also created—that offered health insurance, but the coverage did not provide required minimum essential coverage under PPACA. For the final 2 applications, we used an identity from our prior undercover testing of the federal Marketplace to apply for coverage concurrently at two state marketplaces.
Thus, this fictitious applicant received subsidized qualified health-plan coverage from the federal Marketplace and the two selected state marketplaces at the same time. For 8 applications among this group of 10, we failed to clear an identity-checking step during the “front end” of the application process, and thus could not complete the application process online. In these cases, we were directed to contact a contractor that handles identity checking. The contractor was unable to resolve the identity issues and directed us to call the appropriate marketplace. We proceeded to phone the marketplaces, and our applications were subsequently approved. The other two applicants were accepted by phone. For each of the 10 fictitious applications where we obtained qualified health-plan coverage, the respective marketplace directed that our applicants submit supplementary documentation. The marketplaces are required to seek postapproval documentation in the case of certain application “inconsistencies”—instances in which information an applicant has provided does not match information contained in data sources that the marketplace uses for eligibility verification at the time of application, or such information is not available. If there is an application inconsistency, the marketplace is to determine eligibility using the applicant’s attestations and ensure that subsidies are provided on behalf of the applicant, if qualified to receive them, while the inconsistency is being resolved using “back-end” controls. Under these controls, applicants will be asked to provide additional information or documentation for the marketplaces to review in order to resolve the inconsistency. As part of our testing, and to respond to the marketplace directives, we provided counterfeit follow-up documentation, such as fictitious Social Security cards with impossible Social Security numbers, for all 10 undercover applications.
For all 10 of these fictitious applications, we maintained subsidized coverage beyond the period during which applicants may file supporting documentation to resolve inconsistencies. In one case, the Kentucky marketplace questioned the validity of the Social Security number our applicant provided, which was an impossible Social Security number. In fact, the marketplace told us the Social Security Administration reported that the number was not valid. Nevertheless, the Kentucky marketplace notified our fictitious applicant that the applicant was found eligible for coverage. For the four fictitious applicants who claimed their employer did not provide minimum essential coverage, the marketplace did not contact our fictitious employer to confirm the applicant’s account that the company offers only substandard coverage. In August 2015, we briefed CMS, California, and Kentucky officials on the results of our undercover testing, to obtain their views. According to these officials, the marketplaces inspect submitted documents only for obvious alterations. Thus, if the documentation submitted does not appear to have any obvious alterations, it would not be questioned for authenticity. In addition, according to Kentucky officials, in the case of the impossible Social Security number, the identity-proofing process functioned correctly, but a marketplace worker bypassed identity-proofing steps that would have required a manual verification of the fictitious Social Security card we submitted. The officials told us they plan to provide training on how to conduct manual verifications to prevent this in the future. Further, California officials told us in June 2016 that the marketplace is upgrading its system in an effort to prevent use of impossible Social Security numbers.
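An "impossible" Social Security number of the kind we submitted can be caught with a purely structural check, before any match against Social Security Administration records. The sketch below illustrates the well-known structural rules (the agency has never issued numbers with area 000, 666, or 900-999, group 00, or serial 0000); the function name is ours, and this is not a description of any marketplace's actual validation system.

```python
import re

def is_possible_ssn(ssn):
    """Structural plausibility check for a Social Security number (sketch).

    Numbers with area '000', '666', or '900'-'999', group '00', or
    serial '0000' have never been issued, so any such number is
    structurally impossible regardless of record matching.
    """
    m = re.fullmatch(r"(\d{3})-?(\d{2})-?(\d{4})", ssn)
    if not m:
        return False  # not nine digits in SSN layout
    area, group, serial = m.groups()
    if area == "000" or area == "666" or area >= "900":
        return False  # never-issued area number
    if group == "00" or serial == "0000":
        return False  # never-issued group or serial
    return True

# A number beginning with '000' fails the structural check
assert not is_possible_ssn("000-12-3456")
assert is_possible_ssn("123-45-6789")
```

A check like this is cheap enough to run at application intake, which is presumably the kind of front-end screening the California system upgrade described above is meant to provide.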
In the case of applicant identity verification in particular, Covered California officials told us they believed it was likely our applicants had their identities confirmed because they ultimately submitted paper applications, signed under penalty of perjury. That attestation satisfied identity verification requirements, the officials said. As for our employer-sponsored coverage testing, CMS and California officials told us that during the 2015 enrollment period, the marketplaces accepted applicants’ attestations regarding lack of minimum essential coverage. As a result, the marketplaces were not required to communicate with the applicant’s employer to confirm whether the attestation was valid. In June 2016, California officials further told us the marketplace is updating its application process to provide tools to consumers to help them determine whether their employer-sponsored insurance meets minimum essential coverage standards. They also told us the marketplace is updating policies and procedures for sending notices to employers and developing longer-term plans for an automated system to send notices to employers. Kentucky officials told us after our 2015 testing that applicant-provided information is entered into its system to determine whether the applicant’s claimed plan meets minimum essential coverage standards. If an applicant receives a qualified health-plan subsidy because the applicant’s employer-sponsored plan does not meet the guidelines, the Kentucky marketplace sends a notice to the employer asking it to verify the applicant information. The officials told us the employer letter details, among other things, the applicant-provided information and minimum essential coverage standards. However, our fictitious company did not receive such notification.
California officials noted in June 2016 that the federal government has not made data available that would allow California to identify duplicate enrollments through different marketplaces. CMS officials told us it was unlikely an individual would seek to obtain subsidized qualified health-plan coverage in multiple states. We conducted this portion of our testing, however, to evaluate whether such a situation, as might arise with a stolen identity, would be possible. CMS officials told us the agency would need to look at the risk associated with multiple coverages. Kentucky officials told us that in response to our 2015 findings, call-center staff were retrained on identity-proofing processes, and that they are improving training for other staff as well. They also said they plan to make changes before the next open-enrollment period so that call-center representatives cannot bypass identity-proofing steps, as occurred with our applications. Further, they said they plan to improve the process for handling applications where employer-sponsored coverage is at issue. Also in response to our findings, California officials said they are developing process improvements and system modifications to address the issues we raised. Finally, in the case of the federal Marketplace in particular, for which, as noted, we conducted undercover testing previously, we asked CMS officials for their views on our second-year results compared to the first year. They told us the eligibility and enrollment system is generally performing as designed. According to the officials, a key feature of the system, when applicant information cannot immediately be verified, is whether proper inconsistencies are generated. This is important so that such inconsistencies can be addressed later, after eligibility is granted at time of application.
CMS officials noted to us in June 2016 that PPACA and federal regulations provide for instances when an individual who is otherwise eligible can receive coverage while an inconsistency is being resolved. CMS officials told us the overall approach is that CMS must balance consumers’ ability to effectively and efficiently select Marketplace coverage with program-integrity concerns. For our additional eight fictitious applications for Medicaid coverage in 2015, we were approved for subsidized health-care coverage in seven of the eight applications. As shown in figure 2, for three of the eight applications, we were approved for Medicaid, as originally sought. For four of the eight applications, we did not obtain Medicaid approval, but instead were subsequently approved for subsidized qualified health-plan coverage. The monthly amount of the advance premium tax credit for these four applicants totaled approximately $1,100 per month, or about $13,000 annually. For one of the eight applications, we could not obtain Medicaid coverage because we declined to provide a Social Security number. As with our applications for qualified health plans described earlier, we also failed to clear the initial identity-checking step for six of eight Medicaid applications. In these cases, we were likewise directed to contact a contractor that handles identity checking. The contractor was unable to resolve the identity issues and directed us to call the appropriate marketplace. We proceeded to phone the marketplaces. However, as shown in figure 2, the California marketplace did not continue to process one of our Medicaid applications. In this case, our fictitious phone applicant declined to provide what was a valid Social Security number, citing privacy concerns. A marketplace representative told us that, to apply, the applicant must provide a Social Security number. 
The representative suggested that as an alternative, we could apply for Medicaid in person with the local county office or a certified enrollment counselor. After we discussed the results of our undercover testing with California officials in 2015, they told us their system requires applicants to provide either a Social Security number or an individual taxpayer-identification number to process an application. As a result, because our fictitious applicant declined to provide a Social Security number, our application could not be processed. For the four fictitious Medicaid applications submitted to the federal Marketplace for 2015, we were told that we may be eligible for Medicaid but that the respective Medicaid state offices might require more information. For three of the four applications, federal Marketplace representatives told us we would be contacted by the Medicaid state offices within 30 days. However, the Medicaid offices did not notify us within 30 days for any of the applications. As a result, we subsequently contacted the state Medicaid offices and the federal Marketplace to follow up on the status of our applications. For the two New Jersey Medicaid applications, we periodically called the state Medicaid offices over approximately 4 months in 2015, attempting to determine the status of our applications. In these calls, New Jersey representatives generally told us they had not yet received Medicaid information from the federal Marketplace and, on several occasions, said they expected to receive it shortly. After our calls to New Jersey Medicaid offices, we phoned the federal Marketplace to determine the status of our Medicaid applications. In one case, the federal Marketplace representative told us New Jersey determined that our applicant did not qualify for Medicaid. As a result, the phone representative stated that we were then eligible for qualified health-plan coverage. 
We subsequently applied for coverage and were approved for an advance premium tax credit plus the cost-sharing reduction subsidy. In the other case, the federal Marketplace representative told us the Marketplace system did not indicate whether New Jersey received the application or processed it. The representative advised that we phone the New Jersey Medicaid agency. Later on that same day, we phoned the federal Marketplace again and falsely claimed that the New Jersey Medicaid office denied our Medicaid application. Based on this claim, the representative said we were eligible for qualified health-plan coverage. We subsequently applied for coverage and were approved for an advance premium tax credit plus the cost-sharing reduction subsidy. The federal Marketplace did not ask us to submit documentation substantiating our Medicaid denial from New Jersey. In July and August 2015, we offered to meet with New Jersey Medicaid officials to discuss the results of our testing, but they declined our offer. CMS officials told us at the time that New Jersey had system issues that may have accounted for problems in our Medicaid application information being sent to the state. CMS officials told us that this system issue is now resolved. In addition, CMS officials told us they do not require proof of a Medicaid denial when processing qualified health-plan applications; nor does the federal Marketplace verify the Medicaid denial with the state. CMS officials said that, instead, they accept the applicant’s attestation that the applicant was denied Medicaid coverage. For our North Dakota Medicaid application in which we did not provide a Social Security number but did provide an impossible immigration document number, we called the North Dakota Medicaid agency to determine the status of our application.
An agency representative told us the federal Marketplace denied our Medicaid application and therefore did not forward the Medicaid application file to North Dakota for a Medicaid eligibility determination. We did not receive notification of denial from the federal Marketplace. Subsequently, we called the federal Marketplace and applied for subsidized qualified health-plan coverage. The federal Marketplace approved the application, granting an advance premium tax credit plus the cost-sharing reduction subsidy. Because we did not disclose the specific identities of our fictitious applicants, CMS officials could not explain why the federal Marketplace originally said our application may be eligible for Medicaid but subsequently notified North Dakota that it was denied. For the North Dakota Medicaid application for which we did not provide a valid Social Security identity, we received a letter from the state Medicaid agency about a month after we applied through the federal Marketplace. The letter requested that we provide documentation to prove citizenship, such as a birth certificate. In addition, it requested a Social Security card and income documentation. We submitted the requested documentation, such as a fictitious birth certificate and Social Security card. The North Dakota Medicaid agency subsequently approved our Medicaid application and enrolled us in a Medicaid plan. After our undercover testing in 2015, we briefed North Dakota Medicaid officials and obtained their views. They told us the agency likely approved the Medicaid application because our fake Social Security card would have cleared the Social Security number inconsistency. The officials told us they accept documentation that appears authentic. They also said the agency is planning to implement a new system to help identify when applicant-reported information does not match Social Security Administration records. 
As with our applications for coverage under qualified health plans, described earlier, the state marketplace for Kentucky directed two of our Medicaid applicants to submit supplementary documentation. As part of our 2015 testing and in response to such requests, we provided counterfeit follow-up documentation, such as a fake immigration card with an impossible numbering scheme, for these applicants. The results of the documentation submission are as follows: For the application where the fictitious identity did not match Social Security records, the Kentucky agency approved our application for Medicaid coverage. In our discussions with Kentucky officials, they told us they accept documentation submitted—for example, copies of Social Security cards—unless there are obvious alterations. For the Medicaid application without a Social Security number and with an impossible immigration number, the Kentucky state agency denied our Medicaid application. A Kentucky representative told us the reason for the denial was that our fictitious applicant had not been a resident for 5 years, according to our fictitious immigration card. The representative told us we were eligible for qualified health-plan coverage. We applied for such coverage and were approved for an advance premium tax credit and the cost-sharing reduction subsidy. In later discussions with Kentucky officials, they told us the representative made use of an override capability, likely based on what the officials described as a history of inaccurate applicant immigration status information for a refugee population. Kentucky officials also said their staff accept documentation submitted unless there are obvious alterations, and thus are not trained to identify impossible immigration numbers. 
Finally, Kentucky officials said they would like to have a contact at the Department of Homeland Security with whom they can work to resolve immigration-related inconsistencies, similar to a contact that they have at the Social Security Administration to resolve Social Security–related inconsistencies. By contrast, during the Medicaid application process for one applicant, California did not direct that we submit any documentation. In this case, our fictitious applicant was approved over the phone even though the fictitious identity did not match Social Security records. We shared this result with California officials, who said they could not comment on the specifics of our case without knowing details of our undercover application. We provided a draft of this report to HHS, the California Department of Health Care Services, Covered California, the Kentucky Department for Medicaid Services, the Kentucky Health Benefit Exchange, and the North Dakota Department of Human Services. HHS and Covered California provided written comments, reproduced in appendixes II and III. HHS said it is committed to verifying eligibility of consumers who apply for health coverage through the federal Marketplace. The agency is continuing to make improvements to strengthen program integrity and Marketplace controls, HHS said. The Marketplace will continue to end coverage or adjust advance premium tax credit or cost-sharing reduction subsidies for failure to provide satisfactory documentation, HHS said. Covered California said it is committed to improving its processes with lessons learned from results of our undercover testing. Covered California said it takes vulnerabilities to fraud seriously and stressed the importance of effective fraud risk management, including an emphasis on consumer protection. HHS and Covered California also provided us with technical comments, which we have incorporated, as appropriate. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Acting Administrator of CMS, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The objectives of this report, which concludes work we initially presented in a testimony in October 2015, are to describe for the 2015 coverage year (1) results of undercover attempts to obtain qualified health-plan coverage from the federal Health Insurance Marketplace (Marketplace) and selected state marketplaces under the Patient Protection and Affordable Care Act (PPACA), for the act’s second open-enrollment period, for 2015 coverage; and (2) results of undercover attempts to obtain Medicaid coverage through the federal Marketplace and selected state marketplaces. For both objectives, to perform our undercover testing of the federal and selected state eligibility and enrollment processes for the 2015 coverage year, we created 18 fictitious identities for the purpose of making applications for health-care coverage by telephone and online. The undercover results, while illustrative, cannot be generalized to the full population of enrollees. For all 18 fictitious applications, we used publicly available information to construct our scenarios. We also used publicly available hardware, software, and materials to produce counterfeit or fictitious documents, which we submitted, as appropriate for our testing, when instructed to do so. 
We then observed the outcomes of the document submissions, such as any approvals received or requests to provide additional supporting documentation. Because the federal government, at the time of our review, operated a marketplace on behalf of the state in about two-thirds of the states, we focused part of our work on two states using the federal Marketplace—New Jersey and North Dakota. We chose these two states because they had expanded Medicaid eligibility and also delegated their Medicaid eligibility determinations to the federal Marketplace at the time of our testing. In addition, we chose two state marketplaces, California and Kentucky, for our undercover testing. We chose these two states based on factors including Medicaid expansion; population size (selection of California allowed inclusion of a significant portion of all state-based marketplace activity); differences in population (California is about nine times as populous as Kentucky); and progress made in reducing the percentage of uninsured residents. Our testing included only applications through a marketplace and did not include, for example, applications for Medicaid made directly to a state Medicaid agency. For our first objective, we used 10 applicant scenarios to test controls for verifications related to qualified health-plan coverage. Specifically, we created application scenarios with fictitious applicants claiming to have impossible Social Security numbers; claiming to be working for an employer that offers health insurance, but not coverage that meets “minimum essential” standards; or already having existing qualified health-plan coverage. We made 4 of these 10 applications online and the other 6 applications by phone. In these tests, we also stated income at a level eligible to obtain both types of income-based subsidies available under PPACA—a premium tax credit, to be paid in advance, and cost-sharing reduction.
For our second objective, we used 8 additional applicant scenarios to test controls for verifications related to Medicaid coverage. Specifically, our fictitious applicants provided invalid Social Security identities, where their information did not match Social Security Administration records, or claimed they were noncitizens lawfully present in the United States and declined to provide Social Security numbers. In situations where we were asked to provide immigration document numbers, we provided impossible immigration document numbers. We made half of these applications online and half by phone. In these tests, we also stated income at a level eligible to qualify for coverage under the Medicaid expansion, where the federal government is responsible for reimbursing the states for 100 percent of the Medicaid costs in 2015. In cases where we did not obtain approval for Medicaid, we instead attempted, as appropriate, to obtain coverage for subsidized qualified health plans in the same manner as described earlier. To protect our undercover identities, we did not provide the marketplaces with specific applicant identity information. CMS and selected state officials generally told us that without such information, they could not fully research the handling of our applications. We created our applicant scenarios without knowledge of specific control procedures, if any, that CMS or other federal agencies may use in accepting or processing applications. We thus did not create the scenarios with intent to focus on a particular control or procedure. Overall, our review covered the act’s second open-enrollment period, for 2015 coverage, as well as follow-on work after close of the open-enrollment period. We shared details of our work with CMS and the selected state marketplaces. We had additional discussions with federal and state marketplace officials in June 2016. For both objectives, we also reviewed statutes, regulations, and other policy and related information.
We conducted this performance audit from November 2014 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our related investigative work in accordance with investigative standards prescribed by the Council of the Inspectors General on Integrity and Efficiency. In addition to the contact named above, Matthew Valenta, Philip Reiff, and Gary Bianchi, Assistant Directors; Maurice Belding, Jr.; Mariana Calderón; Ranya Elias; Colin Fallon; Suellen Foth; Maria McMullen; James Murphy; George Ogilvie; Ramon Rodriguez; Christopher H. Schmitt; Julie Spetz; and Elizabeth Wood made key contributions to this report.
PPACA provides for the establishment of health-insurance marketplaces where consumers can, among other things, select private health-insurance plans or apply for Medicaid. The act requires verification of applicant information to determine enrollment or subsidy eligibility. In addition, PPACA provided for the expansion of the Medicaid program. GAO was asked to examine enrollment and verification controls for the marketplaces. This report, which follows earlier testimony, provides final results of GAO testing and describes (1) undercover attempts to obtain health-plan coverage from the federal Marketplace and selected state marketplaces for 2015, and (2) undercover attempts to obtain Medicaid coverage through the federal Marketplace and the selected state marketplaces. GAO submitted, or attempted to submit, 18 fictitious applications by telephone and online. Ten applications tested controls related to obtaining subsidized coverage available through the federal Marketplace in New Jersey and North Dakota, and through state marketplaces in California and Kentucky. GAO chose these states based partly on range of population and whether the state had expanded Medicaid eligibility under PPACA. The other 8 applications tested controls for determining Medicaid eligibility. The results, while illustrative, cannot be generalized. GAO discussed results with CMS and state officials to obtain their views. The states identified several actions being taken in response to GAO's findings. Under the Patient Protection and Affordable Care Act (PPACA), health-insurance marketplaces are required to verify application information to determine eligibility for enrollment and, if applicable, determine eligibility for income-based subsidies or Medicaid. Verification steps include reviewing and validating an applicant's Social Security number, if one is provided; citizenship, status as a U.S. national, or lawful presence; and household income and family size. 
GAO's undercover testing for the 2015 coverage year found that the health-care marketplace eligibility determination and enrollment process for qualified health plans—that is, coverage obtained from private insurers—remains vulnerable to fraud. The federal Health Insurance Marketplace (Marketplace) or selected state marketplaces approved each of 10 fictitious applications GAO made for subsidized health plans. Although 8 of these 10 fictitious applications failed the initial online identity-checking process, all 10 were subsequently approved. Four applications used Social Security numbers that, according to the Social Security Administration, have never been issued, such as numbers starting with “000.” Other applicants obtained duplicate enrollment or obtained coverage by claiming that their employer did not provide insurance that met minimum essential coverage. For eight additional fictitious applications, initially made for Medicaid coverage, GAO was approved for subsidized health-care coverage in seven of the eight cases, through the federal Marketplace and the two selected state marketplaces. Three of GAO's applications were approved for Medicaid, which was the health-care program for which GAO originally sought approval. In each case, GAO provided identity information that would not have matched Social Security Administration records. For two applications, the marketplace or state Medicaid agency directed the fictitious applicants to submit supporting documents, which GAO did (such as a fake immigration card), and the applications were approved. For the third, the marketplace did not seek supporting documentation, and the application was approved by phone. For four, GAO was unable to obtain approval for Medicaid but was subsequently able to gain approval of subsidized health-plan coverage. 
In one case, GAO falsely claimed that it was denied Medicaid and was able to obtain the subsidized health plan when in fact no Medicaid determination had been made at that time. For one, GAO was unable to enroll in Medicaid, in California, because GAO declined to provide a Social Security number. According to California officials, the state marketplace requires a Social Security number or taxpayer-identification number to process applications. For both sets of testing, GAO submitted fictitious documentation as part of the application and enrollment process. According to officials from the Centers for Medicare & Medicaid Services (CMS), California, Kentucky, and North Dakota, the marketplace or Medicaid office inspects supporting documentation only for obvious signs of alteration. Thus, if the documentation submitted does not show such signs, it would not be questioned for authenticity.
As part of a Navy-wide infrastructure cost reduction initiative, the Navy is restructuring its shore establishment by consolidating installation management functions in areas with significant concentrations of Navy activities, such as San Diego, California; Jacksonville, Florida; and—for purposes of this report—the northeastern area of the United States. This initiative seeks to reduce management and support redundancies and duplications of effort and to eliminate unnecessary overhead. In doing so, a single commander is given responsibility for the management and oversight of naval shore installations within a specific geographic region. Other responsibilities will include providing base support services to Navy operating forces and other naval activities and tenant commands, as well as managing the funding associated with these services. According to officials at NSB New London, total base support funding for the Northeast region is estimated to be between $165 million and $185 million in fiscal year 1999. Creation of a separate command to manage and oversee base support functions at Navy shore installations is expected to provide a more dedicated and expanded regionwide focus on those activities in an effort to reduce overhead costs and achieve increased efficiencies totaling millions of dollars. The establishment of the Northeast command will bring the Navy's total number of regional naval coordinators worldwide to 13. In recommending the establishment of the new command, CINCLANTFLT is seeking to relieve the Commander, Submarine Group Two, an operational commander at NSB New London, of the nonoperational duties associated with the regional coordinator role. Establishing a separate command headed by a flag rank officer (admiral) to oversee northeastern shore installations would be consistent with other CINCLANTFLT regional commands that exist in Norfolk, Virginia, and Jacksonville, Florida.
According to Navy officials, these regional commands will support Navy efforts to eliminate redundant management structures, reduce infrastructure costs, and foster regional service delivery of installation management support. CINCLANTFLT officials estimated that the staff of the command would consist of a flag rank commanding officer, 27 other military personnel, and 27 civilian employees. The existing regional coordination staff at NSB New London consists of 9 military and 15 civilian personnel. CINCLANTFLT’s recommendation to establish the new command at NWS Earle is pending approval by the Chief of Naval Operations and the Secretary of the Navy. In reviewing CINCLANTFLT’s recommendation of NWS Earle for the new command headquarters, we could not be certain to what extent the Navy had fully considered its stated criteria to evaluate or compare alternate sites because documentation to support the Navy’s decision was limited. Additionally, costs associated with relocating regional coordination functions and staff from NSB New London to NWS Earle and operating from that site may be greater than those estimated by the Navy. Navy Instruction 5450.169D, regarding the establishment, disestablishment, or modification of Navy shore activities, states that several factors should be considered, including whether (1) an activity is currently performing the mission or an existing activity in the same geographical area can assume the mission, (2) an existing activity of the same type can perform the mission, and (3) the need for the activity is sufficient to offset the cost of establishing a separate activity. Additionally, between October 1997 and March 1998, the Navy stated in correspondence with senators and congressmen from Connecticut and Rhode Island that several factors were being considered in selecting a location for the command. 
These factors included the availability of office space, communications, and suitable family housing; proximity to the regional offices of other federal government agencies; access to transportation; operational and military support; relocation and alteration costs; and rent costs. Navy officials told us that they considered the criteria stated in the Navy instruction and in their congressional correspondence in evaluating and comparing alternate sites. However, we question the extent of this analysis. While Navy guidance does not specifically direct the preparation of cost comparisons for prospective sites, it does suggest that the Navy seek economy and efficiency in establishing new activities, which implies a need to compare costs among prospective sites. CINCLANTFLT officials told us that the site selection process began with their gathering some estimated cost data for prospective sites with the intent of performing a cost comparison. However, they were informed early in the process that CINCLANTFLT had already decided to locate the new command at NWS Earle because that was the desired location. Consequently, according to these officials, no further data were developed to estimate and compare the costs associated with establishing the command at sites other than NWS Earle. Our review of available documentation and discussions with Navy officials indicate that CINCLANTFLT’s recommendation to establish the Commander, Navy Region Northeast, at NWS Earle was based primarily on placing the command in closer proximity to New York City. CINCLANTFLT’s decision paper, referred to as a Fact and Justification Sheet, cited a number of needs and benefits of such a placement, focusing primarily on the need for Navy flag rank representation in the New York-New Jersey area.
Specifically, the justification highlighted activities such as the importance of acting as the resident Navy spokesperson; interacting on the Navy’s behalf with major corporations, labor unions, and other organizations associated with maritime commerce; and serving as the Navy’s official representative for major events such as visiting foreign dignitaries. CINCLANTFLT did perform analyses sufficient to estimate the cost to establish the command at NWS Earle at $1.89 million. We did not, however, independently verify these cost estimates. CINCLANTFLT’s analyses included cost estimates for renovation of flag and officer office space; displacement of the current occupants of this office space; moving office furniture, supplies, and equipment; civilian and military permanent change of station costs; civilian severance pay for those who do not relocate; and a recurring increase in travel expenses due to the location of NWS Earle in relation to its subordinate commands (see table 1). Detailed cost estimates to establish the command were not documented for other potential sites. CINCLANTFLT’s Fact and Justification Sheet acknowledges that no monetary or manpower savings have been identified with relocating the Commander, Navy Region Northeast, to NWS Earle. Our analysis shows potential for the Navy’s one-time cost estimates to be understated. For example: CINCLANTFLT officials estimated it would cost approximately $75,000 to renovate office space to accommodate the commander and his/her staff. However, officials at NWS Earle stated that this renovation cost estimate could increase to as much as $130,000 if the decision were made to install central versus window air conditioning. While CINCLANTFLT estimated that travel expenses would increase by about $75,000 per year for travel to other subordinate commands, other information indicates this estimate may be understated.
Officials at NSB New London, where the core staff for the new command are currently stationed, provided their analysis suggesting that these costs could increase by about $100,000 to $200,000 annually. We did not independently verify this analysis. However, establishing the command at NWS Earle would place it in the southernmost area of the region, making it less accessible to other installations in the region than it would be from its current location at NSB New London or from Newport. For example, travel from NWS Earle to other areas of the region would require greater use of air travel than from NSB New London or Newport, where cars and car pools are more readily used to reach other facilities. Figure 1 shows the approximate locations of Navy concentration areas in the northeast region. CINCLANTFLT’s Fact and Justification Sheet also does not reflect cost estimates for renovating the on-base housing at NWS Earle to accommodate the flag officer. According to NWS Earle officials, it would cost at least $20,000 to renovate the proposed admiral’s quarters to meet the Navy housing standards for flag officer quarters if the admiral chose to live on base. The Navy’s cost estimates also do not include the civilian personnel payroll increase that would result from the move. Due to the location of NWS Earle, each civilian employee would be entitled to a salary increase to reflect the locality pay for that area. Based on the U.S. Office of Personnel Management 1998 General Schedule, locality pay rates are 9.76 percent for NWS Earle and 9.13 percent for NSB New London; the rate for Newport is 5.4 percent by comparison. In examining mission and support requirements of the new command, we found that the NWS Earle location raises two basic operational limitations when compared to the current location at NSB New London or the facilities at Newport.
These limitations are (1) increased travel time and costs associated with operating from that location and (2) less adequate existing facility infrastructure to support the new headquarters, at least relative to the NSB New London and Newport locations. According to CINCLANTFLT’s Fact and Justification Sheet, the proposed mission of the Commander, Navy Region Northeast, would primarily involve management and oversight of the widely dispersed naval shore activities in the northeast region. CINCLANTFLT officials expect that travel expenses would increase over what they would be in a more central location. According to NSB New London officials, the mission requires frequent travel to and from the naval activities within the region (see fig. 1 and app. I). Because NWS Earle is located in the southernmost part of the northeast region, these officials stated that there would likely be a greater reliance on travel by air than by car, where several persons could travel together at less cost. Our review of factors such as office space, housing, and conference/training facilities at the sites we visited shows that NWS Earle has the least existing infrastructure to support the new command’s requirements. We observed that the available infrastructure at NWS Earle is primarily suited to support its mission of receiving, storing, and distributing naval ordnance and has limited office, conference, and classroom space. As stated previously, placing the new command at NWS Earle would require displacing and relocating existing command staff and renovating other space to accommodate their relocation. Conversely, at NSB New London, the Navy would not incur any major renovation costs beyond the purchase and installation of additional modular office furniture to accommodate the increased number of staff.
We observed that the current headquarters building for the regional coordinator staff at NSB New London has sufficient vacant space on the first and third floors to accommodate the proposed expansion. Even if the Navy decides that the Commander, Navy Region Northeast, and the Commander, Submarine Group Two, would not occupy the same building, officials at NSB New London identified four other buildings on base that could accommodate the Commander, Navy Region Northeast. We also found that the Navy facilities and infrastructure at Newport would be adequate to support the command without major renovation costs. Additionally, NWS Earle does not have sufficient officer housing quarters available to accommodate an admiral and additional staff officers. The proposed staffing of the new command includes 17 officers, including the commanding officer, whereas the on-base family housing at NWS Earle includes 38 officer housing units, of which only 2 were vacant as of August 1998 because they were being renovated. Furthermore, according to officials at NWS Earle, none of these officer housing units meets the standards for a flag officer. Although renovations could be made to improve some officer housing units, officials at NWS Earle stated that it is more likely the admiral and his senior staff would choose to reside in quarters available to them at the Fort Monmouth Army Base, about 6 miles away. This latter option is already the housing of choice for some command staff officers currently stationed at NWS Earle. Conversely, at both NSB New London and Newport, there is sufficient housing space to accommodate the proposed command’s military staff. We observed that both of these bases have housing areas with sufficient space to accommodate both the numbers and grade levels of the command’s military staff.
As part of the regional coordination mission involving management and oversight of naval shore activities in the region, the command hosts frequent conferences and training seminars for personnel from other naval installations throughout the region. For example, during fiscal year 1998, about 20 to 50 personnel at a time attended training courses and conferences at NSB New London that related to regional activities such as the Navy’s commercial activities program, casualty assistance calls, information technology, facilities engineering, family advocacy and family services, and regional security. Officials at NWS Earle stated that the command building there would not include adequate conference and training facilities to accommodate these activities. We observed, for example, that the current command building at NWS Earle that would be used to house the new command has one conference room, which has sufficient space for a maximum of about 15 to 20 participants. Conversely, we observed that the facilities occupied by the regional coordinator staff at NSB New London currently have several large conference rooms and several other smaller meeting facilities that are sufficient to accommodate expanded requirements. Similarly, we observed that the building at Newport that would be used for the new regional command has sufficient conference and meeting rooms to accommodate the command’s anticipated requirements. While the CINCLANTFLT justification was based primarily on NWS Earle’s proximity to New York City, the desire for a flag rank officer at that location, and several other public relations-related factors, the high priority given to these criteria appears questionable when compared to the command’s core mission responsibilities. 
CINCLANTFLT’s Fact and Justification Sheet states that (1) NWS Earle is the only primary homeport for Navy ships on the East Coast without a flag officer and (2) there is a need for Navy flag officer representation in the New York-New Jersey area to act as the resident Navy spokesperson and to interact on the Navy’s behalf with major corporations, labor unions, other organizations associated with maritime commerce, and publishing and media concerns. It also states that the regional commander would serve as the official Navy representative for major events, visiting foreign dignitaries, and U.S. Navy and foreign ship port visits. The regional commander would serve on numerous area special purpose councils and respond to requirements for support functions and services in the New York City area arising from the large population and the Navy’s recruiting efforts in the area. Furthermore, the justification sheet states that there is a requirement for essential support functions and services such as major casualty assistance calls programs, extensive regional public affairs information services, and a large community service program in the New York-New Jersey area. While each of the justification points highlighted in the justification sheet has merit, available data indicate that these functions differ significantly from the command’s core responsibilities. These core responsibilities are more related to managing installation support services at the Navy’s bases and commands in the region and other important functions highlighted in the command’s draft Mission, Functions and Tasks Statement, such as providing primary resource support, management control, and technical support of assigned shore activities. In addition, according to regional coordination officials at NSB New London, flag presence has been required in the New York City area only on an average of about once every 2 months. 
CINCLANTFLT officials stated that flag presence has been requested in the New York City area more often, but they were unable to provide documentation to quantify their position. Nevertheless, in terms of increased proximity to New York City, NWS Earle is approximately 1-1/2 hours away by automobile. NSB New London is about 2 hours from New York City by automobile and is more centrally located in the northeast region. Therefore, it is not clear that NWS Earle provides a geographic advantage over other locations. Officials at NSB New London stated that they are performing many of the functions proposed for the new command. In this regard, CINCLANTFLT officially designated the Commander, Submarine Group Two, at NSB New London as the Naval Northeast Regional Coordinator in 1994. Some of the regional functions that NSB New London staff have been performing consist of facilities management, regional environmental coordination, disaster preparedness, casualty assistance coordination, family advocacy programs, regional security, and coordination of regional port visits. Additionally, NSB New London staff have recently begun a number of regional projects, including public affairs office consolidation; housing studies; supply coalition; and a Joint Inter-service Regional Support Group, which encompasses support for military facilities in Connecticut, Rhode Island, and Massachusetts. The establishment of a separate Commander, Navy Region Northeast, will also expand the responsibilities of the regional coordinator to include, for example, managing the funds for the base operations support functions at the naval shore installations in the region. As previously noted, while the Navy has emphasized the establishment of a new command to oversee base support operations in the region, officials at NSB New London stated that they are currently responsible for many of the functions proposed for the new command. 
According to these officials, moving the command to NWS Earle could temporarily disrupt the core base operations functions already established if, as these officials suggest, many of the current employees choose not to relocate to NWS Earle. Moreover, we noted that by moving the new command away from NSB New London, the Navy would be separating the command from other regional activities currently located at NSB New London, including the Regional Supply Coalition and the Regional Emergency Command Center. We recognize that site selection decisions are ultimately a management prerogative based upon weighing relevant factors. At the same time, where policy guidance or other stipulated criteria are established to facilitate decision-making, we believe it is important for decisionmakers to ensure that such guidance and criteria are followed and documented to support the basis for their decisions. It is not clear, however, to what extent CINCLANTFLT’s site selection process was conducted in accordance with Navy guidance and other stipulated criteria regarding the current site selection recommendation. Further, the justification cited for recommending NWS Earle over the current location at NSB New London, or other locations, appears to have a number of weaknesses in the cost estimates that were made and consideration of nonmonetary benefits such as infrastructure deficiencies at NWS Earle and command travel time gains. We recommend that the Secretary of Defense require the Secretary of the Navy to review and more fully assess the prospective headquarters location for the Commander, Navy Region Northeast, against the Navy’s decision-making criteria, taking into consideration issues and questions raised in this report. 
In written comments on a draft of this report, the Navy concurred with our recommendation and stated that it will review and reconsider all pertinent facts, including the issues and questions raised in this report, and that CINCLANTFLT will then resubmit a fact and justification package on the establishment of a Northeast Region Commander. The Navy also stated that CINCLANTFLT did follow its published guidance on establishment of shore activities. It also noted that, although cost is an important consideration, it is not the only factor evaluated in the decision-making process. We agree that cost is not the only factor. Our review of available documentation and discussions with Navy officials have indicated that the recommendation to select NWS Earle was based primarily on placing the command in closer proximity to New York City. Less attention was given to other fundamental factors such as operational effectiveness, costs, and core mission responsibilities. Our draft report raised questions about the extent to which the Navy had followed its own criteria for establishing shore activities, since the Navy had limited documentation to support its analyses and we could not be certain that it met its stipulated requirements. We modified our report to clarify this issue. The full text of the Navy’s comments from the Office of the Chief of Naval Operations is presented in appendix II. To assess the process the Navy used for recommending a site for the Commander, Navy Region Northeast, we reviewed available cost estimate data gathered by staff within the office of the CINCLANTFLT. We did not, however, independently verify the Navy’s cost estimates.
We also reviewed and analyzed CINCLANTFLT’s (1) Fact and Justification Sheet for the recommendation that the command relocate to NWS Earle, New Jersey; (2) facilities data gathered during the decision-making process; (3) Navy Instruction 5450.169D regarding the establishment of shore activities; (4) Instruction 5450.94 regarding the proposed mission, functions, and tasks statement for the Commander, Navy Region Northeast; and (5) other related documentation. We visited and interviewed officials at the Commander, Submarine Group Two, at the NSB New London in Groton, Connecticut, who are currently responsible for regional coordination among CINCLANTFLT activities in the northeast region. We compared the current mission and staffing of the regional coordination office to the proposed mission, functions, and tasks statement for the Commander, Navy Region Northeast. We discussed with these officials the facilities, infrastructure, and base support available to accommodate the new command. We also visited and interviewed officials at NWS Earle, New Jersey, and the naval base at Newport, Rhode Island, to determine how the command would be accommodated if relocated to these locations. We selected these bases for our review because NWS Earle is the base that CINCLANTFLT has recommended as the site for the Commander, Navy Region Northeast, and the naval facilities at Newport are centrally located within the northeast region. We discussed with these officials the facilities, infrastructure, and base support available to accommodate the new command. We met with senior CINCLANTFLT officials on several occasions to brief them on the results of our work. We have incorporated their comments, as appropriate, to enhance the technical accuracy and completeness of our report. We conducted our review from April to August 1998 in accordance with generally accepted government auditing standards. 
We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committees on Armed Services and on Appropriations and the House Committees on National Security and on Appropriations; the Director, Office of Management and Budget; and the Secretaries of Defense and the Navy. Copies will also be made available to others upon request. Please contact me on (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. David A. Schmitt, Evaluator-in-Charge; John R. Beauchamp, Evaluator; Patricia F. Blowe, Evaluator.
This section describes the following general aspects of EPA’s management of discretionary grants: (1) types of grants awarded by EPA; (2) EPA’s competition policy and grants management plan; (3) new discretionary grant awards; and (4) amendments to discretionary grant awards. EPA generally awards three types of grants that are authorized by statutes and regulations. Formula grants. EPA awards formula grants noncompetitively to states in amounts based on formulas prescribed by law to support water infrastructure projects, among other things. For example, EPA awards formula grants from the Clean Water and Drinking Water State Revolving Funds to support water treatment facility construction and improvements to drinking water systems, such as pipelines and drinking water filtration plants. Categorical grants. EPA generally awards categorical grants—which it also refers to as continuing environmental program grants—noncompetitively, mostly to states and Indian tribes to operate environmental programs that they are authorized by statute to implement. For example, under the Clean Water Act, states and tribes can establish and operate programs for the prevention and control of surface water and groundwater pollution. EPA determines the amount each grantee receives for a categorical grant on the basis of agency-developed formulas or program-specific factors. Discretionary grants. EPA awards discretionary grants—competitively or noncompetitively—to eligible applicants for specific projects, with program and regional offices selecting grantees and determining dollar amounts. Also, for some discretionary grants, EPA negotiates work plans, which include estimated time frames and dollar amounts for activities under the grant. EPA awards these grants for a variety of activities, such as environmental research, training, education programs, and brownfields cleanup. The respective grant programs under each program and regional office generally have varied focuses. 
According to OGD officials, EPA has historically held roughly 100 to 125 discretionary grant competitions annually. Appendix III lists EPA’s 67 active discretionary grant programs, including the program or regional office responsible for managing each one. EPA’s competition policy establishes parameters for the competition of discretionary grants. The competition policy states that it is EPA policy to promote competition to the maximum extent practicable in the award of grants. Further, it states that EPA policy requires that the competitive process be fair and impartial, that all applicants be evaluated only on the criteria stated in the grant announcement, and that no applicant receive an unfair competitive advantage. In 2002, EPA developed its first competition policy, which created OGD’s Grants Competition Advocate, the senior official responsible for administering the policy, overseeing implementation of and compliance with it, and issuing guidance for implementing it. EPA made substantial revisions to the policy in 2005, including establishing detailed justifications for awarding grants noncompetitively, and has continued to update and revise it periodically as necessary. In 2003, EPA issued its first grants management plan, which included goals such as strengthening grants oversight and promoting competition in the award of grants. The plan established a variety of performance targets, including competitively awarding at least 85 percent of new awards subject to EPA’s competition policy annually by 2005. In a 2009 update to the plan, EPA modified the target to competitively award at least 90 percent of new awards or dollars subject to the competition policy annually, and this performance measure remained in the 2016 plan update. EPA reports progress on this target annually in its agency financial report. 
The management of EPA discretionary grants is subject—as are all stages of the federal grants life cycle—to a range of requirements derived from a combination of Office of Management and Budget (OMB) guidance, agency regulations, and program-specific statutes. OMB is responsible for developing government-wide policies to ensure that grants are managed properly. Until recently, OMB’s policies were published as guidance in various circulars that grant-making agencies would adopt into their own regulations. In December 2013, OMB consolidated its grants management circulars into a single document, Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards (known as the Uniform Guidance), to streamline its guidance, promote consistency among grantees, and reduce administrative burden on nonfederal entities. In December 2014, along with EPA and other federal grant-making agencies, OMB issued a joint interim final rule implementing the Uniform Guidance for new awards made on or after December 26, 2014. Under its competition policy, EPA generally awards new discretionary grants competitively in two ways. Open competition. Open competitions are available to all potentially eligible applicants identified in the Catalog of Federal Domestic Assistance (CFDA) description for a particular grant program. According to the competition policy, open competition is EPA’s preferred method of competition and is required when the estimated total amount of awards under a competition—regardless of the amount of any individual awards—exceeds $100,000, unless the grant is an exception to or an exemption from competition (discussed below). Simplified competition. Simplified competitions are available to a subset of the potentially eligible applicants identified in the CFDA description, as long as EPA determines that they are capable and qualified to successfully perform the project. 
Simplified competition may only be used when the CFDA description indicates that EPA may limit eligibility to compete to a number or subset of eligible applicants. The competition policy states that, when the estimated total amount expected to be awarded does not exceed $100,000, open competition is preferred, but simplified competition is permitted. According to the competition policy, simplified competition is intended to reduce administrative costs, promote efficiency in competitions, and minimize burdens for program and regional offices and applicants in conducting and competing for grants for which a limited amount of funding is available. Under its competition policy, EPA may award new discretionary grants noncompetitively as exceptions to competition under any one of the following circumstances: when an award is $25,000 or less; when a program or regional office demonstrates that there is only one responsible source that has the capability to successfully perform a project because of such reasons as possessing proprietary data or unique or specialized equipment or facilities; when an award cannot be delayed because of unusual and compelling urgency or the interests of national security; when an award is to fund an unsolicited proposal that is unique or innovative, has been independently originated and developed by the applicant, was prepared without government direction or involvement, and does not resemble the substance of a pending or contemplated competitive grant; or when EPA determines that competition is not in the public interest. Many EPA grant programs are exempt from competition and, thus, are not subject to the competition policy. 
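The rules above for choosing how a new discretionary grant is competed reduce to a small decision tree. The following is an illustrative sketch only, assuming a simplified model of the policy; the function and flag names are inventions for illustration, not an EPA system, and only the dollar thresholds come from the policy as described.

```python
# Illustrative sketch of the competition-method rules described above.
# Names and flags are assumptions; thresholds come from the policy text.

def competition_method(estimated_total, cfda_allows_subset=False,
                       exception=None, exempt=False):
    """Return how a new discretionary grant opportunity would be competed."""
    if exempt:
        # Exempt programs are not subject to the competition policy at all.
        return "exempt"
    if exception is not None:
        # e.g. "small_award" ($25,000 or less), "sole_source", "urgency",
        # "unsolicited", or "public_interest"
        return "noncompetitive (exception)"
    if estimated_total > 100_000:
        # Open competition is required above $100,000 in total expected awards.
        return "open"
    # At or below $100,000, open competition is preferred, but simplified
    # competition is permitted if the CFDA description allows limiting
    # eligibility to a subset of applicants.
    return "simplified" if cfda_allows_subset else "open"
```

For example, a $250,000 opportunity would have to be competed openly, while an $80,000 opportunity whose CFDA description allows a limited field could be competed via simplified competition.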
Exemptions from competition are made for the following groups of grant programs, which include some discretionary grants: grants to states, interstate agencies, local agencies, Indian tribes, intertribal consortia, and other eligible grantees under a variety of programs, including the Leaking Underground Storage Tank Trust Fund Cooperative Agreements, Oil Spill Trust Fund grants, and awards under any program that has a statutory or regulatory allotment or allocation funding formula; other programs available by statute, appropriation act, or regulation only to Indian tribes and intertribal consortia; grants required or authorized by law, executive order, or international agreement to be made to an identified grantee(s) in order to perform a specific project, and congressional earmarks to an identified grantee(s) to the extent consistent with any applicable executive orders and any other government-wide laws or guidance relating to earmarks; Senior Environmental Employment Program Cooperative Agreements; grants to foreign governments and to United Nations agencies and similar international organizations for international environmental activities; and other programs if approved by the Assistant Administrator for the Office of Administration and Resources Management. Appendix III provides information on the EPA discretionary grant programs that have an exemption from competition. EPA’s competition policy describes provisions for making amendments to discretionary grants depending in part on whether the grant was disbursed over time and also whether it was subject to the competition policy. EPA generally makes four types of amendments to discretionary grants. No-cost amendments. No-cost amendments are for time extensions or to authorize spending unexpended funds on additional activities within the scope of the original grant. No-cost amendments do not provide additional dollars and are not required to be awarded competitively. Incremental amendments. 
Incremental amendments are for funding a grant over time, instead of funding the grant in a one-time lump sum. Incremental amendments are not required to be awarded competitively, as long as the work is within the scope of the original grant. Incremental amendments can only be funded up to the approved amount of the original grant, which may or may not have been awarded competitively. Supplemental amendments. Supplemental amendments are for additional dollars for unanticipated cost increases or for added work to grants awarded competitively or as exceptions to competition. Supplemental amendments for unanticipated cost increases are not required to be awarded competitively if they do not involve added work. Supplemental amendments for added work up to $25,000 (in the aggregate per grant) are not required to be awarded competitively if the work is within the scope of the original grant. Supplemental amendments for added work exceeding $25,000 (in the aggregate per grant) must be awarded competitively, unless the program or regional office demonstrates that the work is within scope of the original grant and only the grantee can perform it in a cost-effective manner. Amendments to exempt awards. Amendments to exempt awards are for any amendment to a grant awarded under an exemption from competition. While amendments to exempt awards may serve purposes similar to those of incremental and supplemental amendments, they constitute a separate category of amendments. Like the original award, amendments to exempt awards are not subject to the competition policy. EPA manages competition for its discretionary grants through a process established by its competition policy and implemented by its program and regional offices, which fund activities related to their own programmatic focuses. 
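The amendment rules above can be sketched as simple decision logic. This is a hedged sketch, not EPA terminology or an EPA system: the amendment-type strings and parameter names are illustrative shorthand, and only the $25,000 aggregate threshold and the scope/sole-source conditions come from the policy as described.

```python
# Hedged sketch of the amendment competition rules described above; the type
# names and parameters are illustrative shorthand, not EPA terminology.

def amendment_requires_competition(kind, added_work_total=0, within_scope=True,
                                   sole_source_justified=False):
    """Return True if the amendment must be awarded competitively."""
    if kind in ("no_cost", "incremental", "exempt"):
        # No-cost and incremental amendments, and any amendment to an exempt
        # award, are not subject to competition.
        return False
    if kind == "supplemental":
        if added_work_total == 0:
            return False  # unanticipated cost increases with no added work
        if within_scope and added_work_total <= 25_000:
            return False  # added work up to $25,000 aggregate, within scope
        # Added work over $25,000 must be competed unless the office shows the
        # work is within scope and only the grantee can do it cost-effectively.
        return not (within_scope and sole_source_justified)
    raise ValueError(f"unknown amendment type: {kind!r}")
```

So, for instance, a supplemental amendment adding $30,000 of in-scope work would have to be competed unless the office documents a sole-source justification.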
Under the competition policy, program and regional offices are to advertise discretionary grant opportunities through announcements on Grants.gov and other methods as appropriate; evaluate all applications against eligibility criteria and all eligible applications against evaluation criteria; and award grants. Under the competition policy, the Grants Competition Advocate’s Office in OGD is responsible for providing ongoing guidance and oversight for program and regional offices. Under EPA’s competition policy, program and regional offices are to advertise open competition opportunities on both EPA’s website and Grants.gov. The competition policy allows program and regional offices to also advertise discretionary grant opportunities using other methods reasonably calculated to ensure the notification of all potentially eligible applicants, including newsletters, trade journals, newspapers, and email lists. While open discretionary grant opportunities can be found on both Grants.gov and EPA’s website, initial applications for competitively awarded discretionary grants must be submitted using Grants.gov, according to EPA policy. Eligible applicants may apply for and receive multiple discretionary grant awards unless prohibited by the authorizing law for a particular grant or the terms of a particular grant opportunity. According to the competition policy, simplified competition opportunities are not advertised on Grants.gov and, instead, must be issued directly to the competing applicants by the relevant program or regional office. According to the competition policy, if one award is expected, the simplified competition opportunity must be issued to at least three eligible organizations. If multiple awards are expected, simplified competition opportunities must be issued to at least twice as many eligible organizations as are expected to receive awards. Any organization expressing an interest must be allowed to participate in a simplified competition opportunity. 
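The field-size rule for simplified competitions stated above is mechanical enough to express directly. A minimal sketch, assuming the rule as described; the function name is an invention for illustration:

```python
# Minimal sketch of the field-size rule for simplified competitions stated in
# the competition policy; the function name is illustrative.

def min_competing_applicants(expected_awards):
    """Minimum number of eligible organizations an opportunity must be issued to."""
    if expected_awards < 1:
        raise ValueError("at least one award must be expected")
    if expected_awards == 1:
        # One expected award: issue to at least three eligible organizations.
        return 3
    # Multiple expected awards: at least twice as many organizations as awards.
    return 2 * expected_awards
```

Note that these are minimums only: any organization expressing interest must also be allowed to participate.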
Program and regional offices must document how they determine the field of competing applicants and, if conducting multiple simplified competitions, must vary the field of competing applicants for each opportunity. The competition policy states that program and regional offices are responsible for preparing all announcements for open and simplified competition opportunities in accordance with the Uniform Guidance, other OMB guidance, and guidance from the Grants Competition Advocate. According to the competition policy and the Uniform Guidance, all announcements must include the following eight sections:
1. funding opportunity description, including the programmatic and technical description with authorizing statutes and regulations and clear examples of eligible activities;
2. award information, including information about the expected number of awards and award amounts;
3. eligibility information, including information identifying the applicants eligible to compete for awards and specific eligibility criteria;
4. application and submission information, including a description of the required content and format of the application and instructions on how to apply;
5. application review information, including specific ranking and evaluation criteria and the relative importance assigned to them, such as relative points, weights, percentages, or other means used to distinguish them;
6. award administration information, including notice to applicants of EPA’s disputes procedures and other pertinent administrative information;
7. agency contacts, including a point of contact for answering questions about the announcement; and
8. other information, including any additional information that may be helpful to applicants.
Under EPA’s competition policy, program and regional offices are to use an objective and unbiased process for reviewing competitive discretionary grant applications and selecting applicants for awards. 
According to the competition policy, this process requires a comprehensive, impartial, and objective examination of applications based on criteria in the announcement by persons who do not have conflicts of interest and who are knowledgeable in the field for which awards are being made. To achieve such an examination, the competition policy established a two-step process to evaluate competitive discretionary grant applications: (1) review and assess all applications against eligibility criteria, and (2) review and assess eligible applications for technical merit against evaluation criteria. All reviewers must sign conflict of interest statements. The competition policy states that applications typically must meet the eligibility criteria before they are reviewed for merit under the evaluation criteria. Eligibility criteria must be specified in the announcement and, according to EPA documents, typically include whether the applicant meets criteria specified in a grant program’s authorizing statutes or regulations and the CFDA description. According to EPA documents, eligibility criteria also typically include whether the application addresses program priorities, requests an allowed amount, complies with instructions, meets geographical restrictions, and is submitted on time. According to OGD officials, these eligibility criteria are largely yes/no determinations. Eligible applications are to be reviewed for technical merit against an announcement’s evaluation criteria. These criteria vary by competition opportunity but typically include project activities and methods, past performance, and environmental results, according to EPA documents. According to the competition policy, the evaluation criteria must be tailored to the nature of the projects being awarded competitively, represent key areas of importance and emphasis to be considered in the selection process, and support meaningful and fair comparisons of competing applicants. 
To ensure that applications are fairly and objectively assessed against the evaluation criteria, program and regional offices must use a scoring method that assigns numerical weights or points, descriptive ratings (e.g., acceptable, good, outstanding), a low-medium-high rating system, or something similar to each of the evaluation criteria, which may then be used to determine a total, average, or consensus score for each application. Evaluation criteria reviewers make up a review panel, and each reviewer must complete a scoresheet and include comments explaining reasons for the score assigned. EPA divides responsibility for awarding discretionary grants among different officials—a selection, approval, and award official—to ensure independence and provide checks and balances. Following the process for evaluating applications, the review panel provides the selection official a list of eligible applications ranked according to their scores. The selection official then makes a funding recommendation—multiple applications may be recommended for funding—that is based on the scores assigned and other factors, as allowed under the terms of the announcement. According to OGD officials, the selection official’s primary responsibility is to assure that the applications selected for award are for eligible projects with technical merit, based on the terms of the announcement. According to the competition policy, if the selection official selects an application out of the ranked order, the program or regional office must document the basis for that decision. The competition policy states that the selection official cannot depart from the rankings of the review panel on the basis of undisclosed selection criteria, personal preference, or information that is not reasonably related to the evaluation factors in the announcement. 
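The weighted scoring and ranking approach described above might look like the following. This is a hypothetical illustration under the assumption of a weighted 100-point scale (the approach OGD officials say most offices use); the criteria names, weights, and ratings here are invented examples, not EPA's actual criteria.

```python
# Hypothetical illustration of a weighted 100-point scoring scheme; the
# criteria, weights, and ratings below are invented examples.

def score_application(ratings, weights):
    """Weighted total on a 100-point scale; each rating is between 0.0 and 1.0."""
    assert set(ratings) == set(weights) and sum(weights.values()) == 100
    return sum(ratings[c] * weights[c] for c in weights)

weights = {"project approach": 40, "past performance": 30,
           "environmental results": 30}
applications = {
    "applicant A": {"project approach": 0.9, "past performance": 0.8,
                    "environmental results": 0.7},
    "applicant B": {"project approach": 0.6, "past performance": 0.9,
                    "environmental results": 0.8},
}
# Rank eligible applications by score, highest first, for the selection official.
ranked = sorted(applications,
                key=lambda a: score_application(applications[a], weights),
                reverse=True)
```

In this toy example, applicant A scores 81 and applicant B scores 75, so the panel's ranked list would put A first; the selection official could still depart from that order, but only for documented reasons tied to the announcement's terms.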
The approval official, a senior manager in the respective program or regional office, is responsible for signing the funding recommendation and may be the same person as the selection official. The competition policy directs the selection official to prepare a selection rationale document—to be included in or attached to the funding recommendation—that includes a summary of the competition, a discussion of how the recommended applications ranked in comparison with other applications, and an explanation of why the applications were selected to receive an award. According to EPA documents, after reviewing the funding recommendation, the award official has the authority to obligate funds and make awards. Depending on the office, the officials serving in the capacity of the selection and award officials can vary; however, the selection official generally has subject-matter knowledge of a particular grant program, and the award official generally has knowledge of grants management. According to EPA documents we reviewed and officials we spoke with, before an award is made, the program or regional office conducts a final review that includes verification of applicants’ eligibility and assurance that all award requirements are met. Once the award official obligates the funds, all awards enter a 5-day congressional waiting period, during which EPA notifies the applicants’ respective congressional delegations so they have an opportunity to track the awards, according to OGD officials. Awards of $1 million or more include an additional 5-day White House notification before the funds are obligated. Following the waiting periods, EPA sends an award agreement to the grantee electronically, at which time the grantee can begin using the funds. The competition policy includes procedures for providing applicants timely feedback about the process for evaluating applications and awarding grants. 
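The notification waiting periods described above can be summarized in a short sketch. This simplification treats the two periods as cumulative for illustration; the function name is an invention, and only the 5-day periods and the $1 million threshold come from the text.

```python
# Simple sketch of the award notification waiting periods described above;
# names are illustrative, and the two periods are treated as cumulative.

def notification_days(award_amount):
    """Total waiting-period days before the award agreement is sent."""
    days = 5  # 5-day congressional notification applies to all awards
    if award_amount >= 1_000_000:
        # Awards of $1 million or more add a 5-day White House notification.
        days += 5
    return days
```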
These procedures are also aimed at providing an efficient, effective, and meaningful dispute resolution process for certain competition determinations. The policy states that disputes and disagreements must be resolved at the lowest level possible, and it establishes three key opportunities to do so. Notification. Within 15 days of an ineligibility determination or a negative selection decision, program and regional offices must provide applicants with a written explanation of why they were either determined ineligible or not selected. The notification must indicate that applicants may request a debriefing on the basis for these determinations. Debriefing. Debriefings may be oral (e.g., face-to-face or by telephone) or in writing, although the competition policy states that oral debriefings are strongly preferred because they provide a better opportunity to resolve issues quickly. During debriefings, program and regional offices may answer questions and provide applicants with information on the strengths and weaknesses of their applications and the basis for their scores. All debriefings must be conducted promptly so that applicants have an opportunity to either re-enter the competition if they successfully challenge the determination during the debriefing or file a written dispute. Filing a dispute. After receiving a debriefing, applicants may file a written dispute with a designated Grants Competition Disputes Decision Official, who cannot be involved in the competition process and must be from outside the program office conducting the competition. Disputes are required to be considered only when they challenge a determination that the application (1) is ineligible based on the applicable statute, regulation, or announcement requirements or (2) did not meet eligibility criteria in the announcement. 
After consulting with the Grants Competition Advocate and with the concurrence of EPA’s Office of General Counsel or regional counsel, as appropriate, the Grants Competition Disputes Decision Official is to issue a written decision on the dispute, which constitutes the final agency action. Program and regional offices implement EPA’s process for advertising, evaluating, and awarding discretionary grants according to the unique circumstances of each grant program. While the competition policy states that open grant opportunity announcements must be advertised on Grants.gov and EPA’s website and must include key information required by OMB guidance and EPA policy, such as expected award amounts and eligibility and evaluation criteria, the actual content of these announcements is the responsibility of the respective program or regional office that issues them, according to OGD officials. In addition to advertising grant opportunities on Grants.gov and EPA’s website, program and regional offices also advertise some open opportunities using supplemental methods. OGD officials stated that program and regional offices have discretion over whether to use other supplemental methods to advertise grant opportunities. For example, 7 of the 12 active grant opportunities available on Grants.gov on April 27, 2016, and prepared by nine different program and regional offices were advertised using supplemental methods, according to OGD officials. Three of these were advertised via a combination of webinars, press releases, and other regional outreach; three others via a listserv in combination with Twitter, email groups, or newsletter notifications; and the seventh via an email announcing the opportunity to existing grantees. According to the competition policy, program and regional offices may identify applicants for simplified competition on the basis of prior history and experience with the applicant or expressions of interest by potentially eligible applicants. 
OGD officials told us that program and regional offices rarely use simplified competitions because EPA’s preferred method is open competition and the administrative work in preparing an announcement for simplified competition is comparable to that for open competitions. However, OGD officials said the review process for simplified competitions may be shorter than that for open competitions because it involves fewer applications to review. Exceptions to competition are not advertised, and program and regional offices are responsible for determining whether to make exceptions to competition on an award-by-award basis. The competition policy states that program and regional offices must provide written justification for exceptions to competition, except those for $25,000 or less, and that the justification must contain sufficient facts and rationale, including statutory or regulatory authority for the award. Depending on the type of exception, the justification is to be approved in writing by the lead agency official responsible for a particular grant, or the lead agency official’s designee. According to the competition policy, the Grants Competition Advocate is responsible for approving justifications for several, but not all, types of exceptions. Program and regional offices may customize EPA’s two-step process for evaluating applications. The competition policy directs program and regional offices, in most cases, to establish a panel of reviewers for evaluating applications and specifies that reviewers must independently review applications in accordance with the criteria stated in the announcement. OGD officials stated that program and regional offices typically have one person perform the eligibility review, in consultation with EPA legal staff if necessary, and a panel of different people reviews applications against the evaluation criteria. 
These officials said that the eligibility reviewer may be someone from the program or regional office but that evaluation panels are usually composed of technical and subject-matter experts who are typically EPA staff, although some programs may use other federal or nonfederal reviewers. According to OGD guidance, program and regional offices are to prepare reviewer instructions and brief reviewers on their responsibilities, including providing guidance on the scoring process so that all reviewers are operating under a common framework. OGD officials told us that program and regional offices have flexibility to design the scoring approach for evaluation criteria they believe is best suited for their competition opportunity. They stated that, although most offices use a weighted 100-point scale, some use other approaches. For example, ORD uses an external peer-review process to evaluate eligible applications, and the scores are based on descriptive ratings (e.g., poor, fair, good, very good, excellent), which are then used to determine applications to forward to an internal EPA review panel. According to EPA documents, the agency occasionally receives a group of applications that all receive low scores; in these cases, EPA may not make any awards because, according to OGD officials, there are no proposals worth funding. In May 2006, we reported that, before 2002, EPA did not extensively award grants competitively or provide widespread notification of upcoming grant opportunities. We further reported that the 2002 competition policy represented a major cultural shift for EPA managers and staff, requiring EPA staff to take a more planned, rigorous approach to awarding grants. 
OGD officials told us that creating and implementing the agency’s competition policy in 2002, continuing to update the policy, and creating the Grants Competition Advocate were several steps taken to improve EPA’s grants competition process in response to past congressional reviews and assessments of the process by OMB, EPA’s Office of Inspector General, and us. According to these officials and EPA documents, other steps included developing EPA’s competition performance targets and substantially revising the competition policy in 2005, for example by imposing more rigorous review for exceptions to competition and enhancing the necessary documentation staff had to submit. The competition policy allowed for the establishment of the Grants Competition Advocate, the senior official responsible for interpreting and administering the competition policy and for providing ongoing guidance and oversight for program and regional offices. The Grants Competition Advocate oversees a small staff, and together they make up the Grants Competition Advocate’s Office. According to OGD officials and information on EPA’s website, the Grants Competition Advocate’s Office administers and oversees the competition policy and provides advice and support to program and regional offices on matters related to awarding grants competitively. OGD officials told us that creating the Grants Competition Advocate as a senior-level position was the agency’s key action under the original competition policy for improving its grants competition process in response to past reports and reviews. The competition policy states that program and regional offices are responsible for complying with guidance issued by the Grants Competition Advocate. 
Among other things, such guidance directs program and regional offices to document that individuals involved in the competition, evaluation, and selection of grants do not have any conflicts of interest; use exceptions to and exemptions from competition only under proper and appropriate circumstances and prepare adequate and defensible justifications for noncompetitive awards, many of which must be reviewed and approved by the Grants Competition Advocate; ensure that funding recommendations and award decisions contain selection justification documents required by the competition policy; and provide the Grants Competition Advocate with information, as requested, pertaining to competitions conducted. According to EPA documents we reviewed and OGD officials we spoke with, the Grants Competition Advocate’s Office provides support to program and regional offices in several ways. Training and guidance. The competition policy directs the Grants Competition Advocate to coordinate training to help program and regional offices implement the policy and make recommendations and take actions necessary to maintain, facilitate, promote, and enhance the policy, such as by providing guidance. For example, according to OGD officials, the Grants Competition Advocate’s Office provides ongoing guidance for program and regional offices via training, intranet sites, group emails, and in-person consultations. This guidance includes an intranet checklist for preparing announcements that meet the competition policy and Uniform Guidance. According to OGD officials, EPA’s competitive discretionary grant announcements have become more consistent, reliable, and of better quality in recent years as program and regional offices have become more familiar with the guidance, including the checklist, and begun consulting the Grants Competition Advocate’s Office as they prepare new announcements. Announcement reviews. 
According to EPA documents and OGD officials and in accordance with the competition policy, the Grants Competition Advocate and the Office of General Counsel review and concur on all announcements for $1.5 million or more before they are posted, to ensure compliance with requirements and for quality control. In addition, every year, according to OGD officials, program and regional offices send the Grants Competition Advocate’s Office about 10 to 12 justifications for exceptions to competition for review or approval. Further, according to EPA, depending on workload and other considerations, the Grants Competition Advocate’s Office and agency attorneys review many announcements under $1.5 million. OGD officials stated that most EPA competitive discretionary grant announcements, and nearly every justification request for exceptions to competition, are reviewed to some extent by the Grants Competition Advocate’s Office or agency attorneys before they are made available to the public or finalized.
Effectiveness reviews. The Grants Competition Advocate’s Office conducts annual competition effectiveness reviews of a small sample of discretionary grant competitions to ensure that they were conducted in accordance with the competition policy, according to OGD officials. The office selects a single competition opportunity for review from every office that conducts competitions, alternating annually between headquarters and regional offices. According to officials from the Grants Competition Advocate’s Office, the main consideration in making each selection is to pick a competition opportunity for which awards have been made and that has not been reviewed recently. The officials stated that they also try to avoid picking competition opportunities with the same subject matter in consecutive years, but that this can be challenging when selecting competition opportunities from the regional offices because they generally offer fewer competition opportunities than headquarters offices. 
In its competition effectiveness reviews from fiscal years 2013 through 2015, EPA found that the competitions were generally being conducted in accordance with the competition policy and that most offices had made improvements, such as in ensuring reviewers documented their evaluations properly. EPA also made several recommendations in these reviews, such as that offices confirm all reviewers sign conflict-of-interest statements and that review panel chairs advise reviewers to provide detailed comments justifying their scores. The Grants Competition Advocate’s Office also provides support to applicants, according to EPA documents and OGD officials. OGD offers a website on understanding, managing, and applying for EPA grants that includes various applicant resources, such as guidance and training, including a tutorial on applying for grants. In addition, according to OGD officials, the Grants Competition Advocate’s Office conducts webinars quarterly and posts them online to explain EPA’s grant competition process and to answer questions from the public. OGD also offers an annual forecast to highlight competition opportunities of interest to certain community-based organizations, such as small organizations, according to EPA documents and officials. EPA generally followed its process for advertising grant opportunities for the 12 announcements we reviewed and for evaluating and selecting applications to fund for the 5 discretionary grant competition opportunities we reviewed. To assess how EPA has advertised grant opportunities, we selected all 12 active EPA grant announcements, prepared by nine different program and regional offices, that were available on Grants.gov on April 27, 2016, and checked the extent to which these announcements included elements that the competition policy and OGD’s checklist for preparing announcements direct them to include. 
In general, we found that the majority of the elements were included in each announcement, with a few discrepancies and minor errors, mostly involving elements located in the wrong place in the announcement. To assess how EPA has evaluated and selected applications to fund, we reviewed internal documentation for the eligibility and evaluation criteria reviews for a nongeneralizable sample of five discretionary grant competition opportunities—two opportunities managed by ORD and three opportunities managed by the Region 9 Office. Our review found complete documentation for key steps, including signed conflict-of-interest statements, reviewer instructions, eligibility reviews, reviewer scoresheets, and reviewer comments. In addition, the funding recommendations for each competition opportunity included such key information as a summary of the competition, a discussion of application rankings, and an explanation of why applications were selected for funding. In addition, to assess the prevalence of formal disputes over determinations resulting from EPA’s process, we reviewed the dispute decision matrix maintained by the Grants Competition Advocate’s Office, which includes summary information on all formal disputes. Overall, from May 2004 to March 2016, EPA received relatively few formal disputes over how its program and regional offices conducted grant competitions. According to OGD officials and our review of the matrix, of the thousands of applicants who submitted applications during this period, 61 filed formal disputes over eligibility or evaluation determinations; 10 of these disputes were sustained, at least in part. Over this period, most of the program and regional offices that conduct competitions and award grants received at least one formal dispute. 
According to OGD officials, EPA receives few disputes in part because program and regional offices take steps to explain EPA decisions during debriefings and resolve applicants’ issues before they ever reach the formal dispute phase. From fiscal years 2013 through 2015, EPA provided nearly $1.5 billion in discretionary grant dollars to about 2,000 unique grantees, including state governments, nonprofits, Indian tribes, state universities, and local governments, according to our analysis of EPA data. Of this total, $579 million was for new awards subject to the competition policy, and according to EPA, the agency met its annual performance target to competitively award at least 90 percent of these dollars or awards. EPA’s available information shows that the number of applications for discretionary grants fluctuates widely by competition opportunity. From fiscal years 2013 through 2015, EPA provided nearly $1.5 billion in discretionary grant dollars to a variety of grantees, including grantees in all 50 states, according to our analysis of EPA data. State governments received the largest amount (28 percent), with nonprofit organizations (18 percent), Indian tribes (14 percent), state universities (13 percent), and local governments (11 percent) also receiving substantial amounts of discretionary grant dollars. Figure 1 shows the percentages of EPA discretionary grant dollars awarded, by type of grantee, from fiscal years 2013 through 2015. 
Examples of discretionary grant awards include approximately $1 million in fiscal year 2015 to the state of Ohio to support the Great Lakes Restoration Initiative and the Great Lakes Water Quality Agreement; approximately $6 million in fiscal year 2013 to the National Fish and Wildlife Foundation to develop and implement the Chesapeake Bay Innovative Nutrient and Sediment Reduction Program; and approximately $4 million in fiscal year 2013 to the Northwest Indian Fisheries Commission in Olympia, Washington, to develop a program to manage funding for projects to protect and restore Puget Sound. Table 1 shows amounts of EPA discretionary grant dollars awarded, by type of grantee, from fiscal years 2013 through 2015. According to our analysis of EPA data, EPA made discretionary grant awards—both new awards and amendments—to about 2,000 unique grantees from fiscal years 2013 through 2015. Of these, about 1,700 unique grantees received new discretionary grant awards, and about 480 of them, or about 28 percent, received more than one new award during this period. Table 2 shows the combined number of all new awards and amendments, by type of grantee, from fiscal years 2013 through 2015. According to our analysis of EPA data, of the nearly $1.5 billion in discretionary grant dollars EPA awarded from fiscal years 2013 through 2015, approximately $579 million was for new awards subject to the competition policy: approximately $563 million was awarded by open competition, nearly $1 million was awarded by simplified competition, and over $14 million was awarded as exceptions to competition. According to EPA documents, the agency met its performance target by competitively awarding at least 90 percent of these new awards annually, by both dollar amount and number of awards. 
For example, according to our analysis of EPA data, in fiscal year 2015, about 95 percent of the discretionary grant dollars for new awards subject to the competition policy were awarded by open or simplified competition. Table 3 shows amounts of EPA discretionary grant dollars for new awards subject to the competition policy, by type of competition, from fiscal years 2013 through 2015. As shown in table 4, from fiscal years 2013 through 2015, state universities received the largest amount, almost $119 million, or 21 percent, of the approximately $563 million awarded by open competition, according to our analysis of EPA data. Nonprofits received the largest amount, about $590,000, or 60 percent, of the nearly $1 million awarded by simplified competition and almost $13 million, or 87 percent, of the over $14 million awarded as exceptions to competition. Examples of awards include $196,300 by simplified competition in fiscal year 2013 to the National Ground Water Association to provide training, technical assistance, outreach, and informational materials to owners of private wells nationwide to reduce risks to private well water supplies and groundwater; and $5 million as an exception to competition in fiscal year 2015 to the Health Effects Institute to support research on the health effects of emissions from motor vehicles, fuels, and other sources of environmental pollution. Table 4 shows the amounts of EPA discretionary grant dollars for new awards subject to the competition policy, by type of grantee and type of competition, from fiscal years 2013 through 2015. Appendix IV provides additional information about new awards subject to the competition policy by fiscal year. OGD officials told us that only new discretionary grant awards—and not amendments to discretionary grant awards—count toward meeting the grants management plan’s performance target of competitively awarding at least 90 percent of the dollars or new awards subject to the competition policy annually. 
They stated that, while many amendments are subject to the competition policy—i.e., they are supplemental or incremental amendments to awards that are subject to the competition policy—it would be misleading to count these amendments toward the performance target because doing so could give the impression that EPA had competitively awarded more grants than it did. This is because an amendment is not a new award, but rather part of an existing award that would already have been counted toward meeting the performance target in the year it was awarded. Further, OGD officials told us they count only the dollars for the first year of awards disbursed over time because, although competitively awarded, the out-year dollars—i.e., incremental amendments—might not eventually be provided to a grantee for a variety of reasons, such as poor performance by the grantee. Therefore, counting incremental amendments toward the performance target at the issuance of the initial award could incorrectly indicate that EPA had competitively awarded more grant dollars than it might ultimately award. According to our analysis of EPA data, of the nearly $1.5 billion in discretionary grant dollars EPA provided from fiscal years 2013 through 2015, over $920 million was not subject to the competition policy or was not for new awards. More specifically, approximately $282 million was for exemptions from competition, which are new awards that are not subject to the competition policy, and about $632 million was for amendments to awards that may or may not have been subject to the competition policy. OGD officials told us that nearly all amendments to awards subject to the competition policy do not need to be awarded competitively because they meet certain conditions in the policy, such as being for work within the scope of the grant. 
If a proposed amendment must be awarded competitively because, for example, it is outside the scope of the grant, the officials stated that it should instead be processed as a new award. Table 5 shows the amounts of EPA discretionary grant dollars for exemptions, amendments, and other awards not subject to the competition policy, from fiscal years 2013 through 2015. From fiscal years 2013 through 2015, state governments received the largest amount, about 38 percent, of the approximately $282 million in discretionary grants awarded as exemptions from competition, according to our analysis of EPA data. An example of an award made as an exemption from competition is approximately $7 million in fiscal year 2015 to the Alaska Department of Environmental Conservation to support wastewater projects in rural communities and Alaska Native villages. The exemption was for a grant program required by law to be made to an identified grantee to perform a specific project. Table 6 shows the amounts of EPA discretionary grant dollars for exemptions from competition, by type of grantee, from fiscal years 2013 through 2015. Appendix IV provides additional information about exemptions from competition by fiscal year. As shown in table 7, from fiscal years 2013 through 2015, state governments received the largest amounts, 41 percent and 40 percent respectively, of both the approximately $288 million in discretionary grants awarded as amendments to exempt awards and the approximately $12.6 million in discretionary grants awarded as supplemental amendments, according to our analysis of EPA data. An example of an amendment to an exempt award is approximately $500,000 in fiscal year 2013 to the state of North Carolina to restore and maintain the Albemarle-Pamlico estuarine system. 
Examples of supplemental amendments include $120,000 in fiscal year 2015 to the Association of Clean Water Administrators for water quality improvement programs; $160,000 in fiscal year 2014 to the New York Department of Environmental Conservation for mapping aquatic vegetation and creating a long-term conservation strategy for Niagara River areas of concern; and $138,000 in fiscal year 2013 to the Osage Nation in Oklahoma to conduct well testing and inspection, enforcement and compliance activities, and permitting of injection wells. Nonprofits received the largest amount, about 30 percent, of the approximately $258 million in discretionary grants awarded as incremental amendments, according to our analysis of EPA data. Table 7 shows the amounts of EPA discretionary grant dollars for different types of amendments, by type of grantee, from fiscal years 2013 through 2015. Appendix IV provides additional information about amendments by fiscal year. OGD officials stated that they do not track, and thus have no data on, whether an amendment to an exempt award is for additional dollars or other purposes, such as to award dollars incrementally, because these amendments are not subject to the competition policy or performance target. Therefore, to gather such information, officials stated that they would have to manually examine every amendment to an exempt award individually to determine the reasons for the amendment. EPA posts limited information on the overall number of applications submitted for its discretionary grant competition opportunities; however, EPA’s available information indicates that, of thousands of applications received annually, the number of applications fluctuates widely on an opportunity-by-opportunity basis. OGD officials stated that, while EPA has historically conducted roughly 100 to 125 grant competitions annually, the number of competitions varies each year because not every grant program offers a competition opportunity every year. 
The officials stated that there have been fewer annual competition opportunities in recent years because program and regional offices are offering more multiyear competitions. As of May 11, 2016, according to EPA’s unofficial reports for open competitions completed from fiscal years 2013 through 2015, about 47 percent of 142 competition opportunities received more than 20 applications, about 18 percent received 11 to 20 applications, and about 17 percent received 4 to 10 applications. Approximately 18 percent of the competition opportunities received 3 or fewer applications, including about 8 percent that received 1 application. About half of the competition opportunities receiving one application were for the Region 3 Office’s Chesapeake Bay Programs. Region 3 officials said one potential reason for receiving so few applications is that the opportunities’ evaluation criteria are highly specialized, with few eligible applicants capable of doing the work. They stated that, in addition to advertising these opportunities on Grants.gov, they also advertise them through a listserv and a website for Chesapeake Bay issues; an email distribution list; targeted emails to local universities, colleges, nonprofit organizations, and state and local governments; and a hard copy mailing list with about 1,000 subscribers. OGD officials told us that, although thousands of potential applicants could be eligible for any particular competitive discretionary grant opportunity, the officials have no way of knowing why eligible entities may choose not to apply for an opportunity and that this is something EPA cannot control. They said that potentially eligible applicants may choose not to apply for many different reasons, such as the location or timing of a project, their available resources or expertise, and award amounts. 
OGD officials stated that, if a particular grant program has a pattern of receiving only one quality application across several competition opportunities, the Grants Competition Advocate’s Office may advise the program or regional office to consider requesting an exemption from competition for future awards to more efficiently use resources. EPA provides various kinds of information on grants, including discretionary grants, on four federal websites, each of which makes information publicly available for a different purpose. However, the information on EPA discretionary grants—including opportunities available and grant amounts awarded—on these websites is either difficult to identify or incomplete. In addition, EPA’s internal grants management system does not identify all discretionary grants, making it difficult for EPA to provide complete information to publicly available websites and internal and external decision makers. EPA provides some key information about grants, including discretionary grants, on four federal websites, each of which makes information about grants publicly available for a different purpose. Three of these websites are government-wide.
CFDA.gov: The purpose of this website is to provide a compendium of federal grant programs. Information provided by EPA includes a grant program’s objectives, eligibility requirements, available dollars, application and awards process, range of and average award amounts, related programs, and examples of previously funded activities.
Grants.gov: The purpose of this website is to provide a vehicle for organizations to search and apply for competitive federal discretionary grant opportunities. Information provided by EPA includes program descriptions, eligibility requirements, evaluation criteria, and application procedures.
USAspending.gov: The purpose of this website is to provide a publicly accessible, searchable website for tracking where and how federal money is spent. 
Information provided by EPA includes grantee names and locations, project descriptions, and individual grant amounts awarded. EPA has its own public website on which it provides links to the above three government-wide websites. EPA also makes the following other information on discretionary grants available on its website: EPA Grant Awards Database: The purpose of this database is to provide a summary record for EPA grants awarded in the last 10 years and prior grants that are still open. Information includes grantee names and types, project descriptions, EPA contacts, and cumulative dollar amounts (i.e., new awards plus any increases or decreases from amendments) awarded over the life of a grant. Unofficial reports: The purpose of these reports is to provide summary information about grant competitions conducted during a fiscal year. Information includes competition titles, announcement numbers, closing dates, numbers of applications received, and grantee names. EPA collects information for these unofficial reports quarterly. According to OGD officials, developing the Grant Awards Database and posting unofficial reports on EPA’s public website were key parts of EPA’s efforts to respond to feedback from congressional staff and others that EPA should be more transparent about its awards process for discretionary grants so that these efforts can be monitored. OGD officials said that they created the Grant Awards Database about 10 years ago in response to a request from congressional staff that EPA provide a public database with information on grants awarded. OGD officials stated that they started posting the unofficial reports online at about the same time. Information on EPA discretionary grants on the four publicly available websites is either difficult to identify or incomplete for several reasons. 
First, while one of the main purposes of Grants.gov is to provide public information about competitive grant opportunities, the website includes information only about opportunities for open competition, not simplified competition, exceptions to competition, or exemptions from competition. In addition, information is difficult to identify partly because USAspending.gov and the EPA Grant Awards Database do not have a way to search for discretionary grants. Further, although CFDA.gov has a search field for grant types and “discretionary grant” is a second-tier grant type that users can choose to search for, EPA does not flag discretionary grants in the information it submits for CFDA.gov. Consequently, when users search for EPA discretionary grants on CFDA.gov, they get no results. OGD officials stated that they do not flag discretionary grants in the information they submit for CFDA.gov because several of the available second-tier grant types, such as fellowships or cooperative agreements, could simultaneously apply to the same discretionary grant program, and the CFDA.gov template for submitting information allows them to identify only one second-tier grant type. As a result, they said they do not flag any of these second-tier grant types because prioritizing one over the others would mean excluding options that also apply, which could confuse users. OGD officials stated that changing this would depend on the agency making a policy decision to select discretionary grant as the second-tier grant type when submitting information for CFDA.gov. The Uniform Guidance states that, when agencies submit information for CFDA.gov, they must identify whether a program makes awards on a discretionary basis. In addition, EPA’s CFDA user manual directs program and regional offices to distinguish discretionary grants from other types of grants in their CFDA submissions. 
According to EPA officials, EPA has complied with the Uniform Guidance and user manual, in part, by using only two primary-tier grant types—formula grant or project grant—to flag its grants in the information it submits for CFDA.gov. EPA officials stated that, since formula grants are nondiscretionary by definition, this approach signals that discretionary grants would be found under the project grant type, by default. EPA officials said they also include competition instructions in the narrative of their CFDA submissions for competitive discretionary grant programs, which further distinguishes them as discretionary. However, according to EPA officials, this method is not entirely sufficient because it identifies discretionary grants indirectly, thus requiring users to understand the difference between formula and project grants when searching for discretionary grants. In addition, the project grant type is not exclusively for discretionary grants; for example, some formula grants, such as EPA’s State Indoor Radon Grants, are flagged as project grants on CFDA.gov. According to OGD officials, grant types have not always been clear to the program and regional staff responsible for preparing CFDA submissions. OGD officials told us that, as a result of our work for this review, they realized that they needed to improve how they identify discretionary grants in their CFDA submissions. To do so, OGD officials stated they are planning to specifically identify discretionary grant programs in the narrative descriptions of their future CFDA submissions by including a sentence explaining that the program generally makes awards on a discretionary basis. OGD officials also said that, through their ongoing participation in a working group coordinated by the General Services Administration (GSA), they plan to work to clarify GSA’s government-wide guidance on identifying discretionary grants in agencies’ CFDA submissions. 
Even if EPA were to flag discretionary grants in the information it submits for CFDA.gov, however, identifying such grants on CFDA.gov would be just one of several steps that users would have to take to obtain more complete information about EPA discretionary grants, since information about EPA grants is spread across different websites. Specifically, to obtain a range of information about discretionary grants, including program descriptions, eligibility requirements, application procedures, grantees, and award amounts, users would first have to identify discretionary grants on CFDA.gov and obtain the CFDA numbers. These CFDA numbers are the only way to link information across the other three websites, and they are the only way to identify discretionary grants on USAspending.gov and the EPA Grant Awards Database. Users would have to enter the CFDA numbers on USAspending.gov to obtain information on individual awards, including amendments, and on the EPA Grant Awards Database for cumulative awards over the life of a grant. According to OGD officials, the award amounts cannot be compared for the same grant across USAspending.gov and the EPA Grant Awards Database because USAspending.gov reports individual amounts awarded—i.e., for a new award or each amendment made—on a specific date, whereas the EPA Grant Awards Database reports the total amounts awarded cumulatively—i.e., for a new award plus or minus any amendments—over the life of a grant, which may span many fiscal years. According to OGD officials, EPA’s unofficial reports on grant competitions are the only publicly available source of information about the number of applications received for discretionary grant competition opportunities. However, our review of these reports found that they are not current, and they contain limited information. 
EPA’s current internal grants management system cannot provide the type of information included in these reports because it does not have the capability to centrally track the number of applications received per competition opportunity. Instead, the Grants Competition Advocate’s Office collects the information manually from each program and regional office. OGD officials stated that this approach takes time and means that a report for a particular fiscal year may not be complete until a year or two later because the information is updated on a rolling basis, as it becomes available. In addition, the information for these reports is not collected until all the awards for a particular competition opportunity have been made, and, according to OGD officials, it may take more than a year to complete the award process. We also found that these reports contain limited information. For example, they do not include such key information as award amounts, grantee types, or amendments. Under federal standards for internal control, agencies are to communicate complete and accurate information internally and externally to achieve their objectives. EPA is transitioning to a new internal grants management system that will offer capabilities to collect more information and to collect it more quickly, according to OGD officials. These officials expect the new system to be fully operational in 2018. The new system will provide EPA with the capability to more easily collect and use timely and complete information about the agency’s discretionary grants, which will facilitate internal oversight and management, according to EPA officials. However, officials added that the agency does not currently have plans to use this new system to improve the timeliness and quality of the reports it makes publicly available on its website. 
By making more complete information about its discretionary grants publicly available—such as by posting timely and complete reports on its website—EPA could help Congress and other decision makers better monitor, and thus provide oversight of, its management of discretionary grants. In conducting this review, we asked EPA to provide its internal data on all discretionary grants awarded from fiscal years 2013 through 2015; however, EPA could not readily provide data about these grants because it could not easily identify them. OGD officials told us that they had to manually review the agency’s CFDA program descriptions to identify all the discretionary grants and respond to our data request. They stated that EPA’s internal grants management system was not designed to collect and track this information. Although EPA’s internal grants management system includes a data field for distinguishing grant types, including discretionary grants, from one another, the field is not being used consistently to identify all EPA discretionary grant programs, according to OGD officials. These officials explained that some discretionary grants were flagged in EPA’s internal grants management system as other types of grants, such as categorical grants, which may have some discretionary aspects. The officials stated that EPA staff may not have a clear understanding of how to use the data field, and one reason for this may be that OGD’s definition of discretionary grants is not clear, in part because it does not explain whether categorical grants with discretionary aspects are considered to be discretionary grants. According to OGD officials, some categorical and discretionary grant programs can have overlapping aspects. 
Another reason EPA staff may not have a clear understanding of how to use the data field may be that EPA’s guidance for CFDA.gov provides a definition of discretionary grants that differs slightly from OGD’s, and inconsistencies between these definitions could create ambiguity for staff. For instance, OGD’s definition states that a discretionary grant is one for which EPA has discretion in negotiating and approving the work plan, whereas the definition in EPA’s guidance for CFDA.gov does not discuss grants for which EPA has discretion over work plans. Under federal standards for internal control, management should design control activities to achieve objectives and respond to risks—for example, by clearly documenting internal control in management directives, administrative policies, or operating manuals. While EPA has documented its guidance for CFDA.gov, it is not clear because there are inconsistencies between the definition of discretionary grants in the guidance and OGD’s definition. OGD officials stated that, in response to our review, they provided the list of active discretionary grant programs to all program and regional offices to help them better identify discretionary grants in EPA’s internal grants management system. OGD officials also posted the list of discretionary grant programs to their intranet site so that program and regional offices could access it at any time. In addition to these steps, by having clear guidance on identifying discretionary grants generally—such as how to flag categorical grants with discretionary aspects and how to reconcile inconsistencies between EPA’s two definitions of discretionary grants—staff might be able to better identify all discretionary grants in the internal grants management system, especially discretionary grant programs developed in the future. Such guidance would also help staff update information for ongoing grants made under programs that are now inactive (i.e., no longer making new awards). 
By providing clear guidance to EPA staff to help ensure that they correctly identify all discretionary grants in the agency’s grants management system, EPA could communicate more accurate and complete information to internal and external decision makers and improve the quality of the information it makes publicly available about its use of taxpayer dollars. Over the years, EPA has taken steps to improve competition for its discretionary grants in response to our past reports and other reviews identifying challenges in how EPA manages such grants. These steps include updating EPA’s competition policy for awarding grants, creating a senior-level Grants Competition Advocate to help offices implement the policy, and making some discretionary grants information publicly available so that EPA’s management efforts can be monitored. However, the information EPA makes publicly available is neither easy to identify nor complete. EPA has faced challenges identifying the full universe of its discretionary grants. Until recently, EPA did not have complete information about which of its grants are discretionary because staff were not consistently distinguishing discretionary grants in EPA’s internal grants management system. EPA has manually reviewed its CFDA descriptions to develop a complete list of its active discretionary grant programs. Moving forward, this information can help officials provide clearer guidance to program and regional staff to help ensure they correctly identify programs in the internal grants management system. This information can also help inform guidance on how to update information for ongoing grants made under programs that are no longer active. Improving how it identifies discretionary grants internally will allow EPA to provide more complete information to internal decision makers and improve the information it makes publicly available. 
In addition, our review of EPA’s unofficial reports on grant competitions—the only publicly available source of information about the number of applications received for discretionary grant competition opportunities—found that they are not current and contain limited information. Although EPA is updating its internal grants management system with capabilities to collect and report more timely and complete information about discretionary grants, the agency has no plans to use the system to improve the timeliness and quality of the grants information it makes publicly available on its website. By making more complete information about its discretionary grants publicly available—such as by posting timely and complete reports on its website—EPA could help Congress and other decision makers better monitor, and thus provide oversight of, its management of discretionary grants. We are making two recommendations:

To improve the quality of EPA’s internal records and the information EPA can communicate to internal and external decision makers, the EPA Administrator should direct the Assistant Administrator for the Office of Administration and Resources Management to direct the Director of OGD to provide clear guidance to EPA staff to help ensure that staff correctly identify all EPA discretionary grant programs in the agency’s internal grants management system.

To better enable Congress and other decision makers to monitor EPA’s management of discretionary grants, the EPA Administrator should direct the Assistant Administrator for the Office of Administration and Resources Management to direct the Director of OGD to determine how to make more complete information on EPA’s discretionary grants publicly available, such as by posting timely and complete reports on its website.

We provided a draft of this report to EPA for review and comment. 
In its written comments, reproduced in appendix V, EPA agreed with our two recommendations and generally agreed with our findings and conclusions. EPA stated that it agrees that there are opportunities to explore how to better develop guidance for tracking grants and determine how to make more complete information on discretionary grants publicly available and, as noted in this report, has already taken steps to do so. EPA stated that it will continue these efforts in 2017, subject to budgetary and resource constraints. EPA also provided technical comments, which we incorporated into the report, as appropriate. To address our first recommendation, in addition to actions it described having taken, EPA stated that it expects to be involved in GSA efforts in 2017 to improve CFDA descriptions, which may relate to changes to the CFDA templates that could improve discretionary grant designations. EPA stated that also in 2017 the agency will assess whether other actions need to be taken to better identify discretionary grant programs in its internal grants management systems, including training for grants personnel to ensure consistency in defining discretionary grant programs. To address our second recommendation, EPA stated that in 2017 the agency will begin to examine whether and how it can use its new internal Next Generation Grants System to generate more timely and complete reports related to discretionary grants and make them publicly available. EPA also stated that at the outset the agency plans to explore the system’s ability to (1) generate more timely and complete information that can be posted on the EPA website, such as on applications received, and (2) post an annual report on the amount of funds per discretionary grant program and whether they were new awards or amendments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the appropriate congressional committees, the EPA Administrator, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

EPA Needs to Improve STAR Grant Oversight. Report No. 13-P-0361. Washington, D.C.: August 27, 2013.
EPA’s Key Management Challenges. Washington, D.C.: April 21, 2006.
EPA Managers Did Not Hold Supervisors and Project Officers Accountable for Grants Management. Report No. 2005-P-00027. Washington, D.C.: September 27, 2005.
EPA’s Key Management Challenges 2005. Washington, D.C.: April 25, 2005.
EPA Needs to Compete More Assistance Agreements. Report No. 2005-P-00014. Washington, D.C.: March 31, 2005.
EPA’s Key Management Challenges. Washington, D.C.: April 21, 2004.
EPA’s Key Management Challenges. Washington, D.C.: May 22, 2003.
EPA’s Key Management Challenges. Washington, D.C.: September 6, 2002.
Surveys, Studies, Investigations, and Special Purpose Grants. Report No. 2002-P-00005. Philadelphia, PA: March 21, 2002.
EPA’s Key Management Challenges. Washington, D.C.: December 17, 2001.
EPA’s Competitive Practices for Assistance Awards. Report No. 2001-P-00008. Philadelphia, PA: May 21, 2001.

In this report, we examine (1) how EPA manages competition for its discretionary grants, (2) how much in discretionary grants EPA provided from fiscal years 2013 through 2015 and to what types of grantees, and how much of that was competitively awarded, and (3) what information EPA makes publicly available on discretionary grants. 
To examine how EPA manages competition for its discretionary grants, we reviewed relevant statutes and regulations, EPA’s competition policy, and EPA’s procedures and guidance for managing grants competition. We also examined fiscal year 2013 through 2015 annual competition effectiveness reviews and office competition assurances for program and regional offices for fiscal years 2014 through 2015. We reviewed EPA decisions for grant eligibility and evaluation disputes from May 2004 through March 2016, which includes every year EPA has issued these dispute decisions, according to EPA Office of Grants and Debarment (OGD) officials. We also reviewed the Grants Competition Advocate Office’s dispute decision matrix, which includes summary information on all formal disputes filed from May 2004 to March 2016. In addition, we assessed the extent to which a nongeneralizable sample of competitive discretionary grant announcements met key EPA criteria for preparing such announcements in the competition policy and OGD’s checklist for preparing announcements by selecting and reviewing all of the 12 active announcements, prepared by nine different program and regional offices, available on Grants.gov on April 27, 2016. To do so, two analysts reviewed the extent to which the announcements included the dozens of elements that the competition policy and checklist direct them to include. The analysts then discussed and compared results to resolve any differences in their assessments. In addition, we reviewed internal documentation for the eligibility and evaluation criteria reviews for a nongeneralizable sample of two discretionary grant competition opportunities managed by the Office of Research and Development (ORD) and three discretionary grant competition opportunities managed by the Region 9 Office. 
We selected these offices, in part, for geographic diversity and because they are responsible for some of the largest portions of discretionary grant dollars and awards among program and regional offices. We selected the most recently closed discretionary grant competition opportunities managed by each office, according to EPA’s unofficial reports on grant competitions. The internal documentation for the eligibility and evaluation criteria reviews included conflict-of-interest statements, reviewer instructions, eligibility reviews, reviewer scoresheets, reviewer comments, and funding recommendations. Our findings cannot be generalized to all EPA discretionary grant competition opportunities, but they do provide us with examples of key steps in EPA’s process for managing discretionary grants. To examine how much in discretionary grants EPA provided and competitively awarded from fiscal years 2013 through 2015 and to what types of grantees, we reviewed EPA’s competition policy and grants management plan. We also analyzed EPA internal data on discretionary grants awarded from fiscal years 2013 through 2015, including types of grantees, award amounts, whether grants were awarded as new awards or amendments to awards, and whether grants were awarded competitively or noncompetitively. In response to our data request, EPA obtained these data from its Integrated Grants Management System, as of May 6, 2016. According to EPA, the data could change over time as offices make corrections or adjustments. In order to assess the reliability of the data we analyzed, we reviewed database documentation; interviewed EPA officials familiar with the data; and conducted electronic tests of the data, looking for missing values, outliers, or other anomalies. We determined that the data were sufficiently reliable for our purposes. In addition, EPA officials reviewed and verified our data analysis results. 
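The report does not specify the electronic tests GAO ran; the sketch below, with hypothetical field names and an arbitrary 10x-median outlier rule, illustrates the kinds of checks described (missing values, outliers, and other anomalies such as out-of-period records):

```python
import statistics

# Hypothetical sketch of electronic data-reliability tests of the kind the
# report describes. Field names, the study-period check, and the outlier
# threshold are all illustrative assumptions, not EPA's actual tests.
def run_reliability_checks(records, fiscal_years=(2013, 2014, 2015)):
    issues = []
    amounts = [r["amount"] for r in records if r.get("amount") is not None]
    median = statistics.median(amounts)
    for i, r in enumerate(records):
        # Missing-value checks on required fields.
        for field in ("grantee", "amount", "fiscal_year"):
            if r.get(field) is None:
                issues.append((i, f"missing {field}"))
        # Records outside the fiscal years under review.
        if r.get("fiscal_year") not in fiscal_years:
            issues.append((i, "fiscal_year outside study period"))
        # Simple outlier rule: award far larger than the median award.
        if r.get("amount") is not None and r["amount"] > 10 * median:
            issues.append((i, "possible amount outlier"))
    return issues

records = [
    {"grantee": "State A", "amount": 100_000, "fiscal_year": 2013},
    {"grantee": None, "amount": 120_000, "fiscal_year": 2014},
    {"grantee": "Tribe B", "amount": 9_000_000, "fiscal_year": 2015},
    {"grantee": "NGO C", "amount": 80_000, "fiscal_year": 2012},
]
for index, problem in run_reliability_checks(records):
    print(index, problem)
```

Checks like these surface records for analysts to investigate; they do not by themselves establish that flagged records are wrong.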
We also analyzed information on the number of applications received in EPA’s unofficial reports on grant competitions from fiscal years 2013 through 2015, as of May 11, 2016. To examine what information EPA makes publicly available on discretionary grants, we reviewed relevant statutes and regulations, EPA’s competition policy, and EPA’s procedures and guidance for making information publicly available on grants. We also reviewed information on four publicly accessible websites—CFDA.gov, USAspending.gov, Grants.gov, and the EPA Grant Awards Database—on EPA discretionary grants from fiscal years 2013 through 2015 and compared it with EPA’s internal data to assess the extent to which information on EPA discretionary grants was readily available from publicly available sources. In addition, we interviewed EPA officials responsible for posting and maintaining the information EPA makes publicly available on the EPA Grant Awards Database and the information EPA submits to be made publicly available on CFDA.gov, USAspending.gov, and Grants.gov. We compared EPA guidance and the information EPA makes publicly available on discretionary grants with federal standards for internal control to assess the extent to which EPA follows principles for designing control activities and principles for information and communication. We also analyzed applicant information in EPA’s unofficial reports on grant competitions from fiscal years 2013 through 2015, as of May 11, 2016. To address all three objectives, we reviewed our reports and those of the EPA Office of Inspector General that identified challenges with, or made recommendations for improving, EPA’s management of discretionary grants. In addition, we interviewed officials in OGD, ORD, the Region 3 Office, and the Region 9 Office about how they manage and make information publicly available on discretionary grants. 
We conducted this performance audit from December 2015 to January 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix displays the inventory of 67 active discretionary grant programs EPA developed from its program descriptions in the Catalog of Federal Domestic Assistance (CFDA). For each program, table 8 shows the CFDA number, title, EPA program or regional office responsible for managing the program, and whether the program has an exemption from competition (i.e., the program is not subject to EPA’s competition policy). This appendix displays results from our analysis of EPA data on discretionary grants awarded from fiscal years 2013 through 2015. Tables 9 through 11 show the dollar amounts for the different types of new awards subject to the competition policy, by type of grantee. Table 12 shows the dollar amounts for new awards made as exemptions from competition (i.e., not subject to EPA’s competition policy), by type of grantee. Tables 13 through 15 show the dollar amounts of the different types of amendments to awards, by type of grantee. Table 16 shows the number of unique grantees receiving two or more new awards, by type of grantee, in fiscal years 2013, 2014, and 2015. Tables 17 and 18 show the combined dollar amounts for all new awards and amendments, by Catalog of Federal Domestic Assistance (CFDA) number and title, in order of total dollars and CFDA numbers, respectively. In addition to the individual named above, Janet Frisch (Assistant Director), Enyinnaya David Aja, Emily Christoff, Ellen Fried, Cindy Gilbert, Chad M. 
Gorman, Mitchell Karpman, and Jeanette Soares made key contributions to this report.

Grants Management: EPA Could Improve Certain Monitoring Practices. GAO-16-530. Washington, D.C.: July 14, 2016.
Grants Management: EPA Has Opportunities to Improve Planning and Compliance Monitoring. GAO-15-618. Washington, D.C.: August 17, 2015.
Environmental Protection Agency: Progress Has Been Made in Grant Reforms, but Weaknesses Remain in Implementation and Accountability. GAO-06-774T. Washington, D.C.: May 18, 2006.
Grants Management: EPA Has Made Progress in Grant Reforms but Needs to Address Weaknesses in Implementation and Accountability. GAO-06-625. Washington, D.C.: May 12, 2006.
Grants Management: EPA Needs to Strengthen Efforts to Provide the Public with Complete and Accurate Information on Grant Opportunities. GAO-05-149R. Washington, D.C.: February 3, 2005.
Grants Management: EPA Needs to Better Document Its Decisions for Choosing between Grants and Contracts. GAO-04-459. Washington, D.C.: March 31, 2004.
Grants Management: EPA Needs to Strengthen Efforts to Address Management Challenges. GAO-04-510T. Washington, D.C.: March 3, 2004.
Grants Management: EPA Needs to Strengthen Oversight and Enhance Accountability to Address Persistent Challenges. GAO-04-122T. Washington, D.C.: October 1, 2003.
Grants Management: EPA Needs to Strengthen Efforts to Address Persistent Challenges. GAO-03-846. Washington, D.C.: August 29, 2003.
Environmental Protection Agency: Problems Persist in Effectively Managing Grants. GAO-03-628T. Washington, D.C.: June 11, 2003.
Major Management Challenges and Program Risks: Environmental Protection Agency. GAO-03-112. Washington, D.C.: January 1, 2003.
EPA annually awards hundreds of discretionary grants, totaling about $500 million. EPA has the discretion to determine grantees and amounts for these grants, which fund a range of activities, from environmental research to wetlands restoration. EPA awards and manages discretionary grants at 10 headquarters program offices and 10 regional offices. Past reviews by GAO and EPA's Inspector General found that EPA has faced challenges managing such grants, including insufficient competition for them and incomplete public information about them. GAO was asked to review EPA's management of discretionary grants. This report examines (1) how EPA manages competition for discretionary grants, (2) how much in discretionary grants EPA provided from fiscal years 2013 through 2015 and to what types of grantees, and (3) the information EPA makes publicly available on discretionary grants. GAO reviewed EPA's competition policy and guidance, examined internal evaluations of grant applications for competitions that were selected partly because they accounted for large portions of discretionary grant dollars, analyzed EPA data as well as information EPA made available on public websites, and interviewed EPA officials. The Environmental Protection Agency (EPA) manages competition for its discretionary grants through a process established by its competition policy and implemented by its program and regional offices. Under the policy, offices are to advertise discretionary grant opportunities on Grants.gov—a website for federal grant announcements—and may also advertise using other methods, such as trade journals and e-mail lists. The announcements must describe eligibility and evaluation criteria, and the process may be customized to assess (1) all applications against eligibility criteria and (2) eligible applications for merit against evaluation criteria. 
Under the policy, EPA established a Grants Competition Advocate, a senior official who provides guidance to and oversight of the offices. EPA officials said this position has been key to improving competition for discretionary grants. From fiscal years 2013 through 2015, EPA provided nearly $1.5 billion in discretionary grants to about 2,000 unique grantees, with state governments, nonprofits, and Indian tribes receiving the largest shares, according to GAO's analysis of EPA data. Of the $1.5 billion, $579 million was for new grants subject to the competition policy, and according to EPA, the agency met its performance target to competitively award at least 90 percent of these new grant dollars or awards annually. Some discretionary grants are not subject to the competition policy for several reasons—for example, because they are available by law only to Indian tribes. Of the remaining approximately $920 million, $282 million was for new grants not subject to the competition policy, and about $632 million was for amendments to existing grants, such as for added work. Publicly available information from EPA about its discretionary grants is neither easy to identify nor complete. For example, different information about the grants, such as dollar amounts, is available at four federal websites, but three of these websites do not have a way to search all the grants, and the fourth cannot identify the grants because EPA does not flag them in its submissions to the website. EPA officials plan to better flag these grants in the future; however, to obtain complete information, users would still have to search several websites containing different parts of this information. Also, GAO found that the unofficial reports EPA makes publicly available on the number of applications received for its grant competitions contain limited information. 
Moreover, these reports are not current because EPA relies on manual processes to collect the information from its offices, which can cause reporting delays. Further, GAO found that although EPA's internal grants management system has a field for tracking grant types, a lack of clarity in EPA's guidance may contribute to EPA staff's inconsistent use of this field. Consequently, EPA cannot easily identify discretionary grants in its system or collect complete and accurate information on them. EPA is transitioning to a new system that is expected to be operational in 2018 and to provide the capability to collect more timely and complete information. However, EPA officials said they do not have plans to use the new system to improve their publicly available reports, which is inconsistent with effective internal and external communication suggested by federal internal control standards. More complete information could help Congress and other decision makers better monitor EPA's management of discretionary grants. GAO recommends that EPA develop clear guidance for tracking grants and determine how to make more complete information on discretionary grants publicly available. EPA agreed with GAO's recommendations.
Many individuals receiving monthly compensation and pension benefits from the VA have mental impairments that can prevent them from managing their finances. These conditions may result from injury, disease, or infirmities of age. The VA Fiduciary Program matches beneficiaries who are unable to manage their financial affairs with a fiduciary, giving preference to spouses. If VA is unable to locate a qualified spouse who is willing to serve in this capacity, an individual or other entity, such as a lawyer or nursing home, will be appointed. VA-appointed fiduciaries who are not dependents or close family members can collect a fee for their services (generally up to 4 percent of a beneficiary’s annual benefit amount) and can oversee multiple beneficiaries. Whether a fiduciary is a family member or a professional, the responsibilities are generally the same and may include receiving the beneficiary’s VA benefits, paying the beneficiary’s expenses, maintaining the beneficiary’s budget, and generally seeing to the financial well-being—and, in some cases, the physical well-being—of the beneficiary. Finally, if a court has determined that a beneficiary is unable to handle his or her own affairs and appoints its own fiduciary, VA must assess the performance of that fiduciary to determine if he or she is suitable for managing VA benefits given the needs and welfare of the beneficiary. If VA decides to use the court-appointed fiduciary, the agency generally defers to certain rules set by the court, such as those pertaining to the fee amount that the fiduciary can charge for his or her services. Fiduciary Program policies and procedures are developed by Fiduciary Program Central Office staff under the Office of Policy and Program Management within the Veterans Benefits Administration (VBA). Individual Fiduciary Program units are generally colocated in VA regional offices that also oversee other VBA programs. 
One major exception to this is the Western Area Fiduciary Hub, where Fiduciary Program units and files from 14 western VA regional offices were merged into a single unit colocated in the VA regional office in Salt Lake City, Utah, beginning in January 2008. Our February 2010 report noted that VA Fiduciary Program staff did not always take required actions within established time frames or document in the case files that the required actions were taken. Below are four areas in which program staff did not always comply with program policies, along with how VA plans to address them in response to our recommendations. Initial Visits to Beneficiaries and Fiduciaries. VA policy states that initial visits to appoint fiduciaries are to be conducted within 45 days of a request for a fiduciary, and VA’s performance goal is to conduct at least 90 percent of these visits on time. Conducting timely initial visits is important because beneficiaries cannot begin receiving VA benefits until they are completed. We sampled and reviewed 67 case files in which initial visits were supposed to be conducted between July 1, 2006, and June 9, 2009, and found that 37 visits were conducted within the 45-day time frame, and 10 were from 3 to 39 days late. For one case, the file lacked documentation that an initial visit was made at all. Managers and staff in some offices we visited said compliance with the timeliness policy for initial visits was improving, but was still a concern. They attributed some compliance issues to a continued lack of staff and resources. Follow-Up Visits to Beneficiaries and Fiduciaries. Once the fiduciary is selected, staff conduct periodic follow-up visits to re-evaluate the beneficiary’s condition and to determine if funds have been properly used and protected. The first routine follow-up visit generally takes place 1 year after a fiduciary is selected, and subsequent visits typically take place every 1 to 3 years thereafter. 
According to VA managers, it is VA’s policy that follow-up visits to fiduciaries are to be conducted within 120 days of the scheduled date, and the on-time goal for these visits is also 90 percent. Timely follow-up visits are important to determine the continued suitability of the fiduciary and to protect beneficiaries from potential misuse of their funds. Based on a nationwide sample of VA beneficiaries that had been assigned a fiduciary, we estimated that approximately 61,000 adult beneficiaries were supposed to have had at least one follow-up visit between July 1, 2006, and June 9, 2009. We estimated that 76 percent of these visits occurred within the 120-day time frame. In about 18 percent of the cases, however, VA did not conduct these required follow-up visits on time or provided insufficient documentation to show whether these visits were conducted at all. The untimely visits (12 percent of cases) were between 1 and 216 days late. In the most extreme example among the cases with insufficient documentation to show whether visits were conducted (6 percent), the follow-up visit was overdue by 16 months. Similar to initial visits, program managers and staff noted that compliance with the 120-day time frame for follow-up visits can be challenging due in part to a lack of staff and time. Program managers said that conducting visits in a timely manner may be especially challenging in regional offices with only one or two Fiduciary Program staff who may also have responsibilities outside of the Fiduciary Program. In addition, managers and staff noted that conducting timely visits can be challenging in areas where staff must drive long distances to see beneficiaries and fiduciaries. Annual Financial Reports. VA policy generally requires staff to obtain yearly financial reports and bank statements from some fiduciaries to determine how beneficiary funds were used. 
When fiduciaries do not submit their financial reports on time, staff are required to follow up with them and document such actions in the beneficiaries’ files. Staff can follow up with letters, telephone calls, or face-to-face contacts. VA policy requires staff to conduct the first of such follow-up actions when fiduciary financial reports are 35 to 65 days late and again when they are 90 days late. At that time, they may inform the fiduciary of the possible repercussions of a failure to comply, which may include legal actions, a referral to the OIG, or other actions. After 120 days, the financial reports are considered “seriously delinquent,” and appropriate action is to be taken. Failure to take aggressive action to secure timely financial reports may result in a finding of negligence, which will require VA to re-issue any misused benefits. Based on our nationwide sample, we estimate that fiduciaries for about 33,000 beneficiaries were required to submit such reports between July 1, 2006, and June 9, 2009. Of these, 39 percent were submitted between 1 and 140 days late and 47 percent were submitted on time. In addition, our sample and site visit file reviews showed that follow-up contact was frequently not done or not documented by program staff. Of the 30 case files in our sample where financial reports were submitted more than 65 days late, 19 case files either lacked documentation of any follow-up actions or showed that such actions were not taken within required time frames. Moreover, we found additional instances of inadequate staff follow-up on seriously delinquent financial reports during file reviews conducted at the three regional offices we visited. We reviewed 20 such cases, and found only 1 where the initial follow-up contact was taken within the required 65 days. For the other 19 cases, contact was either between 3 days and 11 months late or there was not adequate documentation to determine if or when such contact had occurred. 
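The escalation schedule for late financial reports described above (a first follow-up at 35 to 65 days late, another at 90 days, and "seriously delinquent" at 120 days) can be sketched as a simple rule set. This is an illustration of the policy's thresholds, not VA's actual system logic; in particular, how the gap between 65 and 90 days is binned is an assumption.

```python
# Hypothetical sketch of the escalation thresholds for late fiduciary
# financial reports. The treatment of the 66-89 day gap is an assumption.
def followup_status(days_late):
    if days_late <= 0:
        return "on time"
    if days_late < 35:
        return "late; first follow-up not yet due"
    if days_late < 90:
        return "first follow-up action due (35-65 day window)"
    if days_late < 120:
        return "second follow-up action due"
    return "seriously delinquent"

for days in (10, 40, 95, 130):
    print(days, followup_status(days))
```

A rule set like this makes the compliance finding concrete: a report submitted 140 days late, as some in GAO's sample were, sits well past the "seriously delinquent" threshold.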
In one case, a fiduciary’s financial report was submitted more than 2 years later than the original due date, and only after VA initiated action to suspend payment. In another case, a financial report due in June 2006 was not submitted until nearly 2 years later. The file did not indicate that any follow-up actions had occurred, although the case is now being investigated for possible misuse of funds. Staff in all regional offices we visited said that they sometimes did not take follow-up actions or failed to document actions they did take, in part, because they lacked the time or believed that some actions did not warrant documentation. Surety Bonds. VA generally requires staff to obtain a surety bond from fiduciaries overseeing estates with a value of $20,000 or more that is attributable to VA funds. A bond ensures that the beneficiary’s estate will be reimbursed in the event of fiduciary mismanagement or abuse of beneficiary funds. Our nationwide sample showed that program staff sometimes failed to obtain proof that a fiduciary purchased a bond, when required, or did not adequately document in the beneficiary case files that the bond requirement was waived. Of the 52 case files in our sample for which fiduciaries were required to purchase a bond, 8 case files lacked adequate documentation to indicate whether a bond was purchased or that the bond requirement was waived because the fiduciary met conditions for an exception. Some of the 8 cases involved substantial benefit amounts. For example, 2 cases which contained no documentation that bonds were purchased had VA estate values of approximately $82,000 and $62,000—leaving these beneficiaries and VA vulnerable to a substantial loss if funds were misused. Some staff in regional offices we visited said that they were often uncertain about what types of bonds are required for certain types of fiduciaries, and this was confirmed by our site visit file reviews. 
For example, in one case, a Fiduciary Program staff member was told by a fiduciary who was an attorney that an individual bond was unnecessary because the fiduciary had a “blanket” bond that covered all VA responsibilities. Although this staff member documented in the case file that he was unsure if this was correct, he took the fiduciary’s word that an additional bond was not required. However, we were told by managers and staff that a blanket bond was most likely not acceptable in this case, and the staff person should have required the fiduciary to obtain an individual bond. In regard to the above findings, we recommended that VA ensure that staff understand and carry out policies regarding file documentation, follow-up with fiduciaries for late financial reports, and bond acquisition. VA concurred and, in its comments to our report, outlined several planned actions. For example, VA stated that it would roll out additional training for staff in March of this year and expects to hold a manager’s training conference later in the year. The agency also intends to revise the program’s policy manual this year to clarify existing guidance, establish new policies and procedures, and enhance oversight of fiduciary activities. In addition to compliance issues, we identified weaknesses in VA’s policy for conducting periodic on-site reviews of professional fiduciaries who manage funds for multiple beneficiaries. Cumulatively, such benefits can be a substantial amount of money. On-site reviews examine the financial records across all beneficiaries that a professional fiduciary manages to detect discrepancies among accounts, which may not be detected by examining annual financial reports for a single beneficiary. We found two weaknesses associated with the on-site review policy VA developed. 
First, while VA is required to conduct periodic on-site reviews for professional fiduciaries who oversee more than 20 beneficiaries with combined benefits totaling $50,000 or more, the agency cannot ensure that all fiduciaries who need these reviews are identified. To generate a list of fiduciaries meeting these criteria, each Fiduciary Program unit uses VA’s electronic case management system to link or match a fiduciary to all of their beneficiaries. This computer match is based on a fiduciary’s name, rather than a unique identifier, such as the fiduciary’s Social Security number (SSN) or tax identification number (TIN). However, if fiduciary names are entered inconsistently into the system, a fiduciary for whom an on-site review is required may not be identified. While VA’s case management system includes a field for unique fiduciary identifiers, VA policy does not require this information for all fiduciaries. Central Office staff acknowledged that requiring a unique identifier would decrease VA’s chances of making mistakes in identifying fiduciaries with multiple beneficiaries who require reviews. In response to our recommendation, VA plans to begin requiring that all fiduciaries supply unique identifiers (such as SSNs or TINs) to better track fiduciaries who manage multiple beneficiaries. We also found that VA lacks a nationwide quality review process to ensure that on-site reviews are conducted properly and consistently. While VA has quality review processes to ensure that actions—such as conducting initial visits and obtaining financial reports and bonds—are carried out in accordance with VA policies, Central Office managers acknowledged that VA lacks a similar process for on-site reviews. Having such a process is not only a key internal control, but it is also important for ensuring that on-site reviews are conducted properly and consistently across all Fiduciary Program units nationwide.
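The matching problem described above can be illustrated with a short sketch. This is hypothetical code and invented data, not VA's actual FBS logic: grouping the same records by an inconsistently entered name yields several apparent fiduciaries, while grouping by a unique identifier such as a TIN correctly yields one.

```python
# Hypothetical sketch (not VA's actual system): why name-based matching can
# fail to link a fiduciary to all of his or her beneficiaries, while a unique
# identifier (SSN/TIN) links them reliably. All data here is invented.
from collections import defaultdict

# The same fiduciary entered three different ways by different staff.
records = [
    {"fiduciary_name": "John A. Smith", "tin": "123-45-6789", "beneficiary": "B1"},
    {"fiduciary_name": "Smith, John",   "tin": "123-45-6789", "beneficiary": "B2"},
    {"fiduciary_name": "J. Smith",      "tin": "123-45-6789", "beneficiary": "B3"},
]

def group_beneficiaries(records, key):
    """Group beneficiary IDs under whatever field identifies the fiduciary."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["beneficiary"])
    return dict(groups)

by_name = group_beneficiaries(records, "fiduciary_name")
by_tin = group_beneficiaries(records, "tin")

# Name matching sees 3 unrelated fiduciaries with 1 beneficiary each, so a
# beneficiary-count review trigger could be missed; TIN matching sees 1
# fiduciary managing all 3 beneficiaries.
print(len(by_name))  # 3
print(len(by_tin))   # 1
```

The point of the sketch is only that a review-selection rule keyed on how many beneficiaries a fiduciary manages silently fails under inconsistent name entry, which is the weakness the recommendation for unique identifiers addresses.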
Our examination of 12 files from the three regional offices we visited revealed deficiencies in these reviews that could be detected through a national quality review process. Four of the files we examined lacked key case selection information, preventing managers from determining whether they were selected according to VA policy—which states that cases associated with beneficiary complaints or a history of late or questionable financial reports should receive priority. In addition, although VA policy requires that at least 25 percent of a fiduciary’s beneficiary case files (or up to 25 case files) be examined during the on-site reviews, we found that this threshold was not met in four instances. At the time of our review, Central Office staff tracked whether on-site reviews were completed, but not whether they were conducted in accordance with policy. In response to our recommendation, VA noted that it had recently begun reviewing all completed on-site reviews to ensure that they conform to program policy and procedures. We identified two key challenges that limit VA’s ability to improve Fiduciary Program performance and oversight. First, VA’s electronic fiduciary case management system does not provide sufficient information to managers and staff about their cases, and it is cumbersome to use. Second, some managers and staff may not have received sufficient training to ensure that they have the necessary expertise to effectively monitor individual fiduciaries and oversee the program. VA is taking steps to build expertise about the case management system and the program itself by developing additional standardized training and piloting a consolidated Fiduciary Program unit covering 14 western VA regional offices. VA’s Electronic Fiduciary Case Management System. The Fiduciary Beneficiary System (FBS), VA’s electronic fiduciary case management system, does not provide sufficient data to effectively manage the Fiduciary Program.
Although it does provide some useful information on individual case files, pending workloads, and program performance, several system limitations hamper its ability to maintain accurate and timely data and provide management with quality information about the program. FBS data fields are configured to track a fixed number of pending activities, which can limit the accuracy and completeness of information in the system. Staff and managers in the three regional offices we visited said they often need to track more upcoming actions than FBS permits. For example, staff noted that FBS accepts only one due date for upcoming financial reports, even though multiple financial reports may be due simultaneously if one or more is late. In such cases, the due date for the most recent overdue report overrides the older due date, even if the older financial report has not yet been submitted. To compensate for this FBS limitation, staff may track pending actions manually outside of the system or keep personal notes as reminders. In addition, some managers find that FBS management reports are not always easy to generate or helpful in overseeing the program. For example, one manager told us that monitoring staff performance was difficult because the system does not generate a single report that shows all upcoming work that staff need to conduct over a certain period of time. Instead, several reports need to be generated and cross-referenced, which can be cumbersome. In addition, FBS does not store historical information beyond 30 days that would allow managers to examine past issues with fiduciaries or staff performance. For example, managers in two regional offices said that in order to look at historical information on seriously delinquent financial reports, they would have to manually examine monthly paper printouts generated in prior months by FBS, which can be time-consuming.
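The single due-date limitation staff described can be viewed as a data-modeling choice. The sketch below is illustrative only (the field names are invented, not FBS's actual schema): a lone field silently loses the older overdue report, while a list of pending reports retains every outstanding due date.

```python
# Illustrative sketch of the reported FBS limitation; field names are invented.

# Single-field design: recording a newer due date overwrites the older one,
# even though the older financial report was never submitted.
case_single = {"report_due": "2006-06-01"}
case_single["report_due"] = "2007-06-01"  # the 2006 due date is no longer tracked

# Multi-value design: every outstanding report stays visible until submitted.
case_multi = {"reports_due": ["2006-06-01"]}
case_multi["reports_due"].append("2007-06-01")

print(case_single["report_due"])          # 2007-06-01 (2006 due date lost)
print(sorted(case_multi["reports_due"]))  # ['2006-06-01', '2007-06-01']
```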
A 2007 internal VA report also stated that FBS requires extensive knowledge to use, which inhibits effective oversight and management at all levels of the program. Central Office managers acknowledged the shortcomings of FBS and in response to our recommendations said that they would create a work group to determine the feasibility of enhancing FBS or developing a new case management system. VA’s Fiduciary Program Training. Managers and staff in all three regional offices we visited said the Fiduciary Program is complex and requires a great deal of specialized knowledge to effectively monitor fiduciaries and provide program oversight. Although the Fiduciary Program has a policy manual to guide staff in carrying out their responsibilities, managers and staff said there are many nuances and exceptions that take time to master, particularly since each fiduciary and beneficiary situation may be different. In addition to these program complexities, managers in all of the regional offices we visited said that high staff turnover has contributed to a large number of inexperienced managers and staff in their Fiduciary Program units who need training. For example, in two of the three regional offices we visited, only about one-third of staff (15 out of 47) had more than 2 years of experience in the program. During our site visits we were told that limited training for managers and staff may have contributed to various program problems, including failures to properly monitor fiduciaries or document certain actions in beneficiary case files. VA has provided some training to ensure that Fiduciary Program managers and staff are proficient in carrying out their responsibilities, and some regional offices have developed their own training. VA provides a standardized computer-based training program for new staff who conduct visits to beneficiaries and fiduciaries and for those needing a refresher. 
Central Office managers and staff also said that they hold monthly teleconferences and conduct periodic visits to individual Fiduciary Program units to discuss selected topics. In addition, managers and staff in all three regional offices we visited said that they conduct their own weekly or biweekly training sessions on selected topics, such as how to determine whether bonds are required, and what kinds of situations constitute misuse. However, they noted that individual training occurs primarily on the job, and the effectiveness and consistency of such training depends on the expertise of staff conducting the training. Central Office managers acknowledged that standardized training would be beneficial and stated that they are increasing training for managers and staff beginning this year. VA’s Consolidation of Western Fiduciary Program Units. From January to September 2008, VA consolidated Fiduciary Program unit managers, staff, and files from 14 western VA regional offices into a single location in Salt Lake City, Utah—referred to as the Western Area Fiduciary Hub—to improve program performance and oversight. VA officials expect the hub to result in increased staff expertise, more consistent training, better leveraging of staff resources, and increased program efficiencies. For example, the hub created specific management positions for the Fiduciary Program and divided staff into teams to focus on specific actions and responsibilities in an effort to build program expertise, including expertise with FBS. In addition, the hub provides opportunities to train more staff at once, which could help to further build staff expertise and potentially increase the consistency of training. The hub also eliminated jurisdictional boundaries that prevented staff from conducting visits that were geographically close, but outside of their assigned area of responsibility, which VA expects will help balance workloads among staff and reduce travel time. 
Additionally, the hub moved from a paper-based to an electronic case file system, called Virtual VA, in an attempt to more efficiently transfer information between Salt Lake City hub staff and the staff conducting visits in other offices. While some VA managers and staff in the hub believe consolidation can help improve Fiduciary Program performance, they described some challenges that have impeded effective implementation of the pilot project. The hub’s managers explained that there had been multiple changes in management and that implementation began before appropriate planning and resources were in place. For these reasons, hub managers did not consider the hub to be fully functional until January 2009, approximately 1 year after it opened. During our July 2009 visit to the hub, managers and staff mentioned such unforeseen difficulties as (1) inconsistent access to Virtual VA, (2) paper documents being scanned into the wrong electronic beneficiary case files, and (3) substantial amounts of time being spent updating old cases that had been improperly maintained by the previous Fiduciary Program units. For some improperly maintained cases, staff had not taken required actions to address seriously delinquent financial reporting, and potential misuse of funds had gone unidentified for significant periods of time. This required hub staff to perform necessary follow-up actions in addition to completing incoming new work. Managers and staff noted that they have gained valuable insight and knowledge in implementing the hub that could help inform future office consolidations. At the time of our review, the hub was still undergoing multiple changes and had not yet been evaluated; thus, it was unclear whether consolidation of Fiduciary Program units had improved program performance and oversight.
In response to our recommendation that the Central Office evaluate the performance of the hub, VA responded that it anticipates completing such an evaluation by September 2010. One of VA’s most vulnerable populations—beneficiaries who are unable to manage their own financial affairs—relies on VA’s Fiduciary Program to ensure that their benefits are safeguarded. To better serve beneficiaries and protect their benefits, VA has taken or plans to take a number of actions intended to increase staff understanding of and compliance with policies as well as enhance program oversight. Revising program policies and procedures, increasing training, evaluating alternatives to the program’s case management system, and evaluating the Western Area Fiduciary Hub are important steps. However, in order for these actions to successfully address the longstanding shortcomings we and others have identified, VA management must pay sufficient attention to this program, including exercising adequate oversight of its staff. Absent sustained management guidance and staff compliance, beneficiaries may remain vulnerable to the consequences of fiduciaries misusing their funds. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Veterans Affairs (VA) pays billions of dollars in compensation and pension benefits to disabled veterans and their dependents. For those beneficiaries who are unable to manage their own affairs, VA appoints a third party, called a fiduciary, to manage their VA funds. Congress, VA's Office of Inspector General (OIG), and GAO have noted that VA does not always have, or adhere to, effective policies for selecting and monitoring fiduciaries and, therefore, does not fully safeguard the assets of beneficiaries in the Fiduciary Program. GAO was asked to discuss the Fiduciary Program and possible ways that it could be improved to better serve veterans, their families, and survivors. This statement is based on GAO's February 2010 report (GAO-10-241), which examined (1) VA policies and procedures for monitoring fiduciaries and safeguarding beneficiary assets and (2) challenges VA faces in improving program performance and oversight. To conduct that work, GAO reviewed program policies and relevant federal laws and regulations, analyzed a nationally representative random sample of case files, interviewed Central Office managers and staff, and conducted three site visits to Fiduciary Program offices, which accounted for 25 percent of program beneficiaries. Inconsistent staff compliance with some Fiduciary Program policies and weaknesses in others hinder VA's ability to effectively safeguard beneficiary assets; however, per GAO's recommendations, VA plans to take steps to improve the program. GAO found that VA did not always take required actions to monitor fiduciaries within established time frames or document in the beneficiary's case file that these actions were taken. Inconsistent staff compliance occurred in four areas: (1) initial visits to beneficiaries and fiduciaries, (2) follow-up visits to beneficiaries and fiduciaries, (3) follow-up to obtain annual financial reports, and (4) oversight of surety bonds.
For example, in about 18 percent of the cases GAO reviewed, VA was late in conducting required follow-up visits to monitor fiduciaries or did not provide sufficient documentation to show whether these visits were conducted. Similarly, while GAO estimated that about 39 percent of fiduciaries did not submit required annual financial reports on time, VA staff did not consistently follow up with fiduciaries or failed to document actions that were taken. In addition to compliance issues, VA's policies for conducting on-site reviews of professional fiduciaries who manage funds for multiple beneficiaries do not ensure that these fiduciaries are effectively identified and monitored. For example, the agency's case management system uses the fiduciary's name, which may be entered inconsistently, to match fiduciaries to beneficiaries, rather than requiring a unique identifier, such as a Social Security number. As a result, VA cannot always identify the fiduciaries that need to be reviewed. Moreover, VA does not have a nationwide quality review process to ensure that on-site reviews are conducted properly and consistently. Per GAO's February 2010 report recommendations, VA agreed to revise its Fiduciary Program policies in an effort to enhance its oversight role, increase staff understanding and compliance, and better safeguard beneficiary assets. Two key challenges hinder VA's ability to improve Fiduciary Program performance and oversight, but VA has plans to address these challenges. First, managers and staff said that limitations with VA's electronic fiduciary case management system hinder their ability to capture key information. Per GAO's recommendation, VA has established a work group to evaluate alternative system modifications to meet the program's case management needs.
Second, managers and staff indicated that training may not be sufficient to ensure that they have the expertise to properly carry out program responsibilities, as many of them had less than 2 years of program experience. In its response to GAO's recommendations, VA stated that it would begin providing additional standardized training for managers and staff this year. VA is also piloting a consolidated Fiduciary Program unit covering 14 VA regional offices to improve program performance and oversight. VA encountered a number of challenges during the pilot's implementation and has not yet evaluated it but, per GAO's recommendation, plans to do so by September of this year.
The federal government’s increasing demand for IT has led to an increase in the number of federal data centers and a corresponding increase in operational costs. According to OMB, the federal government reported 432 data centers in 1998, 2,094 in July 2010, and 9,995 in August 2016. Operating such a large number of centers has been and continues to be a significant cost to the federal government, including costs for hardware, software, real estate, and cooling. For example, in 2007, the Environmental Protection Agency (EPA) estimated that the electricity costs to operate federal servers and data centers across the government were about $450 million annually. According to the Department of Energy (Energy), a typical data center has 100 to 200 times the energy use intensity of a commercial building. In 2009, OMB reported that server utilization rates as low as 5 percent across the federal government’s estimated 150,000 servers were a factor driving the need to establish a coordinated, government-wide effort to improve the efficiency, performance, and environmental footprint of federal data center activities. Concerned about the size of the federal data center inventory and the potential to improve the efficiency, performance, and the environmental footprint of federal data center activities, OMB, under the direction of the Federal CIO, established FDCCI in February 2010. This initiative’s four high-level goals were to promote the use of “green IT” by reducing the overall energy and real estate footprint of government data centers; reduce the cost of data center hardware, software, and operations; increase the overall IT security posture of the government; and shift IT investments to more efficient computing platforms and technologies. As part of the initiative, OMB required the 24 agencies to identify a data center consolidation program manager to lead the agency’s consolidation efforts. 
In addition, agencies were required to submit an asset inventory baseline and other documents that would result in a plan for consolidating their data centers. The asset inventory baseline was to contain detailed information on each data center and identify the consolidation approach to be taken for each one. It would serve as the foundation for developing the final data center consolidation plan. The data center consolidation plan would serve as a technical road map and approach for achieving the targets for infrastructure utilization, energy efficiency, and cost efficiency. In October 2010, OMB reported that all of the agencies had submitted an inventory and plan. OMB also clarified the definition of a data center and noted that, for the purposes of FDCCI, a data center is defined as any room used for the purpose of processing or storing data that is larger than 500 square feet and meets stringent availability requirements. Under this definition, OMB reported that agencies had identified 2,094 data centers as of July 2010. OMB subsequently expanded this definition to capture facilities of any size: “…a data center is…a closet, room, floor, or building for the storage, management, and dissemination of data and information and computer systems and associated components, such as database, application, and storage systems and data stores [excluding facilities exclusively devoted to communications and network equipment (e.g., telephone exchanges and telecommunications rooms)]. A data center generally includes redundant or backup power supplies, redundant data communications connections, environmental controls…and special security devices housed in leased, owned, collocated, or stand-alone facilities.” Under the new definition, OMB estimated that there were a total of 3,133 federal data centers in December 2011, and its goal was to consolidate approximately 40 percent, or 1,253 data centers, for a savings of approximately $3 billion by the end of 2015.
See figure 1 for an example of an image of data center server racks at the Social Security Administration’s (SSA) National Support Center. In March 2012, OMB launched the PortfolioStat initiative, which requires agencies to conduct an annual agency-wide IT portfolio review to, among other things, reduce commodity IT spending and demonstrate how their IT investments align with their missions and business functions. PortfolioStat is designed to assist agencies in assessing the current maturity of their IT portfolio management processes, making decisions on eliminating duplication, and moving to shared solutions in order to maximize the return on IT investments across the portfolio. Subsequently, in March 2013, OMB issued a memorandum that documented the integration of FDCCI with PortfolioStat and stated that agencies should focus on an enterprise-wide approach to address commodity IT (including data centers) in a comprehensive manner. The memorandum also discussed consolidating previously collected IT-related plans, reports, and data submissions. For example, agencies were no longer required to submit the data center consolidation plans previously required in 2012. However, OMB required agencies to update their data center inventories and report on consolidation progress at the end of every quarter. OMB’s 2013 memorandum also increased the focus on optimizing the performance of federal data centers. Specifically, OMB stated that, to more effectively measure the efficiency of an agency’s data center assets, agencies would also be measured by the extent to which their primary data centers were optimized for total cost of ownership by incorporating metrics for data center energy, facility, labor, and storage, among other things. Subsequently, in May 2014, OMB issued memorandum M-14-08, which established a set of data center optimization metrics to measure agency progress.
In addition, OMB established target values that agencies were expected to achieve by the end of fiscal year 2015. Recognizing the importance of reforming the government-wide management of IT, Congress enacted FITARA in December 2014. Among other things, the law includes a number of requirements related to federal data center consolidation and optimization:

- Agencies shall submit to OMB a comprehensive inventory of the data centers owned, operated, or maintained by or on behalf of the agency.
- Agencies shall submit a multi-year strategy to achieve the consolidation and optimization of the agency’s data centers no later than the end of fiscal year 2016. This strategy should include, for example, performance metrics that are consistent with the government-wide data center consolidation and optimization metrics.
- On a quarterly basis, agencies shall report to OMB’s Administrator of the Office of Electronic Government on progress towards meeting government-wide data center consolidation and optimization metrics.
- OMB’s Administrator of the Office of Electronic Government shall establish metrics applicable to the consolidation and optimization of data centers (including server efficiency), ensure that agencies’ progress toward meeting government-wide data center consolidation and optimization metrics is made publicly available, review agencies’ inventories and strategies to determine whether they are comprehensive and complete, and monitor the implementation of each agency’s strategy.
- Not later than December 19, 2015, OMB’s Administrator of the Office of Electronic Government shall develop and make publicly available a goal, broken down by year, for the amount of planned cost savings and optimization improvements achieved through FDCCI and, for each year thereafter through October 1, 2018, compare reported cost savings and optimization improvements against those goals.

The law’s data center consolidation and optimization provisions expire on October 1, 2018.
In June 2015, OMB memorandum M-15-14 provided guidance for implementing FITARA and related IT management practices. OMB’s guidance includes several actions that agencies are to take to establish a basic set of roles and responsibilities (referred to as the “common baseline”) for CIOs and other senior agency officials that are needed to implement the authorities described in the law. For example, agencies are to conduct a self-assessment to identify where they conform to the common baseline and where they deviate. OMB guidance also requires agencies to annually update their self-assessments and report their progress in reaching FITARA implementation milestones. In August 2016, OMB issued a memorandum that established DCOI and included guidance on how to implement the data center consolidation and optimization provisions of FITARA. Among other things, the guidance requires agencies to consolidate inefficient infrastructure, optimize existing facilities, improve their security posture, and achieve cost savings. For example, agencies are required to maintain a complete inventory of all data center facilities owned, operated, or maintained by or on behalf of the agencies and measure progress toward defined optimization performance metrics on a quarterly basis as part of their data center inventory submissions. OMB’s August 2016 memorandum also revised the definition of a physical data center to include any room with at least one server that provides services (such as testing and development). Further, OMB’s guidance directed agencies to categorize their data centers as either a tiered data center or a non-tiered data center. OMB guidance defines a tiered data center as one that uses each of the following: a separate physical space for IT infrastructure, an uninterruptible power supply, a dedicated cooling system or zone, and a backup power generator for a prolonged power outage. According to OMB, all other data centers shall be considered non-tiered. 
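The tiered/non-tiered distinction above reduces to an all-or-nothing check on the four listed attributes. A minimal sketch of that classification rule (illustrative code, not an OMB artifact):

```python
# Sketch of OMB's August 2016 classification: a data center counts as "tiered"
# only if it has all four attributes listed in the guidance; otherwise it is
# "non-tiered".

def classify_data_center(separate_it_space, uninterruptible_power,
                         dedicated_cooling, backup_generator):
    criteria = [separate_it_space, uninterruptible_power,
                dedicated_cooling, backup_generator]
    return "tiered" if all(criteria) else "non-tiered"

# A facility with all four attributes:
print(classify_data_center(True, True, True, True))     # tiered
# A server closet with a UPS but no dedicated cooling or backup generator:
print(classify_data_center(False, True, False, False))  # non-tiered
```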
Regarding data center optimization planning, the memorandum directs agencies to develop DCOI strategic plans that define their data center strategies for fiscal years 2016 through 2018. Among other things, this strategy is to include a timeline for agency consolidation and optimization activities with an emphasis on cost savings and optimization performance benchmarks the agency can achieve between fiscal years 2016 and 2018. For example, agencies are required to establish planned data center optimization milestones and report on progress toward achieving those milestones in their strategic plans. OMB required agencies to publicly post the plans to their agency-owned digital strategy websites by September 30, 2016, and to post subsequent strategic plan updates by April 14, 2017, and April 13, 2018. OMB also directed agencies to update their publicly available FITARA implementation milestone information to identify, at a minimum, five milestones per fiscal year to be achieved through DCOI. According to OMB, the DCOI milestones are expected to be updated quarterly as progress is achieved and are to be reviewed in quarterly meetings with OMB staff. Further, the memorandum states that OMB will report government-wide and agency-specific progress on the IT Dashboard—a public website that provides detailed information on major IT investments. According to OMB, this progress information is to include planned and achieved data center closures, consolidation-related costs savings, and data center optimization performance information. In this regard, OMB began including data center consolidation and optimization progress information on the Dashboard in August 2016. Moreover, OMB guidance includes a series of performance metrics in the areas of data center closures, cost savings, and optimization progress. 
Data center closures: Agencies are expected to close at least 25 percent of tiered data centers government-wide, excluding those approved as inter-agency shared services providers, by the end of fiscal year 2018. Further, agencies are to close at least 60 percent of non-tiered data centers government-wide by the end of fiscal year 2018. OMB’s guidance further notes that, in the long term, all agencies should continually strive to close all non-tiered data centers, noting that server rooms and closets pose security risks and management challenges and are an inefficient use of resources. Cost savings: Agencies are expected to reduce government-wide annual costs attributable to physical data centers by at least 25 percent, resulting in savings of at least $2.7 billion, by the end of fiscal year 2018. Data center optimization: Agencies are expected to measure progress against a series of new data center performance metrics in the areas of server utilization, energy metering, power usage, facility utilization, and virtualization. Further, OMB’s guidance establishes target values for each metric that agencies are to achieve by fiscal year 2018. To improve the measurement of data center optimization progress, OMB’s memorandum directs agencies to replace the manual collection and reporting of systems, software, and hardware inventory housed within data centers with automated monitoring, inventory, and management tools (e.g., data center infrastructure management) by the end of fiscal year 2018. According to OMB, these data center tools (henceforth referred to as “automated monitoring tools”) are to provide the capability to, at a minimum, measure progress toward server utilization and virtualization metrics. While implementation of automated monitoring tools is not required to be completed until the end of fiscal year 2018, the memorandum strongly encourages agencies to implement them throughout their data centers immediately. 
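The fiscal year 2018 closure targets above are straightforward percentages. The sketch below applies them to illustrative inventory counts; the percentages come from OMB's guidance, but the facility counts are placeholders, not reported agency data.

```python
# DCOI fiscal year 2018 closure targets per OMB's August 2016 guidance: close
# at least 25% of tiered and at least 60% of non-tiered data centers.
# Facility counts passed in are illustrative placeholders only.
import math

def closure_targets(tiered, non_tiered):
    return {
        "tiered_to_close": math.ceil(tiered * 0.25),          # "at least" 25%
        "non_tiered_to_close": math.ceil(non_tiered * 0.60),  # "at least" 60%
    }

targets = closure_targets(tiered=1000, non_tiered=8000)
print(targets)  # {'tiered_to_close': 250, 'non_tiered_to_close': 4800}
```

Rounding up with `math.ceil` reflects the "at least" wording: closing a fractional facility is impossible, so the minimum whole-facility count that satisfies the percentage is used.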
While OMB is primarily responsible for DCOI, its August 2016 memorandum designated the General Services Administration’s (GSA) Office of Government-wide Policy as a managing partner of the federal government data center line of business and data center shared services. More specifically, OMB’s memorandum states that this office is responsible for, among other things, providing guidance on technology advancements, innovation, cybersecurity, and best practices to data center providers and consumers of data center services. Further, the memorandum states that the office is responsible for assisting with creating and maintaining an inventory of acquisition tools and products related to data center optimization, including procurement vehicles for the acquisition of automated monitoring tools. From July 2011 through May 2017, we issued a number of reports and testified on agency efforts to consolidate and optimize federal data centers and achieve cost savings. For example, in September 2014, we reported that, while agencies had made progress on their consolidation efforts, the total number of data centers reported by agencies had continued to grow since 2011 as a result of OMB’s expanded definition and improved inventory reporting. More specifically, we determined that agencies had collectively reported 9,658 data centers in their inventories—an increase of about 6,500 compared to OMB’s previous estimate from December 2011. We noted that agencies had plans to close about 3,700 data centers by September 2015. We also reported that 19 of the 24 FDCCI agencies had collectively reported achieving an estimated $1.1 billion in cost savings for fiscal years 2011 through 2013, and that, by 2017, that figure was estimated to rise to about $5.3 billion. 
However, we pointed out that planned savings may be higher because 6 agencies—the Departments of Health and Human Services (HHS), Interior (Interior), Justice (Justice), and Labor (Labor), GSA, and the National Aeronautics and Space Administration (NASA)—that reported closing as many as 67 data centers had also reported limited or no savings. In addition, our 2014 report noted that 11 of the 21 agencies with planned cost savings had underreported their fiscal years 2012 through 2015 figures to OMB by approximately $2.2 billion. While several agencies noted communication issues as the reason for underreporting, others did not provide a reason. We concluded that, until agencies fully report their savings, the $5.3 billion in total savings would be understated. Further, we reported that OMB’s May 2014 data center optimization metrics did not address server utilization, even though OMB reported this to be as low as 5 percent across the federal government in 2009. We noted that, without this metric, OMB may lack important information on agencies’ progress. As a result, we recommended that it implement a metric for server utilization and assist six agencies in reporting their consolidation cost savings; we also recommended that agencies fully report their consolidation cost savings. OMB and the agencies to which we made recommendations generally agreed with them. OMB subsequently established a metric to measure agencies’ server utilization progress in its August 2016 memorandum. In March 2016, we reported that agencies had continued to make progress in their data center consolidation efforts. Specifically, we noted that agencies had reported closing 3,125 of the 10,584 total data centers as of November 2015. We further noted that 19 of the 24 agencies had reported achieving an estimated $2.8 billion in cost savings and avoidances from their data center consolidation and optimization effort for fiscal years 2011 through 2015. 
Agencies were also planning an additional $5.4 billion in cost savings and avoidances, for a total of approximately $8.2 billion, through fiscal year 2019. However, we noted that planned savings may be higher because 10 agencies that reported planned closures from fiscal years 2016 through 2018 had not fully developed their cost savings goals for these fiscal years. In addition, we reported that 22 agencies had made limited progress against OMB’s fiscal year 2015 data center optimization performance metrics, such as the utilization of data center facilities. Accordingly, we recommended that the agencies take actions to complete their cost savings targets and improve optimization progress. Most agencies agreed with the recommendations or had no comments. Finally, in May 2017, we reported that agencies continued to consolidate their data centers, including closing 4,388 of the 9,995 total data centers as of August 2016. Figure 2 provides a summary of the total number of data centers and closures reported from 1998 through August 2016. However, we pointed out that agency progress in achieving savings had slowed and planned goals had been reduced. Specifically, 18 of the 24 agencies had reported achieving an estimated $2.3 billion in cost savings and avoidances from their data center consolidation and optimization efforts from the start of fiscal year 2012 to August 2016, which was about $451 million less than the total amount of achieved cost savings and avoidances that agencies reported to us in November 2015. In addition, agencies’ total planned cost savings of about $633 million were more than $3.4 billion less compared to the amounts that agencies reported to us in November 2015, and more than $2.1 billion less than OMB’s fiscal year 2018 cost savings goal of $2.7 billion. Our May 2017 report also identified weaknesses in agencies’ DCOI strategic plans. 
Of the 23 agencies that submitted their strategic plans at the time of our review, 7—the Departments of Agriculture (Agriculture), Education (Education), Homeland Security (DHS), and Housing and Urban Development (HUD); GSA; the National Science Foundation (NSF); and the Office of Personnel Management (OPM)—had addressed all five required elements of a strategic plan, as identified by OMB (such as providing information related to data center closures and cost savings metrics). The remaining 16 agencies either partially met or did not meet the requirements. We also pointed out that there were inconsistencies in the reporting of cost savings in the strategic plans of 11 agencies. We concluded that, until agencies address the weaknesses in their DCOI strategic plans, they may be challenged in implementing the data center consolidation and optimization provisions of FITARA. Accordingly, we recommended that OMB improve its oversight of agencies’ DCOI strategic plans and their reporting of cost savings and avoidances. We also recommended that 17 agencies complete the missing elements in their strategic plans and that 11 agencies ensure the reporting of consistent cost savings and avoidance information to OMB. Twelve agencies agreed with our recommendations, 2 disagreed, and 11 did not state whether they agreed or disagreed. The 2 agencies that disagreed—HUD and the Nuclear Regulatory Commission (NRC)—asserted that they had submitted complete strategic plans. After further review, we agreed that HUD had provided a complete plan and removed our recommendation. However, we determined that NRC’s plan was still incomplete and maintained that our recommendation was appropriate. As mentioned earlier, FITARA required OMB to establish metrics to measure the optimization of data centers, including server efficiency, and ensure that agencies’ progress toward meeting the metrics is made publicly available.
Pursuant to FITARA, OMB’s August 2016 memorandum established a set of five data center optimization metrics intended to measure agencies’ progress in the areas of server utilization and automated monitoring, energy metering, power usage effectiveness, facility utilization, and virtualization. According to OMB, the server utilization and automated monitoring metric applies to agency-owned tiered and non-tiered data centers, while the four remaining metrics apply to agency-owned tiered centers only. OMB’s memorandum also established a target value for each of the five metrics, which agencies are expected to achieve by the end of fiscal year 2018. OMB measures agencies’ progress against the optimization targets using the agencies’ quarterly data center inventory submissions and publicly reports this progress information on its Dashboard. Table 1 provides a description of the data center optimization metrics and target values that agencies are expected to achieve by the end of fiscal year 2018. As of February 2017, 22 of the 24 DCOI agencies reported limited progress against OMB’s fiscal year 2018 data center optimization targets on the Dashboard. The remaining 2 agencies—Education and HUD—reported that they did not have any agency-owned data centers in their inventory and, therefore, did not have a basis to measure and report optimization progress. With regard to the data center optimization targets, the most progress was reported for the power usage effectiveness and virtualization metrics, with 5 agencies reporting that they had met OMB’s targets. However, 2 or fewer agencies reported meeting the target for energy metering, facility utilization, and server utilization and automated monitoring. Figure 3 summarizes the 24 agencies’ progress in meeting each optimization target, as of February 2017. Following the figure is a more detailed discussion of the progress of each of the 24 agencies.
Among the 24 agencies, SSA and EPA reported the most progress by meeting three targets, 20 reported meeting one or none of the targets, and the remaining 2 agencies did not have a basis to report on progress because they did not have any agency-owned data centers. Of the 22 agencies reporting progress information, 9 were not able to report progress against either the server utilization metric or power usage effectiveness metric, or both, because they lacked the required monitoring tools to measure progress in these areas. OMB began requiring the implementation of these monitoring tools in August 2016; however, as of February 2017, these 9 agencies were not yet reporting implementation of the tools at any of their data centers. This issue is discussed in greater detail later in this report. Table 2 lists the agencies that met or did not meet each OMB target. Agencies’ limited progress against OMB’s optimization targets is due, in part, to their not fully addressing our prior recommendations in this area. As noted earlier, in March 2016, we reported on weaknesses in agencies’ data center optimization efforts, including that 22 agencies did not meet OMB’s fiscal year 2015 optimization targets. We noted that this was partially due to the agencies facing challenges in optimizing their data centers, including their decentralized organizational structures that made consolidation and optimization difficult and competing priorities for resources. In addition, consolidating certain data centers was problematic because the volume or type of information involved required the data center to be close in proximity to the users. Accordingly, we recommended that the agencies take action to improve optimization progress, to include addressing any identified challenges. Most agencies agreed with our recommendations or had no comments.
In response to our recommendation, 19 of the 22 agencies submitted corrective action plans to us that described steps they intended to take to improve their data center optimization efforts. Among these steps were developing internal scorecards to track and report on optimization progress, including progress at their component agencies, and launching more aggressive efforts to optimize data centers using virtualization and cloud computing solutions. While 2 of the 22 agencies—Education and HUD—are no longer subject to OMB’s optimization metrics based on OMB’s August 2016 memorandum and their current data center inventory, none of the remaining 20 agencies had fully addressed our recommendation as of May 2017. The importance of overcoming optimization challenges and addressing our prior recommendations is critical to the ability of agencies to implement the data center optimization provisions of FITARA and achieve OMB’s fiscal year 2018 optimization targets. Going forward, it will be important for the 19 agencies that have established corrective action plans to continue to execute them and monitor the impact of actions completed on their optimization progress. Until agencies fully implement our prior recommendations to address their challenges and improve optimization progress, they may be hindered in implementing the data center optimization provisions of FITARA and OMB guidance intended to increase operational efficiency and achieve cost savings. Further, OMB may be challenged in demonstrating that DCOI is meeting its established objectives. In addition to reporting current optimization progress on the Dashboard, OMB requires agencies’ DCOI strategic plans to include, among other things, planned performance levels for fiscal years 2017 and 2018 for each optimization metric. However, according to the 24 agencies’ DCOI strategic plan information as of April 2017, most are not planning to meet OMB’s optimization targets by the end of fiscal year 2018.
More specifically, of the 24 agencies, 5—the Department of Commerce (Commerce), EPA, NSF, the Small Business Administration (SBA), and the U.S. Agency for International Development (USAID)—reported plans to fully meet their applicable targets by the end of fiscal year 2018; 13 reported plans to meet some, but not all, of the targets; 4 reported that they do not plan to meet any targets; and 2 do not have a basis to report planned optimization milestones because they do not report having any agency-owned data centers. Figure 4 summarizes agencies’ progress in meeting OMB’s optimization targets as of February 2017, and planned progress to be achieved by September 2017 and September 2018, as of April 2017. Agencies’ reported plans to meet the optimization targets also vary by metric. Specifically, about half of the 22 agencies reported plans to meet the facility utilization and virtualization metrics by the end of fiscal year 2018, while less than half are planning to meet the server utilization and automated monitoring, energy metering, and power usage effectiveness metrics. Further, agencies reported that they plan to make the least amount of progress in meeting the target for power usage effectiveness. Figure 5 provides a summary, by optimization metric, of agencies’ current progress in meeting the targets as of February 2017, and planned progress to be achieved by September 2017 and September 2018, as of April 2017. The limited progress made by agencies in optimizing their data centers, combined with the lack of established plans to improve progress, makes it unclear whether agencies will be able to achieve OMB’s optimization targets by the end of fiscal year 2018. Considering that OMB is expecting at least $2.7 billion in cost savings from agencies’ optimization efforts, the ability of agencies to meet the optimization targets will be critical to meeting this savings goal.
However, with less than 2 years remaining until OMB’s fiscal year 2018 DCOI optimization target deadline and the expiration of the data center consolidation and optimization provisions of FITARA in October 2018, only five agencies are planning to meet all of their applicable targets. With the majority of agencies not planning to meet the optimization targets, there is an increased likelihood that agencies will need more time beyond 2018 to continue to implement their optimization efforts. Extending the data center consolidation and optimization provisions of FITARA beyond the current October 2018 horizon could provide agencies with additional time to realize the benefits of optimization, including cost savings. The 24 DCOI agencies reported successes in optimizing their data centers—notably, the benefits of key technologies, such as virtualizing systems to improve performance, and increased energy efficiency. However, agencies also reported operational, technical, and financial challenges related to, for example, improving the utilization of their data center facilities, measuring server utilization, and obtaining funding within their agency for optimization efforts. It will be important for agencies to take action to address their identified challenges—as we previously recommended—in order to improve data center optimization progress. Agencies reported a variety of successes in optimizing data centers. Specifically, the 24 agencies reported a total of 23 areas of success. Eight areas of success were identified by three or more agencies, with the most reported successes for an area being identified by 17 agencies. The two most reported areas of success—implementing virtualization technologies and migrating IT applications and services to cloud computing solutions—were similar to the top reported success in achieving consolidation cost savings that we identified in 2014 (i.e., focusing on virtualization and cloud services as consolidation solutions).
Agencies are also continuing to report successes in other areas that we highlighted in 2014, including improved energy efficiency, standardized technology, and improved data center inventory reporting. Table 3 details the reported areas of success, as well as the number of related agencies. The most common areas of success are further discussed after the table. Seventeen agencies reported that implementing virtualization technologies (i.e., running multiple, software-based machines with different operating systems on the same physical machine) has proven successful in optimizing their data centers. For example, officials from Commerce’s Office of the CIO stated that the department had made the most notable optimization progress in virtualizing all non-high performance computing servers, including approximately 11,700 operating systems (as of October 2016). Additionally, officials from Labor’s Office of the CIO noted that virtualization had helped the department create a highly efficient, lower cost, common operating environment suitable for hosting mission-critical applications and services. The officials added that the department expects to significantly increase its migration activity and closures in fiscal years 2017 and 2018 by leveraging the portability of this highly virtualized environment. As another example, officials from GSA’s IT office stated that the agency has achieved success in retiring older physical systems and shifting to newer, virtualized technologies. The officials stated that these actions have contributed to greater flexibility, stability, and redundancy in the agency’s IT capabilities. Further, officials from NRC’s Office of the CIO stated that their agency had virtualized 72 percent of its servers, which allowed the agency to significantly reduce the amount of old, outdated, and energy-inefficient equipment. 
Thirteen other agencies also stated that implementing virtualization technologies had led to successes in optimizing their data centers. Thirteen agencies reported that migrating IT applications and services to cloud computing solutions had led to successes in optimizing their data centers. For example, officials from HHS’s Office of the CIO stated that one of the department’s offices had realized substantial value with the use of cloud-provided solutions, including reducing the cost of data center services by approximately 15 percent compared to government and on-premises data centers. Additionally, officials from NSF’s Office of Information and Resource Management stated that the agency successfully reduced and streamlined its IT footprint through a number of different efforts, such as migration of applications, e-mail, and instant messaging to cloud providers; networking technology standardization; and server and storage consolidation. As another example, officials from USAID’s Office of the CIO stated that in 2011 the agency transformed and migrated its primary data center to a private infrastructure cloud provider, thereby eliminating physical infrastructure issues (e.g., power, heating, ventilation, air conditioning, and physical security issues). The officials added that the cloud solution provided the data center infrastructure, network access, connectivity, and other services needed to ensure the delivery of critical business services. Further, officials from Interior’s Office of the CIO stated that the department had migrated 70,000 users off of 14 legacy e-mail systems to a single department-wide cloud-based e-mail communications and collaboration system. Nine other agencies also stated that migrating to cloud computing solutions led to successes in optimizing their data centers. Their reported successes ranged from migrating e-mail applications to cloud solutions to responding more quickly to shifts in user demand.
Five agencies reported that increasing their energy efficiency had led to success in optimizing their data centers. For example, officials from the Department of State’s (State) Bureau of Information Resource Management noted that the department has had success in deploying modular data centers that utilize energy-efficient power systems and other optimized operating features that help to reduce the department’s carbon footprint. Further, a program manager from the SSA’s Office of Hardware Engineering stated that the agency had improved its energy efficiency and reduced its carbon footprint through various initiatives including, among other things, rainwater reclamation, improved monitoring of IT equipment power usage, energy-efficient lighting, and the use of solar panels. Figure 6 shows the use of solar panels at the SSA’s National Support Center. Additionally, officials from EPA’s Office of Environmental Information stated that the agency had success in improving energy efficiency through the purchase of energy-efficient IT equipment and by including energy metering in data center facilities planning and buildout to assist with validating energy optimization metrics. The officials also noted that the agency had increased the operating temperature in some data centers as well as used alternate methods of cooling (e.g., outdoor air to cool its data centers), which helped the agency improve its energy efficiency. Officials from Commerce and HHS further stated that increasing their energy efficiency by, for example, purchasing energy-efficient equipment and deploying power monitoring equipment, had led to successes in optimizing their data centers. Agencies also reported facing a variety of challenges in optimizing their data centers. Specifically, the 24 DCOI agencies identified a total of 27 types of challenges across three areas: operational, technical, and financial.
The highest number of challenges were reported in the operational and technical areas, which included improving data center facility utilization and measuring and reporting on server utilization. Certain challenges reported were similar to those described to us by agencies in 2016, including those related to competing priorities for labor resources and closing data centers that provide mission critical applications that require proximity to users. Agencies also continued to report operational, technical, and financial challenges that were similar to those described to us in 2014, including gathering data from component agencies, determining power usage information, and obtaining funding from within their agency. For example, in 2014, six agencies noted that gathering data from component agencies was an operational challenge to achieving consolidation cost savings; however, only two agencies are now reporting that as a challenge in optimizing their data centers. Agencies also cited many new challenges that are specific to optimizing their data centers, such as incorporating enterprise-wide efficiencies when data centers are owned and managed by multiple organizations and the significant upfront costs required to purchase data center monitoring tools. Table 4 details the reported challenges in optimizing data centers, as well as the number of related agencies. The most common challenges are further discussed after the table. Agencies reported the most operational challenges in the following areas: improving data center facility utilization; competing priorities for labor resources with other agency IT efforts; shifting definitions of a data center and changes to data center optimization requirements; and incorporating enterprise-wide efficiencies when data centers are owned and managed by multiple organizations. Improving data center facility utilization: Nine agencies cited this challenge. 
For example, officials from the Department of Veterans Affairs’ (VA) Infrastructure Operations stated that increasing virtualization generally reduces the number of active server racks in the space and, therefore, decreases facility utilization. The officials added that, for smaller rooms that are part of a larger, agency-owned, multi-functional facility, reducing the size of the room is most often not an economical decision, as it does not lead to energy savings or reduced facility costs but, instead, moves the recurring cost of the space from IT to other functions. Officials from DHS, Interior, Labor, and NRC also reported that their increased use of virtualization has negatively impacted their ability to increase facility utilization. As another example of a challenge in improving facility utilization, officials from Commerce’s Office of the CIO stated that the National Oceanic and Atmospheric Administration’s weather field office data centers contain systems that are proprietary and connect to local weather sensing instruments or satellite communication equipment. The officials said that most of these data centers are averaging only 50 percent facility utilization and have no plans to increase, but are difficult to close because they contain systems designed specifically for the agency’s mission. Officials from Agriculture, GSA, NASA, and the Department of the Treasury (Treasury) also cited challenges in improving facility utilization. Competing priorities for labor resources with other agency IT efforts: Six agencies cited this challenge. For example, officials from Commerce’s Office of the CIO stated the Census Bureau is ramping up for a very large program—the 2020 Decennial Census—while also working to optimize its data centers. This has led to challenges in implementing data center infrastructure management tools and replacing old power distribution units with new ones. 
As another example, officials from EPA’s Office of Environmental Information noted that IT personnel are primarily focused on day-to-day operations and maintenance activities and, therefore, resources normally used to support data center activities are periodically pulled away to address more immediate operational activities (such as cybersecurity initiatives). In addition, SBA’s CIO stated that the biggest challenge faced by the agency is a lack of labor resources, which has historically been due to a focus on mission priorities instead of data center improvements. Officials from GSA, Labor, and Transportation also stated that competing priorities for labor resources have been a challenge to optimizing their data centers. Shifting definition of a data center and changes to data center optimization requirements: Five agencies cited this challenge. For example, officials from Interior’s Office of the CIO stated that significant changes outlined in OMB’s August 2016 memorandum and previously issued guidance related to the definitions of a data center and optimization metrics presented challenges in maintaining inventories, measuring progress, and assessing cost savings and avoidances. As another example, officials from Energy’s Office of the CIO stated that Energy’s unique computing environments, which support scientific research, facility and plant operations, power management, and mission-specific computing, make aligning with OMB’s data center definition difficult. In addition, officials from DHS’s Office of the CIO stated that OMB’s recent changes to the data center optimization metrics, including the focus on agency-owned data centers, greatly impacted the department’s ability to report on optimization progress.
More specifically, the officials stated that OMB’s prior optimization metrics focused on the department’s three core data centers (i.e., primary consolidation points); however, under OMB’s new metrics, the department’s core data centers are no longer applicable to the metrics because they are not agency-owned. The officials added that this negatively impacted the department’s ability to report optimization progress related to power usage effectiveness. Officials from GSA and NRC also cited challenges related to the change in the definition of a data center and data center optimization requirements. Incorporating enterprise-wide efficiencies for data centers owned and managed by multiple organizations: Five agencies identified this challenge. For example, officials from NASA’s Office of the CIO stated that, historically, the agency’s data centers have been owned and managed by multiple organizations, including contractors, which has made it challenging to incorporate enterprise-enabled efficiencies (i.e., common procurements, implementation of standard hardware, software, and management tools). The officials also mentioned that the extensive use of data centers collocated within multi-use buildings, with shared electrical and mechanical infrastructure, has resulted in the agency not realizing the magnitude of savings that would be attributed to the closure of stand-alone data center facilities. As another example, officials in Energy’s Office of the CIO stated that the implementation of optimization solutions in data centers that are mission and research specific, or have unique operational and environmental requirements, has presented operational challenges. In addition, officials from Justice’s Office of the CIO stated that implementing enterprise solutions across a large and traditionally federated organization has been challenging. 
Officials from DHS and Labor also cited challenges with improving optimization at data centers that are owned and managed by multiple organizations. Agencies reported the most technical challenges in the following areas: measuring and reporting on server utilization progress, a lack of electricity metering to determine power usage information, and poor network connectivity and low bandwidth at field locations constraining consolidation and optimization efforts. Measuring and reporting on server utilization progress: Nine agencies cited this challenge. For example, officials from VA’s Infrastructure Operations cited challenges with the complexity of programming the tools needed to collect the data to measure server utilization. In particular, the officials noted issues in delineating what data should be collected to determine server “busy” and “idle” times (e.g., computer processing unit usage, power consumption, or other data) and what unit of time to associate with the data collection (i.e., seconds, minutes, hours, etc.) in order to be able to report on the server utilization metric. As another example, officials from Justice’s Office of the CIO stated that optimizing the server utilization of department data centers that are consolidation points will be extremely difficult because the environments are going through significant changes as they receive servers from other locations. In addition, officials from Treasury’s Office of the CIO stated that, while the department’s servers have the ability to measure and monitor processing usage, most data centers do not have the ability to centrally aggregate and report on that data. Officials from Agriculture, the Department of Defense (Defense), Labor, NASA, OPM, and Treasury also cited challenges in measuring and reporting on server utilization progress. Lack of electricity metering to determine power usage information: Seven agencies identified this challenge. 
For example, officials from Commerce’s Office of the CIO stated that many of the department’s data centers are small and lack separate power metering. The officials added that, rather than adding power monitoring to each small data center, the department needs to conduct further research to evaluate whether consolidation of these unmetered data centers into a few larger well-maintained data centers is more cost effective. As another example, officials from Labor’s Office of the CIO stated that a vast majority of the department’s data centers are in modified office spaces that also serve other purposes, such as accommodating the storage of legacy IT assets and providing a workspace for IT support personnel, which has made the installation of power metering challenging. Further, officials from VA’s Infrastructure Operations stated that the department’s individual data centers are largely unique, thus requiring detailed engineering to determine how to retrofit energy metering solutions to provide the data necessary for energy usage optimization, particularly without incurring critical IT system downtime. The officials added that the majority of the department’s data centers are not stand-alone data centers, but rather, are rooms within a medical center facility or other multi-purpose facility that were not constructed to facilitate power metering. VA officials stated that these challenges made measuring power usage effectiveness extremely complicated, time-consuming, and costly. Agriculture, Labor, OPM, and SBA also mentioned challenges related to the lack of electricity metering to determine power usage information. Poor network connectivity and low bandwidth at field locations constrains consolidation and optimization efforts: Five agencies cited this challenge. 
For example, officials from Interior's Office of the CIO stated that numerous remote field offices within the department experience poor network connectivity and low bandwidth to support running remotely-hosted applications. The officials added that the risk of reduced service levels at these remote locations is frequently cited as a constraint on consolidation and a challenge to improving optimization progress. As another example, Transportation's Office of the CIO noted challenges with consolidating field site servers because the telecommunication bandwidth to the field sites is lacking. Officials from HHS, Labor, and SBA also cited concerns about connectivity performance issues as a challenge to consolidation and optimization of data centers at their field office locations.

Agencies reported three financial challenges in the following areas: obtaining the funding within their agency for optimization efforts, the upfront costs required to purchase the monitoring tools needed to measure optimization progress, and determining the resulting cost savings and avoidances.

Obtaining the funding within their agency for optimization efforts: Ten agencies cited this challenge. For example, officials from OPM's Office of the CIO stated that while the agency's base budget includes ongoing operations and maintenance funding for the agency's existing data centers, the availability of financial resources during fiscal years 2017 and 2018 would be one of the most significant challenges to improving data center optimization performance and satisfying DCOI requirements. Further, officials from Justice's Office of the CIO stated that financial constraints may limit the funding available for migration of component infrastructure to cloud computing services or the department's core enterprise facilities, which could delay or prevent optimization.
As another example, officials from Defense's Office of the CIO stated that resource constraints to support application and system rationalization, re-engineering, and migration forced many component agencies to focus on physical relocations of systems, which limit data center optimization opportunities and savings. Officials from Commerce, DHS, Energy, HHS, Labor, SBA, and Transportation also cited challenges in obtaining the funding within their agency for optimization efforts.

Significant upfront costs required to purchase the monitoring tools needed to measure optimization progress: Eight agencies cited this challenge. For example, officials from Treasury's Office of the CIO stated that the department is currently in the process of evaluating how to most effectively meet data center power metering requirements without incurring significant expenditures. The officials stated that several of their larger data centers are in older, multi-use buildings and share a cooling infrastructure with the entire building. The officials added that measuring the energy consumed by the portions of the building dedicated to hosting IT equipment would require meters to be installed within just those spaces dedicated to IT, which is a significant cost that is being evaluated relative to other mission-oriented investments. As another example, officials from Interior's Office of the CIO stated that the investment for purchasing data center optimization tools would require a reallocation of funds from the department's fiscal years 2017 and 2018 budgets and would have an adverse effect on meeting other higher priority requirements, such as cybersecurity requirements. Further, the officials stated that purchasing and deploying energy metering tools in the department's smaller data centers would result in a negative return on investment.
In addition, officials from VA's Infrastructure Operations stated that the department's data centers are largely unique and require detailed engineering to determine how to retrofit metering solutions to provide data necessary for energy usage optimization, which has not yet been funded. Officials from Agriculture, Commerce, Defense, GSA, and State also cited significant upfront costs of data center monitoring tools as a challenge.

Determining the resulting cost savings and avoidances from consolidation and optimization efforts: Five agencies identified this challenge. For example, officials from NASA's Office of the CIO stated that, due to their extensive use of data centers collocated within multi-use buildings with shared electrical and mechanical infrastructure, the agency has not realized the magnitude of savings that would be attributed to the closure of stand-alone data center facilities. The officials added that, in most instances, the closed data center spaces have been locally repurposed for non-IT use. As another example, officials from Agriculture's Office of the CIO stated that it can be difficult to determine facility costs and the resulting cost savings and avoidances. The officials noted that data centers located within government-owned or leased buildings usually do not pay for electricity, heating and air conditioning expenses, or lease and facility upkeep costs, which can present challenges in calculating any cost savings and avoidances from optimization. Officials from GSA, Interior, and Treasury also cited challenges in determining the resulting cost savings and avoidances from their consolidation and optimization efforts.

Addressing these optimization challenges and others—as we previously recommended in 2016—is increasingly important in light of FITARA's requirements, which direct agencies to establish a multi-year strategic plan to improve data center optimization progress.
Until agencies address these challenges, they could be hindered in the implementation of their data center optimization strategic plans and in making initiative-wide progress against OMB's optimization targets.

As noted earlier, FITARA required OMB to establish data center consolidation and optimization metrics, including a metric specific to measuring server efficiency; it also required agencies to report on progress in meeting the metrics. Pursuant to FITARA, OMB's August 2016 memorandum required agencies to measure and report on server utilization progress, including the number of agency-owned data centers fully equipped with automated monitoring tools and their server utilization percentages. To effectively measure progress against this metric, OMB's memorandum also directed agencies to immediately begin replacing the manual collection and reporting of systems, software, and hardware inventory housed within agency-owned data centers with automated monitoring tools and to complete this effort no later than the end of fiscal year 2018. Agencies are required to report progress in implementing automated monitoring tools and server utilization averages at each data center as part of their quarterly data center inventory reporting to OMB. Finally, standards for internal control emphasize the need for federal agencies to establish plans to help ensure goals and objectives can be met, including compliance with applicable laws and regulations.

As of February 2017, 4 of the 22 agencies reporting agency-owned data centers in their inventory—NASA, NSF, SSA, and USAID—reported that they had implemented automated monitoring tools at all of their data centers. Further, 10 reported that they had implemented automated monitoring tools at between 1 and 57 percent of their centers, and 8 had not yet begun to report the implementation of these tools.
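OMB's memorandum leaves the mechanics of collecting utilization data to agencies' automated monitoring tools. As a rough illustration only (the server names, readings, and averaging approach below are hypothetical assumptions, not a method prescribed by OMB or used by any agency in this report), a data center's server utilization average can be derived from periodic per-server CPU samples:

```python
from statistics import mean

def utilization_average(samples):
    """Average utilization (percent) from periodic monitoring samples for one server."""
    return mean(samples)

def data_center_utilization(servers):
    """Server utilization average for one data center: the mean of each
    monitored server's average utilization, as a percentage."""
    return mean(utilization_average(s) for s in servers.values())

# Hypothetical percent-busy readings from three monitored servers.
servers = {
    "srv-01": [12.0, 18.0, 15.0],  # per-server average: 15.0
    "srv-02": [40.0, 50.0, 60.0],  # per-server average: 50.0
    "srv-03": [5.0, 10.0, 15.0],   # per-server average: 10.0
}
print(round(data_center_utilization(servers), 1))  # prints 25.0
```

The sketch also suggests why agencies cited measurement as a challenge: the result depends on choices (sampling interval, what counts as "busy") that the tools, not the metric definition, determine.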
In total, the 22 agencies reported that automated tools were implemented at 123 (or about 3 percent) of the 4,528 total agency-owned data centers, while the remaining 4,405 (or about 97 percent) of these data centers were not reported as having these tools implemented. Table 5 provides a listing of the number and related percentage of agency-owned data centers reported by agencies as having automated monitoring tools implemented. Of the 123 data centers reported as having automated monitoring tools implemented, 59 were identified as tiered data centers and 64 as non-tiered data centers. Figure 7 summarizes the number of agency-owned data centers reported with automated monitoring tools installed, including the number of tiered and non-tiered centers.

The limited implementation of automated monitoring tools resulted in incomplete information on server utilization percentages. As noted earlier, OMB's IT Dashboard is used to publicly report on agencies' progress in measuring server utilization. This progress information is obtained from agencies' quarterly data center inventory submissions, which are required to include detailed data on the server utilization averages of each tiered and non-tiered data center. Based on agencies' February 2017 data center inventory data, 4 of the 22 agencies reported a server utilization average for all of their monitored tiered and non-tiered data centers, 10 reported server utilization averages at a portion of their centers, and 8 did not report this information. SSA reported the highest server utilization average of 100 percent at its one agency-owned tiered data center, while GSA reported the lowest percentage of 9 percent across its 31 agency-owned tiered and non-tiered centers with automated monitoring tools installed.
According to our analysis of agencies' inventory data, the average server utilization across all 123 data centers with automated monitoring tools installed was about 28 percent, approximately 37 percentage points below OMB's fiscal year 2018 goal of 65 percent or higher. Figure 8 shows the agency-reported server utilization averages for the 4 agencies that reported this information at all their data centers and the 10 agencies that reported this information at a portion of their centers, as well as the percentage of their agency-owned data centers with automated monitoring tools installed.

For the 18 agencies that did not report server utilization average information at all their data centers, none fully documented plans to implement the automated monitoring tools required to measure this information at all their agency-owned tiered and non-tiered centers by the end of fiscal year 2018. More specifically, our analysis of agencies' DCOI strategic plans, FITARA implementation milestones, and other documentation (such as project plans and charters) showed that 6 of the 18 agencies—Agriculture, Energy, EPA, GSA, State, and VA—partially documented plans because they addressed implementing automated monitoring tools for only a portion of their data centers. However, these agencies did not address implementing such tools at all tiered and non-tiered agency-owned data centers, as required by OMB. The remaining 12 agencies did not document plans to implement automated monitoring tools. Table 5 provides an assessment of agencies' documented plans to implement data center automated monitoring tools.

The 18 agencies provided a variety of reasons regarding why they had not established a plan to implement automated monitoring tools at all agency-owned data centers. For example, officials at six agencies (Defense, DHS, EPA, GSA, Labor, and Justice) stated that they were in the process of establishing a plan to implement automated monitoring tools, but had not yet completed it.
As another example, agency officials from State's Bureau of Information Resource Management and NRC's Office of the CIO noted that they were still evaluating options for purchasing and deploying these tools. Further, officials from OPM's Office of the CIO and Transportation's Office of the CIO stated that they were still determining the extent to which their data centers had automated monitoring tools installed. Lastly, officials from Commerce's Office of the CIO stated the department had no specific plans to invest in automated monitoring tools.

The lack of detailed plans to implement automated monitoring tools at all agency-owned data centers is also due, in part, to OMB not having established a formal requirement to document such plans. Although OMB's August 2016 memorandum required agencies to submit a DCOI strategic plan by September 30, 2016, and to update it by April 14, 2017, these plans were not required to include detailed information describing how the agency was planning to meet OMB's requirement to implement automated monitoring tools at all agency-owned tiered and non-tiered centers. Recognizing this issue, OMB staff from the Office of the Federal CIO stated that they have been advising agencies to include these more detailed plans and milestones for implementing data center automated monitoring tools as part of their publicly available FITARA implementation milestones. However, OMB has not established a formal requirement in its data center guidance or FITARA implementation guidance provided to agencies. As mentioned previously, our analysis of agencies' FITARA implementation milestones showed that most agencies were not aware of OMB's request to include this information. Until OMB requires agencies to include detailed plans to implement automated monitoring tools in their FITARA implementation milestones, agencies may continue to lack a roadmap to meet a key DCOI goal.
Further, until agencies complete their plans, they may be challenged in implementing the tools needed to effectively measure server utilization—a data center optimization area highlighted in FITARA that we previously reported as critical to improving the efficiency, performance, and environmental footprint of federal data center activities.

With the August 2016 launch of DCOI, OMB took a considerable step forward in providing guidance for the implementation of the data center consolidation and optimization requirements of FITARA and increasing the oversight of agencies' efforts to optimize their data centers. OMB's fiscal year 2018 optimization targets provide clear and transparent goals for agencies' optimization efforts; however, agencies reported limited progress against those targets. Additionally, although agencies' DCOI strategic plans provide a mechanism for agencies to report planned fiscal years 2017 and 2018 milestones toward achieving OMB's optimization targets, most agencies reported that they are not planning to meet OMB's targets by the end of fiscal year 2018. Considering that OMB established a DCOI-wide savings goal of $2.7 billion, the ability of agencies to meet the optimization targets will be critical to achieving these savings. Extending the time frame for the agencies to meet the required data center consolidation and optimization provisions of FITARA beyond October 2018 could provide agencies with additional time to achieve the benefits of optimization. In addition, agencies' implementation of our prior recommendations to address optimization challenges and improve progress could help ensure that they are better positioned to meet key DCOI goals.
As a result of OMB’s increased focus on data center optimization beginning in 2013 and its more recent efforts to launch DCOI, agencies have reported noteworthy successes in optimizing their data centers— particularly in leveraging virtualization and cloud computing as a means to optimize their data centers. These constructive experiences indicate that DCOI is moving in the right direction. However, as agencies work toward achieving OMB’s fiscal year 2018 optimization targets, many are reporting challenges related to improving data center facility utilization, measuring and reporting on server utilization progress, and obtaining the funding within their agency for optimization efforts. Such a dynamic environment reinforces the need for agencies to address their identified challenges— as we previously recommended—in order to improve data center optimization progress. OMB’s efforts to establish a metric to measure server utilization as part of its August 2016 memorandum were consistent with our 2014 recommendation and an important step toward ensuring that agency computing resources are being used more efficiently. Additionally, OMB’s requirement that agencies implement automated monitoring tools at their data centers by the end of fiscal year 2018 will help to ensure that they have the necessary foundation in place to effectively measure and report on server utilization progress. However, with agencies collectively reporting that these tools are only installed at about 3 percent of the total data centers and with 18 agencies lacking complete plans to implement these tools at their remaining data centers, significant work remains toward meeting OMB’s requirement. The lack of a formal OMB requirement to establish detailed plans in this area and report them to OMB further increases the likelihood that agencies will continue to lack them. 
In the absence of such a requirement and completed plans, agencies will be missing an important roadmap for implementing the automated monitoring tools needed to measure server utilization—an area that both we and OMB have reported as critical to improving the efficiency, performance, and environmental footprint of federal data center activities. Moreover, with automated monitoring tools not required by OMB to be fully implemented by agencies until the end of fiscal year 2018, extending the time frame of FITARA’s data center consolidation and optimization provisions could also better ensure that server utilization is effectively measured and reported beyond fiscal year 2018, after the necessary monitoring tools are implemented. As most agencies lack plans to meet OMB’s data center optimization targets by the end of fiscal year 2018, it is increasingly likely that these agencies will require additional time to achieve the data center consolidation and optimization goals required by FITARA and OMB guidance. In order to provide agencies with additional time to meet OMB’s data center optimization targets and achieve the related cost savings, Congress should consider extending the time frame for the data center consolidation and optimization provisions of FITARA beyond their current expiration date of October 1, 2018. To better ensure that agencies complete important DCOI planning documentation and that the initiative improves governmental efficiency and achieves intended cost savings, we are recommending that the Director of OMB direct the Federal CIO to formally document a requirement for agencies to include plans, as part of existing OMB reporting mechanisms, to implement automated monitoring tools at their agency-owned data centers. 
We are also recommending that the Secretaries of Agriculture, Commerce, Defense, Homeland Security, Energy, HHS, Interior, Labor, State, Transportation, Treasury, and VA; the Attorney General of the United States; the Administrators of EPA, GSA, and SBA; the Director of OPM; and the Chairman of NRC take action to, within existing OMB reporting mechanisms, complete plans describing how the agency will achieve OMB’s requirement to implement automated monitoring tools at all agency-owned data centers by the end of fiscal year 2018. We received comments on a draft of this report from OMB and the 24 agencies that we reviewed. Of the 19 agencies to which we made recommendations, 10 agencies agreed with our recommendations, 3 (Defense, Interior, and OPM) partially agreed, and 6 (including OMB) did not state whether they agreed or disagreed. In addition, 6 agencies to which we did not make recommendations stated that they had no comments. Multiple agencies also provided technical comments, which we have incorporated as appropriate. The following discusses the comments from each agency to which we made a recommendation. In an e-mail received on July 7, 2017, a staff member from OMB’s Office of General Counsel stated that the agency had no comments on the draft report. The staff member did not state whether the agency agreed or disagreed with our recommendation. In an e-mail received on June 26, 2017, a senior advisor in the Department of Agriculture’s Office of the CIO did not state whether the department agreed or disagreed with our recommendation, but noted that the department understands that automated monitoring of server utilization and virtualization is critical to accurate data center performance and cost savings reporting. In written comments, Commerce stated that it agreed with our recommendation and described actions planned to implement it. 
Specifically, the department noted that, as part of its effort to consolidate, define, and establish a plan to deploy an enterprise-wide automated monitoring tool, it has identified two component agencies that will offer a data center infrastructure management tool as a service. The department added that this approach will allow it to monitor and report cost savings and avoidances more efficiently. Commerce’s comments are reprinted in appendix II. In written comments, Defense stated that it partially agreed with our recommendation. Specifically, the department stated that it recognizes the value of data center infrastructure management capabilities in realizing DCOI objectives and will endeavor to implement the capabilities as quickly as possible. However, the department noted that it will be unable to complete the implementation of data center infrastructure management capabilities by the end of fiscal year 2018, as we recommended. As obstacles to meeting this deadline, the department cited procurement regulations, resource challenges, the budget cycle, and remaining work to resolve the population of installation processing nodes, but did not offer further details. Our report specifically recognizes the challenges cited by agencies in the implementation of automated monitoring tools (i.e., data center infrastructure management capabilities), and notes the importance of detailed plans to overcome these challenges. Given the department’s own acknowledgment of facing implementation obstacles, a plan describing how it will implement these important monitoring tools could help overcome the challenges identified. Therefore, we continue to believe our recommendation is warranted. Defense’s comments are reprinted in appendix III. In written comments, Energy stated that the department concurred with our recommendation and described planned actions to implement it. 
Specifically, the department stated that it established plans to implement automated monitoring tools at its 78 department-owned tiered data centers and plans to evaluate whether its 68 department-owned non-tiered data centers should be consolidated or closed. For the non-tiered centers slated to remain open, the department stated that it expects to complete plans describing how it will automate server utilization by September 2019. Energy's comments are reprinted in appendix IV. In written comments, HHS stated that the department concurred with our recommendation and described planned actions to implement it. Specifically, the department stated that HHS will direct its operating and staff divisions to acquire and install automated monitoring tools in all agency-owned data centers by the close of fiscal year 2018. HHS's comments are reprinted in appendix V. In written comments, DHS stated that the department concurred with our recommendation and described planned actions to implement it. Specifically, the department stated that it is continually reviewing optimization alternatives, including evaluating the option to move to a cloud deployment model over the next few years. The department further noted that it does not expect to achieve the optimum solution in agency-owned tiered data centers by the end of fiscal year 2018, as we recommended, but agreed with our suggestion that the DCOI time frame be reconsidered. In addition, DHS stated that it expects to have an optimization plan that includes, among other things, resource requirements and a schedule to achieve monitoring compliance for agency-owned tiered data centers by April 2018. DHS's comments are reprinted in appendix VI. In written comments, Interior stated that the department partially concurred with our recommendation.
Specifically, the department stated that it is committed to completing its plan on schedule, but that its ability to meet OMB's requirement to implement automated monitoring tools at all department-owned data centers by the end of fiscal year 2018, as we recommended, will depend on many factors and variables, including the availability of funding and other resources. Because of the potential for improved efficiency and cost savings from data center optimization, as discussed in this report, we believe the department should devote the necessary resources to ensure that automated monitoring tools are installed at all department-owned data centers by the end of fiscal year 2018, as required by OMB. Therefore, in our view, the recommendation continues to be warranted. Interior's comments are reprinted in appendix VII. In an e-mail received on July 13, 2017, a Justice audit liaison stated that the department concurred with our recommendation. In written comments, Labor stated that the department accepted our recommendation and will incorporate pertinent information in its next data center consolidation and optimization strategic plan due in April 2018. Labor's comments are reprinted in appendix VIII. In written comments, State indicated that the department agreed with our recommendation and described completed and planned actions to address it. Specifically, the department stated that it performed an analysis of tools, including shared services and commercial off-the-shelf products. The department also stated that it is developing an acquisition strategy based on its research and is recommending that a commercially available product would be the best solution to meet monitoring requirements. Further, the department noted that additional budgetary resources may be required to support an enterprise-wide roll-out of automated server monitoring across all tiered data centers, which may not be available until fiscal year 2019 or later.
As discussed in detail in this report, data center optimization holds the potential for improved efficiency and cost savings. Consequently, we encourage the department to devote the necessary resources to ensure that automated monitoring tools are installed at all department-owned data centers by the end of fiscal year 2018, as required by OMB. State's comments are reprinted in appendix IX. In an e-mail received on July 3, 2017, a deputy director in Transportation's Audit Relations and Program Improvement office stated that the department concurred with our recommendation. In an e-mail received on July 20, 2017, an audit liaison in Treasury's Office of the CIO stated that the department had no comments on the draft report, and did not state whether the agency agreed or disagreed with our recommendation. In written comments, VA stated that it concurred with our recommendation and noted that it is developing a plan to fully comply with OMB's requirement to implement automated monitoring tools at all agency-owned data centers by the end of fiscal year 2018. The department added that it expects to complete this plan by November 2017. VA's comments are reprinted in appendix X. In written comments, EPA did not state whether the agency agreed or disagreed with our recommendation, but described planned actions to implement it. Specifically, the agency detailed plans to address OMB's requirements, such as leveraging EPA's current investment in a network monitoring tool and the intent to procure and deploy a data center infrastructure management tool by the end of fiscal year 2018. However, EPA also noted that budget cuts may delay the agency's efforts to fully implement the requirements of DCOI.
As noted earlier, because of the potential efficiency and savings from data center optimization, we believe EPA should devote the necessary resources to ensure that automated monitoring tools are installed at all department-owned data centers by the end of fiscal year 2018, as required by OMB. EPA's written comments are reprinted in appendix XI. In written comments, GSA stated that it agreed with our recommendation and that it plans to install automated monitoring tools by the end of fiscal year 2018. GSA’s comments are reprinted in appendix XII. In written comments, NRC stated that it was in general agreement with our findings. The agency did not state whether it agreed or disagreed with our recommendation, but described actions planned to address it. Specifically, the agency stated that it plans to install automated monitoring tools in all of its tiered data centers. The agency added that it is planning to close its non-tiered data centers. NRC’s comments are reprinted in appendix XIII. In written comments, OPM stated that the agency partially concurred with our recommendation. Specifically, the agency stated that it plans to consolidate its remaining data centers into two main locations by the end of fiscal year 2018. OPM further stated that this consolidation will obviate the need to implement automated monitoring tools at the data centers that are closing. Finally, the agency noted that it is implementing automated monitoring tools at the designated core data centers. We encourage OPM’s efforts to continue to consolidate its data centers. However, as mentioned in its comments, OPM’s automated monitoring tools have not yet been installed at the agency’s core data centers. Completing a plan describing how the agency will meet OMB’s requirement to implement automated monitoring tools at these centers, as we recommended, could better ensure that this important effort is completed. Therefore, we believe our recommendation is still warranted. 
OPM’s comments are reprinted in appendix XIV. In an e-mail received on July 13, 2017, a program manager in SBA’s Office of Congressional and Legislative Affairs stated that the agency had no comments on the draft report, and did not state whether the agency agreed or disagreed with our recommendation. In addition to the aforementioned comments, six agencies to which we did not make recommendations provided the following responses: In an e-mail received on June 23, 2017, a policy analyst in Education’s Office of the Secretary/Executive Secretariat stated that the department had no comments on the draft report. In written comments, HUD stated that the department had no comments on the draft report. HUD’s comments are reprinted in appendix XV. In an e-mail received on July 14, 2017, a NASA audit liaison stated that the agency had no comments on the draft report. In an e-mail received on July 17, 2017, a NSF audit liaison stated that the agency had no comments on the draft report. In written comments, SSA stated that the agency had no comments on the draft report. SSA’s comments are reprinted in appendix XVI. In an e-mail received on July 12, 2017, an audit liaison in USAID’s Bureau for Management stated that the agency had no comments on the draft report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 22 days from the report date. At that time, we will send copies to interested congressional committees, the Director of OMB, Secretaries and agency heads of the departments and agencies addressed in this report, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or pownerd@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XVII.

Our objectives were to (1) assess agencies' progress against the Office of Management and Budget's (OMB) data center optimization targets, (2) identify agencies' notable optimization successes and challenges, and (3) evaluate the extent to which agencies are able to effectively measure server utilization.

To assess agencies' progress against OMB's data center optimization targets, we analyzed the February 2017 data center optimization progress information of the 24 departments and agencies (agencies) that participate in OMB's Data Center Optimization Initiative (DCOI). This progress information was obtained from the Information Technology (IT) Dashboard—an OMB public website that provides information on federal agencies' major IT investments. We then compared the agencies' optimization progress information against OMB's fiscal year 2018 optimization targets, as documented in its August 2016 memorandum. Although OMB's memorandum establishes a single optimization target value for the server utilization and automated monitoring metric, the Dashboard displays agencies' progress for tiered and non-tiered data centers separately. To report consistently with OMB's implementation memorandum, we combined the progress information for tiered and non-tiered data centers into a single assessment in this report. We also reviewed the 24 agencies' DCOI strategic plans, as of April 2017, to obtain information regarding their fiscal years 2017 and 2018 plans to meet or not meet OMB's optimization targets. This documentation included agencies' strategic plan information publicly posted on agency-owned digital strategy websites, and additional agency-provided documentation of their data center consolidation and optimization strategic plans.
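Combining the Dashboard's separate tiered and non-tiered figures into a single assessment, as described above, amounts to weighting each group by its number of data centers. The sketch below illustrates this with one of the report's figures (123 centers with tools out of 4,528); the split of the 4,528 total between tiered and non-tiered centers is a hypothetical assumption, and this is an illustration rather than GAO's exact procedure:

```python
def combined_progress(tiered_met, tiered_total, nontiered_met, nontiered_total):
    """Combine tiered and non-tiered progress into one fraction,
    implicitly weighting each group by its number of data centers."""
    met = tiered_met + nontiered_met
    total = tiered_total + nontiered_total
    return met / total if total else 0.0

# 59 tiered and 64 non-tiered centers reported tools installed (123 total);
# the 2,000 / 2,528 split of the 4,528 agency-owned centers is hypothetical.
share = combined_progress(59, 2000, 64, 2528)
print(round(100 * share, 1))  # prints 2.7, i.e., about 3 percent
```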
To assess the reliability of agencies’ optimization progress information on OMB’s IT Dashboard, we reviewed the information for errors or missing data, such as progress information that was not available for certain metrics. We also compared agencies’ optimization progress information across multiple reporting quarters to identify any inconsistencies in agencies’ progress. We discussed with OMB staff any discrepancies or potential errors identified to determine the causes or request additional information. In addition, we interviewed OMB officials to obtain additional information regarding the steps taken to ensure the reliability of and validate the optimization data on the Dashboard. We determined that the data were sufficiently reliable to report on agencies’ optimization progress. To assess the reliability of the DCOI strategic plans, we reviewed agencies’ documentation to identify any missing data or errors. We also compared the planned data center optimization milestones in agencies’ documentation against current optimization progress information obtained from the Dashboard. In addition, we reviewed agency chief information officer statements attesting to the completeness of their DCOI strategic plan information. Moreover, we obtained written responses from agency officials regarding the steps taken to ensure the accuracy and reliability of their strategic plans. We discussed with agency officials any discrepancies or potential errors identified during our reviews of their strategic plans to determine the causes or request additional information. As a result of these efforts, we determined that the agencies’ strategic plan information was sufficiently reliable for reporting on plans to meet or not meet OMB’s fiscal year 2018 optimization targets. To address the second objective, we reviewed the 24 agencies’ DCOI strategic plans to identify successes and challenges encountered by agencies in optimizing their data centers. 
We also interviewed cognizant officials at the 24 agencies in order to gather additional information about their data center optimization successes and challenges. We then categorized the agency-reported successes and challenges to determine the ones encountered most often. To evaluate the extent to which selected agencies are able to effectively measure server utilization, we analyzed the 24 agencies’ February 2017 data center inventory information. We reviewed the inventory information to determine the extent to which the agencies reported the implementation of automated monitoring tools at their data centers to measure server utilization, as well as the reported server utilization percentages at those centers. To determine whether agencies had established detailed plans to meet OMB’s M-16-19 requirement to implement automated monitoring tools at all agency-owned data centers by the end of fiscal year 2018, we reviewed agencies’ DCOI strategic plans, publicly available milestone information for implementing the December 2014 IT acquisition reform law, and other planning documentation provided by agencies (such as project charters and project plans). We reviewed this documentation to determine the extent to which agencies documented plans to implement automated monitoring tools at all their agency-owned data centers by the end of fiscal year 2018, as required by OMB. To assess the reliability of the agencies’ data center inventories, we checked for missing data and other errors, such as anomalous server utilization percentage information. We also compared agencies’ reported use of automated monitoring tools at their data centers across multiple reporting quarters to identify any inconsistencies in agencies’ progress. We discussed with agency officials any discrepancies or potential errors identified to determine the causes or request additional information. 
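The reliability screens described above (checking for missing data, anomalous server utilization percentages, and quarter-to-quarter inconsistencies in reported monitoring-tool use) can be sketched in a few lines. This is an illustrative sketch only; the record fields (center_id, quarter, utilization_pct, has_monitoring) are hypothetical and do not reflect the actual inventory schema.

```python
# Illustrative sketch of the reliability screens described above. The
# record fields (center_id, quarter, utilization_pct, has_monitoring)
# are hypothetical, not the actual inventory schema.
def screen_inventory(records):
    """Flag missing data, anomalous server utilization percentages, and
    quarter-to-quarter inconsistencies in reported monitoring-tool use."""
    issues = []
    by_center = {}
    for rec in records:
        # Missing-data check: every record should carry all four fields.
        for field in ("center_id", "quarter", "utilization_pct", "has_monitoring"):
            if rec.get(field) is None:
                issues.append((rec.get("center_id"), "missing " + field))
        # Anomalous-percentage check: utilization must fall in [0, 100].
        pct = rec.get("utilization_pct")
        if pct is not None and not 0 <= pct <= 100:
            issues.append((rec["center_id"], "anomalous utilization %s" % pct))
        by_center.setdefault(rec.get("center_id"), []).append(rec)
    # Cross-quarter consistency: a tool reported as implemented in one
    # quarter but absent the next is flagged for follow-up with the agency.
    for center_id, rows in by_center.items():
        rows.sort(key=lambda r: r.get("quarter") or "")
        for prev, cur in zip(rows, rows[1:]):
            if prev.get("has_monitoring") and not cur.get("has_monitoring"):
                issues.append((center_id, "monitoring regressed between quarters"))
    return issues
```

Each flagged item would then be discussed with agency officials to determine the cause or request additional information, as described in the methodology.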
Further, we obtained written responses from agency officials regarding actions taken to ensure the reliability of their inventory data. We determined that the agencies’ data were sufficiently reliable to report on agencies’ progress in implementing automated monitoring tools to measure server utilization. We conducted this performance audit from July 2016 to August 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, individuals making contributions to this report included Dave Hinchman (Assistant Director), Jon Ticehurst (Assistant Director), Chris Businsky, Rebecca Eyler, Linda Kochersberger, and Jonathan Wall.
In December 2014, FITARA was enacted and included a series of provisions related to improving the performance of data centers, including requiring OMB to establish optimization metrics and agencies to report on progress toward meeting the metrics. OMB's Federal Chief Information Officer subsequently launched DCOI to build on prior data center consolidation and optimization efforts. GAO was asked to review data center optimization. GAO's objectives were to (1) assess agencies' progress against OMB's data center optimization targets, (2) identify agencies' notable optimization successes and challenges, and (3) evaluate the extent to which agencies are able to effectively measure server utilization. To do so, GAO evaluated the 24 DCOI agencies' progress against OMB's fiscal year 2018 optimization targets, interviewed officials, and assessed agencies' efforts to implement monitoring tools for server utilization. Of the 24 agencies required to participate in the Office of Management and Budget's (OMB) Data Center Optimization Initiative (DCOI), 22 collectively reported limited progress against OMB's fiscal year 2018 performance targets. Two agencies did not have a basis to report on progress as they do not have agency-owned data centers. For OMB's five optimization targets, five or fewer agencies reported that they met or exceeded each of the targets (see figure). Further, as of April 2017, 17 of the 22 agencies were not planning to meet OMB's targets by September 30, 2018. This is concerning because the Federal Information Technology Acquisition Reform Act's (FITARA) data center consolidation and optimization provisions, such as those that require agencies to report on optimization progress and cost savings, expire a day later on October 1, 2018. Extending the time frame of these provisions would increase the likelihood that agencies will meet OMB's optimization targets and realize related cost savings. 
Additionally, until agencies improve their optimization progress, OMB's $2.7 billion initiative-wide cost savings goal may not be achievable. All 24 agencies reported successes in optimizing their data centers—notably, the benefits of key technologies, such as virtualizing systems to improve performance, and increased energy efficiency. However, agencies also reported challenges related to, for example, improving the utilization of their data center facilities and competing for labor resources. It will be important for agencies to take action to address their identified challenges—as GAO previously recommended—in order to improve data center optimization progress. Of the 24 agencies required by OMB to implement automated monitoring tools to measure server utilization by the end of fiscal year 2018, 4 reported in their data center inventories as of February 2017 that they had fully implemented such tools, 18 reported that they had not, and 2 did not have a basis to report on progress because they do not have agency-owned data centers. Collectively, agencies reported that these tools were used at about 3 percent of their centers. Although federal standards emphasize the need to establish plans to help ensure goals are met, none of the 18 agencies had fully documented such plans: 6 had partially documented them, and 12 had not documented them at all. Agencies provided varied reasons for this, including that they were still evaluating available tools. In addition, the lack of a formal requirement from OMB to establish the plans contributed to agencies not having them. Until these plans are completed, agencies may be challenged in measuring server utilization. Congress should consider extending the time frame for the data center consolidation and optimization provisions of FITARA to provide agencies with additional time to meet OMB's targets and achieve cost savings. 
GAO is also recommending that 18 agencies complete their plans to implement data center monitoring tools and that OMB require agencies to complete their plans and report them to OMB. Ten agencies agreed with GAO's recommendations, three agencies partially agreed, and six (including OMB) did not state whether they agreed or disagreed, as discussed in the report.
DOD has undergone four BRAC rounds since 1988 and is currently implementing its fifth round. In May 2005, the Secretary of Defense made public more than 200 recommendations that DOD estimated would generate net annual recurring savings of about $5.5 billion beginning in fiscal year 2012. Ultimately, the BRAC Commission forwarded a list of 182 recommendations for base closure or realignment to the President for approval and estimated that BRAC could save DOD annually about $4.2 billion after the recommendations had been implemented. After the BRAC Commission forwarded to the President its list of closure and realignment recommendations, the President was required to review and prepare a report approving or disapproving the BRAC Commission’s recommendations by September 23, 2005. On September 15, 2005, the President approved the recommendations and forwarded them to Congress, which had 45 legislative days or until adjournment of Congress to enact a joint resolution disapproving them on an all-or-none basis; otherwise, the recommendations became effective. The BRAC Commission’s recommendations, accepted in their entirety by the President and not disapproved by Congress, became effective November 9, 2005. The BRAC statute requires DOD to complete recommendations for closing or realigning bases made in the BRAC 2005 round within a 6-year time frame ending September 15, 2011, 6 years from the date the President submitted to Congress his approval of the recommendations. In making its 2005 realignment and closure proposals, DOD applied legally mandated selection criteria that included military value as the primary consideration, as well as expected costs and savings, economic impact to local communities, community support infrastructure, and environmental impact. 
Military value—which includes such considerations as an installation’s current and future mission capabilities, condition, ability to accommodate future needs, and cost of operations—was the primary criterion for making recommendations, as mandated by BRAC law and as reported by both DOD and the Commission. Additionally, in establishing goals for the 2005 BRAC round, the Secretary of Defense, in a November 15, 2002, memorandum initiating the round, expressed his interest in (1) reducing excess infrastructure, which diverts scarce resources from overall defense capability, and producing savings; (2) transforming DOD by aligning the infrastructure with the defense strategy; and (3) fostering jointness by examining and implementing opportunities for greater jointness across DOD. The 2005 round is unlike previous BRAC rounds because of OSD’s emphasis on transformation and jointness, rather than just reducing excess infrastructure. For example, as part of the Army’s efforts to transform its forces, the Army included actions to relocate forces from Europe and Korea to domestic installations, which were part of its larger review of bases worldwide. The 2005 round also differs from previous BRAC rounds in terms of the number of closure and realignment actions. While the number of major closures and realignments is a little greater than in any individual previous round, the number of minor closures and realignments is significantly greater than those in all previous rounds combined. DOD plans to execute over 800 closure and realignment actions as part of the 2005 BRAC round, which is more than double the number of actions completed in the prior four rounds combined. The large increase in the number of minor closures and realignments is primarily attributable to the more than 500 actions involving the Army National Guard and Army Reserve, representing over 60 percent of the BRAC actions. 
To implement BRAC recommendations, DOD typically must incur various up-front investment costs during the 6-year implementation period in order to achieve long-term savings associated with the recommended actions. Such costs generally include, for example, one-time costs for actions such as military construction and personnel and equipment movement, as well as recurring costs for increased operation and maintenance of facilities and information systems. While savings from this investment may begin to accrue over the implementation period, additional savings typically occur annually on a longer-term basis beyond the implementation period ending in fiscal year 2011. One-time savings may include, for example, reduced costs associated with inventory reduction or elimination of planned military construction. Recurring savings may include, for example, reduced sustainment costs associated with maintaining less warehouse space. Net annual recurring savings after the implementation period are calculated by subtracting the annual recurring costs from the annual recurring savings. Expected 20-year savings, also referred to as 20-year net present value savings, takes into account all one-time and recurring costs and savings incurred over the fiscal year 2006 through 2025 time period. For the BRAC 2005 round, the OSD BRAC Office—under the oversight of the Under Secretary of Defense (Acquisition, Technology and Logistics)—has monitored the services’ and defense agencies’ implementation progress, analyzed budget justifications for significant differences in cost and savings estimates, and facilitated the resolution of any challenges that may impair the successful implementation of the recommendations within the 6-year completion period. To facilitate its oversight role, OSD required the military departments and certain defense agencies to submit a detailed business plan for each of their recommendations. 
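The cost and savings arithmetic described above can be expressed as a short sketch. The two functions mirror the definitions in the text: net annual recurring savings (annual recurring savings minus annual recurring costs) and a 20-year net-present-value calculation over a window like fiscal years 2006 through 2025. The dollar figures and discount rate in the example are hypothetical, not actual BRAC estimates.

```python
def net_annual_recurring_savings(annual_recurring_savings, annual_recurring_costs):
    # Net annual recurring savings = annual recurring savings minus
    # annual recurring costs (after the implementation period).
    return annual_recurring_savings - annual_recurring_costs

def twenty_year_npv(net_flows_by_year, discount_rate):
    # Discount each year's net flow (all savings minus all costs, one-time
    # and recurring) back to the base year. Twenty annual flows correspond
    # to a fiscal year 2006-2025 window.
    return sum(flow / (1 + discount_rate) ** year
               for year, flow in enumerate(net_flows_by_year, start=1))

# Hypothetical illustration (all figures in $ billions): six years of net
# up-front investment costs during implementation, then recurring net savings.
flows = [-3.0] * 6 + [4.2] * 14
npv_savings = twenty_year_npv(flows, discount_rate=0.03)
```

The shape of the example matches the pattern described in the text: investment costs concentrated in the 6-year implementation period, with recurring savings accruing in the years beyond it.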
These business plans, which are to be updated every 6 months, include information such as a listing of all actions needed to implement each recommendation, schedules for personnel movements between installations, updated cost and savings estimates based on better and updated information, and implementation completion time frames. DOD has made progress in implementing the BRAC 2005 round but faces challenges in its ability to meet the September 15, 2011, statutory completion deadline. DOD is more than halfway through the implementation period for BRAC 2005 and has made progress thus far. However, DOD faces several challenges to completing BRAC actions at some locations on time. First, DOD expects almost half of the 800 defense locations responsible for implementing BRAC to complete their recommendations within months of the deadline, and about 230 of those locations anticipate completion within the last 2 weeks of the implementation period. Second, some of these locations, which involve the most costly and complex recommendations, have already encountered delays in their implementation schedules. Third, DOD must synchronize relocating over an estimated 123,000 personnel with the construction or renovation of facilities. Finally, delays in interdependent recommendations could have a cascading effect on the timely completion of related recommendations. OSD recently issued guidance requiring the services and defense agencies to provide status briefings to improve oversight of issues affecting timely implementation of BRAC recommendations. However, this guidance did not establish a regular briefing schedule as needed or require the services to provide information about possible mitigation measures for any BRAC recommendations at risk of not meeting the statutory deadline. DOD is more than halfway through the implementation period for BRAC 2005 and has made steady progress thus far. 
In June 2008, DOD reported to Congress that 59 of the 800 affected locations had completed their associated BRAC actions as of December 1, 2007. While much remains to be done, DOD is awarding construction contracts, and DOD officials told us that fiscal years 2008 and 2009 should be the years with the greatest number of construction contract awards. Also, officials told us that high rates of obligation for BRAC military construction funds in fiscal year 2008 indicate that the services and defense agencies are generally meeting schedules for awarding construction contracts. This was the first BRAC round in which DOD required the services and defense agencies that implement the recommendations to prepare business plans for approval by the OSD BRAC Office. These business plans provide information on actions and time frames as well as cost and savings to help guide implementation. Services and defense agencies responsible for implementing BRAC recommendations were required to obtain business plan approval before beginning implementation. Business plans are updated twice a year, represent the most current information available on each recommendation, and serve as a tool for DOD to oversee the implementation of this BRAC round but do not include analysis of the likelihood of completing the recommendation on time. DOD faces several challenges in its ability to implement this round of BRAC by the September 15, 2011, statutory completion deadline. By statute, DOD must complete the recommendations for closing or realigning bases made in the BRAC 2005 round within 6 years from the date the President submitted to Congress his approval of the BRAC Commission’s recommendations. Although DOD has made implementation progress in the last 3½ years since BRAC became effective, the department still faces a number of challenges that could affect its ability to complete all BRAC actions by the statutory deadline. 
As of June 2008, DOD reported to Congress that about half of the 800 defense locations affected by BRAC recommendations expect to complete their BRAC-related actions within the last 9 months of the statutory deadline of September 15, 2011. Further, our analysis of DOD’s data shows that about 60 percent, or about 230, of these 400 locations expect to complete their BRAC actions in September 2011—the last two weeks before the statutory deadline. OSD BRAC officials told us some locations might have reported completion dates near the end of the BRAC deadline to allow extra time, although such a practice could represent potentially inaccurate completion estimates. Still, we believe DOD’s data provide an indicator of the number of locations that have little room for delays in the BRAC completion schedule. Some of the most costly and complex BRAC recommendations that DOD has yet to fully implement have already incurred setbacks in implementation for several reasons, including construction problems, the requirement to study environmental impacts, and delays in making decisions about site locations, awarding contracts, and acquiring land. According to our analysis, the recommendations discussed are among the most costly and represent about 30 percent of the total estimated costs to implement this round of BRAC. Many of these recommendations are also complex in that they involve movement of a large number of personnel, large construction projects, and synchronization with other recommendations. Some of the most costly recommendations that have experienced delays are as follows: Close National Geospatial-Intelligence Agency leased locations and realign others to Fort Belvoir, Virginia. DOD officials told us that construction of the National Geospatial-Intelligence Agency’s new $1.5 billion building at Fort Belvoir is currently on schedule. 
However, there is minimal schedule margin, and as a result, any unmitigated disruptions can jeopardize maintaining the complex construction schedule required to move 8,500 personnel by the statutory deadline. The estimated cost to implement this recommendation is $2.4 billion, according to DOD’s fiscal year 2009 budget, and the estimated completion date is September 2011. Establish San Antonio Regional Medical Center and realign enlisted medical training to Fort Sam Houston, Texas. As part of this recommendation, DOD is realigning the inpatient medical function from Lackland Air Force Base to Brooke Army Medical Center at Fort Sam Houston. However, officials with the San Antonio Joint Program Office, which was established to help implement the BRAC decisions affecting San Antonio, told us that construction contract delays have left little time in the implementation schedule to meet the statutory deadline. The estimated cost to implement this recommendation is $1.7 billion, according to DOD’s fiscal year 2009 budget, and the estimated completion date is September 2011. Realign Walter Reed Army Medical Center to Bethesda National Naval Medical Center, Maryland. Tri-Care Management Activity officials told us that although the implementation schedule for this recommendation is on an accelerated track, meeting the 2011 deadline will still be tight. These officials told us it is taking additional time to finalize the plans for building a world-class medical center facility. According to DOD’s fiscal year 2009 budget, the estimated cost to implement this recommendation is $1.6 billion, and the estimated completion date is September 2011. Realign Maneuver Training to Fort Benning, Georgia. 
Construction of facilities associated with the realignment of the Army’s Armor School at Fort Knox, Kentucky, with the Infantry School at Fort Benning, Georgia, to create the new Maneuver Training Center has been delayed because of concerns about environmental disturbances to the habitat of the Red-Cockaded Woodpecker at Fort Benning. According to Army officials, these delays have left little, if any, time in the implementation schedules to absorb further delays. The estimated cost to implement this recommendation is $1.5 billion, according to DOD’s fiscal year 2009 budget, and the estimated completion date is August 2011. Co-locate miscellaneous OSD, defense agency, and field activity leased locations in the District of Columbia Metropolitan Area. Various delays in the process to select a permanent site for co-locating about 6,400 personnel have slipped the time frame for starting the implementation of this recommendation. The Army had originally planned to relocate these agencies and activities to Fort Belvoir’s Engineering Proving Ground, but in August 2007, announced it was considering a nearby location belonging to the U.S. General Services Administration in Springfield, Virginia. Then, in October 2007, the Army announced it was also considering other sites in Northern Virginia, finally deciding on a site in Alexandria, Virginia, in September 2008. These delays, according to Army BRAC officials, have significantly compressed the time available to build new facilities and move thousands of personnel by the 2011 statutory deadline. The estimated cost to implement this recommendation is $1.2 billion, according to DOD’s fiscal year 2009 budget, and the estimated completion date is September 2011. Close Fort McPherson, Georgia. The relocation of Headquarters U.S. Army Forces Command and Headquarters U.S. Army Reserve Command to Fort Bragg, North Carolina, because of the closure of Fort McPherson, has experienced delays. 
The construction contract for building a new facility for the commands was delayed by 3½ months while requirements for the building were being refined, a delay that may jeopardize the Army’s ability to meet the BRAC deadline. According to Forces Command officials, there will be enough time to finish construction only if the Army encounters no further significant complications during construction. The construction contract was initially to be awarded in May 2008 but was delayed until September 2008, and the schedule to fully transfer Forces Command to Fort Bragg is very tight. The estimated cost to implement this recommendation is $798 million, according to DOD’s fiscal year 2009 budget, and the estimated completion date is September 2011. Realign Fort Bragg, North Carolina. Part of this recommendation requires the relocation of the Army’s 7th Special Forces Group to Eglin Air Force Base, Florida. However, delays resulting from concerns about the noise from the Joint Strike Fighter aircraft, which will also be located at Eglin through the implementation of another BRAC recommendation, have contributed to uncertainty about where to relocate the Special Forces Group at Eglin. As a result, obtaining the required environmental impact studies has taken longer than originally anticipated. As of December 2008, the Army had not started construction of the needed facilities to relocate over 2,200 military personnel from Fort Bragg to Eglin. Construction was originally planned to start in October 2008. According to Special Operations officials at Fort Bragg, the time frame to complete this move is extremely tight because of these delays, and they expressed to us their doubts about completing $200 million in construction on time in order to move all military personnel by the deadline. The estimated cost to implement this recommendation is $327 million, according to DOD’s fiscal year 2009 budget, and the estimated completion date is September 2011. 
In addition to the BRAC actions discussed, we also found other BRAC actions that have experienced delays that could jeopardize DOD’s ability to meet the statutory 2011 BRAC deadline. Although the individual recommendations are not among the most costly to implement, collectively they illustrate further challenges DOD faces as follows: Realign Army reserve components and construct new Armed Forces Reserve Centers. According to Army BRAC officials and our analysis, time frames will be tight for completing some of the BRAC recommendations involving building 125 new Armed Forces Reserve Centers. According to Army officials, land has not yet been acquired for some of these reserve centers. Also, we have previously reported that other BRAC funding priorities caused the Army to delay the start of 20 armed forces reserve projects, compressing the amount of time available to construct the facilities and respond to any construction delays that might arise. The Army rescheduled the start of these projects, which it had initially planned to begin in either fiscal year 2008 or 2009, to fiscal year 2010—the second-to-last year of the BRAC statutory completion period. Relocate medical command headquarters. Tri-Care Management Activity officials responsible for implementing this BRAC recommendation told us they have had delays in deciding on the actual site to relocate medical command headquarters in the Washington, D.C., area. Factors in the delay include higher-than-expected cost estimates to renovate a possible site in Maryland and the fact that the current occupants of this site are not expected to vacate the property until 2011, which would be too late to meet the BRAC completion deadline. 
Anticipating that leasing a site might be the only viable alternative, these officials told us that once a final decision on a site was made, they would be in a more informed position to state whether enough time will be available to move several thousand personnel into a leased site by the BRAC deadline. DOD will face significant challenges in synchronizing the moves of all personnel and equipment into their new locations. Specifically, DOD must synchronize the relocation of over 123,000 personnel with the construction of an estimated $23 billion in new or renovated facilities. However, delays have left little time in the planning schedule to relocate these personnel by the deadline. For example, the already tight construction schedule for the new National Geospatial-Intelligence Agency building at Fort Belvoir has created some risk for integrating construction activities with the installation of information systems and the relocation of 8,500 agency employees to the new location, according to Fort Belvoir BRAC officials. Fort Belvoir officials also described for us the very complex and detailed ongoing planning for integrating the movement of the numerous organizations affected by another BRAC recommendation that seeks to eliminate leased locations for various Army organizations and consolidate them into two buildings on Fort Belvoir. The officials are conducting a detailed review of the requirements for each organization to ensure that there is enough space for everyone and to develop a schedule to move these organizations into the facility. Complicating the development of this schedule is that many of these organizations work with highly classified, sensitive information and cannot operate outside secured space with controlled access. Other DOD initiatives outside BRAC will complicate the synchronizing of schedules for moving of people and equipment associated with BRAC. 
For example, the Army plans to increase the size of its active-duty force by about 65,000 over the next several years. In addition, the repositioning of forces currently stationed in Europe and the Army’s ongoing reorganization to become a more modular, brigade-based force have caused other movements and relocations that have to be integrated with the BRAC implementation schedules. The military is also planning on drawing down the level of troops in Iraq and returning some of these forces to U.S. installations. The actions required to simultaneously implement these initiatives with BRAC further complicate the integration of moving schedules for people and equipment and raise the level of risk for further schedule disruptions, which, in turn, raise the risk of BRAC recommendations missing the statutory deadline. Some BRAC locations are unable to begin renovation of buildings slated to house realigning organizations until current tenants of these buildings vacate, a situation that has delayed the beginning of implementation. For example, as we have previously reported, as part of the BRAC recommendation to close Fort Monmouth, New Jersey, personnel from the Army’s Communications-Electronics Life Cycle Management Command currently located at Fort Monmouth are relocating to Aberdeen Proving Ground, Maryland. Army officials originally planned to renovate facilities currently occupied by a training activity for some of these employees. The training activity is scheduled to relocate to Fort Lee, Virginia, through another BRAC action; however, Army officials said that the new facilities for the training activity would not be complete as originally planned, a setback that, in turn, would delay the renovation of the Aberdeen facilities for the incoming employees. 
The delays in construction at Fort Lee resulted in the Army having to plan to build a new facility at Aberdeen Proving Ground, at an additional cost of $17 million, rather than renovate an existing facility there, to avoid the risk that the renovations could not be completed in time for the personnel relocating to Aberdeen. According to a Fort Belvoir official, two buildings at the installation will be used to house various Army organizations that are currently in leased space and will be relocating to Fort Belvoir as directed in a BRAC recommendation. However, the Army Materiel Command is still using the two buildings pending its relocation to Huntsville, Alabama, as part of another BRAC recommendation. To further complicate the situation, the Army Materiel Command is hiring employees for a new organization, to be called Army Contracting Command, which will also be housed in the two buildings eventually planned to house the Army organizations that are currently in leased space. Until Army Materiel Command and the newly hired employees of Army Contracting Command move out of these buildings, Fort Belvoir officials cannot begin renovating the buildings for their new tenants. However, construction delays in Huntsville have caused the Army Materiel Command to delay its move to the Huntsville area. Furthermore, Fort Belvoir officials told us that a decision has not yet been made on the location for the newly formed Army Contracting Command and that if both this new command and the Army Materiel Command do not vacate the two buildings in question by June 2011, it would be nearly impossible to meet the statutory deadline. Again, this example demonstrates that delays in interdependent recommendations could have a cascading effect on other recommendations being completed on time. 
As we concluded our fieldwork, the Deputy Under Secretary of Defense (Installations and Environment) issued a memo dated November 21, 2008, providing guidance that required the military services and defense agencies to present periodic status briefings to OSD on implementation progress "to ensure senior leadership is apprised of significant issues impacting implementation of the BRAC recommendations" by the September 15, 2011, deadline. According to this guidance, at a minimum, the briefings are to include information on projected and actual construction contract award dates and construction completion dates, as well as BRAC actions completed. The requirement to provide these briefings is applicable only to those recommendations that are expected to have a one-time cost of $100 million or greater. The first round of such briefings was conducted in the first two weeks of December 2008. We believe that OSD should be commended for taking this positive step toward enhancing its oversight of BRAC implementation. However, OSD may still not be in a position to fully assist the services in taking mitigating measures, if warranted, to better ensure all BRAC actions are completed by the statutory deadline because the guidance does not establish a regular briefing schedule or require the services to provide information about possible mitigation measures. First, the guidance does not require the briefings to be conducted on a firm schedule for the duration of the implementation period. Unlike BRAC business plans, which are to be updated every 6 months, the status briefings after the initial round conducted in December 2008 are required only periodically, "as necessary," and the guidance does not specify who determines when such updates are deemed necessary. 
However, given the large number of locations that expect to complete their BRAC actions within months or weeks of the statutory deadline and the possibility of delays where little leeway exists, OSD would benefit from early warning and consistent monitoring of implementation challenges that could put completion schedules at those locations at further risk. Second, OSD's recent guidance does not require the services and defense agencies to provide information about steps that could be taken to mitigate the effects of the implementation challenges they identify. We have advocated the use of a risk management approach to reduce, where possible, the potential that an adverse event will occur; to reduce vulnerabilities as appropriate; and to put steps in place to reduce the effects of any event that does occur. With information about mitigation strategies that the services have developed or could develop, OSD BRAC could be in a position to provide assistance and coordination that could better enable the services and defense agencies to stay on schedule. DOD's BRAC fiscal year 2009 budget submission shows that DOD plans to spend more and save less, compared with last year's BRAC budget submission, to implement the recommendations. DOD's 2009 estimated one-time costs to implement this BRAC round increased by $1.2 billion. Net annual recurring savings estimates decreased by almost $13 million. In addition, our calculations show that expected savings over a 20-year period ending in 2025 declined by $1.3 billion. DOD's BRAC fiscal year 2009 budget submission shows that DOD plans to spend more to implement its BRAC recommendations compared with last year's BRAC budget. Specifically, DOD's cost estimates increased by $1.2 billion in DOD's 2009 budget to a total estimated cost of $32.4 billion to implement this BRAC round. In September 2005, the BRAC Commission originally estimated the costs to be about $21 billion. 
The overall estimated cost increase of $1.2 billion is a cumulative cost increase because some recommendations are expected to cost less while others could cost more. Nonetheless, our analysis shows that $1.1 billion (93 percent) of the estimated $1.2 billion increase occurred in six recommendations. For example, the recommendation to realign the National Geospatial-Intelligence Agency to Fort Belvoir, Virginia, had the largest increase in estimated costs—almost $350 million. Five other recommendations account for most of the remaining estimated cost increase: 1) close Fort McPherson, Georgia; 2) close Fort Monmouth, New Jersey; 3) establish a regional medical center and realign medical training to Fort Sam Houston, Texas; 4) consolidate depot-level reparable procurement management; and 5) realign to establish the Combat Service Support Center at Fort Lee, Virginia. Table 1 shows the increase in cost estimates for these six recommendations, comparing fiscal year 2008 budgets to fiscal year 2009 budgets. In addition, various cost categories that make up each recommendation's estimated costs have also experienced increases and decreases when comparing DOD's fiscal year 2008 budget to the fiscal year 2009 budget. These cost categories are one-time costs for items and activities such as construction, environmental clean-up, and operation and maintenance. Our analysis of DOD's budget data showed the largest estimated cost increase occurred in the military construction cost category. Specifically, estimated construction costs increased by nearly $1.5 billion; however, this cost increase was partially offset by decreases in other cost categories, as shown in table 2. 
The overall total increase of $1.2 billion does not include about $416 million to accelerate and enhance the realignment and closure of Walter Reed Army Medical Center in the District of Columbia and the movement of its operations to the renovated Bethesda Naval Medical Center, Maryland, and a new hospital at Fort Belvoir, Virginia. DOD received these funds in its fiscal year 2008 supplemental request. OSD BRAC officials told us that they intend to seek an additional $263 million to complete the Walter Reed realignment, but these funds have not yet been provided and are also not included in the overall total increase of $1.2 billion. According to OSD BRAC officials, $416 million will be reflected in the fiscal year 2010 President’s Budget, as will the additional $263 million if these funds are provided to BRAC before the 2010 budget is submitted to Congress sometime in early 2009. In addition, our analysis of the 2005 BRAC round, based on DOD’s fiscal year 2009 budget estimates, indicates that relatively few recommendations are responsible for a majority of the expected cost. Specifically, we determined that the planned implementation of 30 recommendations (or about 16 percent of the total 182 recommendations) is expected to account for about 72 percent of the expected one-time costs. (See app. II for a listing of those BRAC recommendations DOD expects to cost the most.) While estimated implementation costs have risen, overall estimated net annual recurring savings have decreased slightly by about $13 million to about $4 billion based on DOD’s approach to include savings from military personnel who transferred or shifted from one location to another but remained on the payroll. In September 2005, the BRAC Commission originally estimated annual recurring savings to be about $4.2 billion. This amount included the savings associated with military personnel eliminations. 
Some recommendation savings estimates have decreased while others have increased, but the cumulative effect is an overall decrease in estimated annual recurring savings. For example, the largest decrease in net annual recurring savings was about $84 million for the recommendation to establish joint bases, which decreased from about $116 million in savings in the fiscal year 2008 budget submission to $32 million in the fiscal year 2009 budget submission. Discussions with agency officials involved with implementing this recommendation indicate that the savings could decrease further in the future. In contrast, the largest increase in net annual recurring savings was about $58 million for the recommendation to establish the San Antonio Regional Medical Center and realign enlisted medical training to Fort Sam Houston, Texas, which increased from about $91 million in savings in the fiscal year 2008 budget submission to $149 million in the fiscal year 2009 budget submission. OSD BRAC officials told us they expect 2012 to be the first year to accrue the full amount of net annual recurring savings because some recommendations are not expected to be completed until around the September 15, 2011, deadline and significant savings generally do not begin to accrue until implementation is complete. Given the cumulative increase in estimated one-time costs and decrease in estimated net annual recurring savings, the estimated savings over a 20-year period ending in 2025, based on DOD's fiscal year 2009 budget submission, have also decreased. Our calculations show that the 20-year savings declined by $1.3 billion (almost 9 percent) to about $13.7 billion, compared with the $15 billion we estimated based on the fiscal year 2008 budget. In September 2005, the BRAC Commission estimated that DOD would save about $36 billion over this 20-year period—the current estimate is a reduction of about 62 percent from the BRAC Commission's reported estimates. 
Further, we determined that 30 recommendations (about 16 percent of all 2005 BRAC recommendations) account for about 85 percent of the expected savings over a 20-year period. (See app. IV for a listing of those BRAC recommendations DOD expects to save the most over a 20-year period.) The decrease in 20-year savings is directly related to the growth in estimated one-time costs and to the reduction in estimated annual recurring savings. As with annual recurring savings, the 20-year savings estimate of about $13.7 billion includes the savings associated with the elimination of military personnel positions. We have previously reported that military personnel position eliminations are not a true source of savings since DOD intends to reassign or shift personnel to other positions without reducing military end strength associated with the corresponding BRAC recommendation. DOD disagrees with our position. In addition, our analysis shows the number of BRAC recommendations not expected to achieve net savings over a 20-year period has continued to increase since 2005. Specifically, based on the revised 20-year savings estimates, 74 recommendations are not expected to result in positive net savings over 20 years, compared with the 73 we identified in fiscal year 2008 and the 30 estimated by the BRAC Commission in 2005. OSD BRAC officials told us that, although the 20-year savings estimate is less than the BRAC Commission estimated in 2005, the department expects the implementation of this BRAC round to produce capabilities that will enhance defense operations and management, despite less than anticipated savings. Although DOD is almost 3½ years into the 6-year implementation period for this round of BRAC, cost estimates could potentially continue to increase, but the potential for changes in savings estimates is less clear. 
Cost estimates could increase because of inflation and increased demand for construction in some areas, although changing market conditions that existed at the time of our report could reverse these trends in DOD's favor. There is less visibility into potential changes in savings estimates because some military services and defense agencies are not periodically updating their BRAC savings estimates, and OSD is not enforcing its regulation requiring them to do so. BRAC 2005 implementation costs have the potential to continue to increase because of sharp increases in the prices of fuel and of construction materials such as steel, concrete, and copper during most of 2008. The one-time implementation cost estimates for BRAC 2005 rose by about $1.2 billion from fiscal years 2008 to 2009, primarily because of increases in the cost of military construction. The potential for additional cost increases is particularly important to the Army, as it is expected to incur the majority of the military construction costs related to base closures and realignments. For example, our analysis of DOD's fiscal year 2009 BRAC budget data shows that the Army's estimated cost of about $13 billion for BRAC military construction accounted for nearly 60 percent of the total BRAC military construction estimate of about $22.8 billion. Moreover, the factors that drove military construction costs up in fiscal year 2007 continued to exert upward pressure on prices through the end of fiscal year 2008. According to U.S. Army Corps of Engineers officials, the prices of steel, concrete, and copper rose considerably from 2005 to 2008 because of worldwide demand. Our analysis of producer price index data compiled by the Bureau of Labor Statistics found that the price of steel rose by about 40 percent over that period. The price of concrete rose by about 18 percent, while the price of copper rose by over 124 percent from 2005 to 2008. 
In addition, fuel prices rose steadily from 2007 until August 2008, when they started to drop. Another factor that could drive up construction prices is the increased demand for construction in some markets. Specifically, BRAC implementing officials expressed concern that construction costs have the potential to increase in areas already experiencing high commercial construction demand, such as the National Capital Region, Washington, D.C., and San Antonio, Texas. Army Corps of Engineers officials told us they are concerned about what effect construction demand might have on bids, given the sizable amount of construction to take place in a limited amount of time to meet the BRAC statutory completion time frame. Additionally, service officials at various installations expressed concern about the potential for increases in construction costs because of ongoing reconstruction of damage caused by natural disasters such as hurricanes and flooding, coupled with the large volume of anticipated BRAC construction that could also affect bids. Further, we reported in December 2007 that the inflation rates prescribed by DOD and the Office of Management and Budget for developing BRAC budget estimates had been lower than the actual rate of construction inflation for the last several years; therefore, the use of these rates could underestimate actual construction costs. To the extent that the actual rate of inflation continues to exceed the budgeted rate as implementation proceeds, and construction material costs are higher than anticipated, U.S. Army Corps of Engineers officials have said that they would either have to redirect funding from other sources to provide for construction projects or resort to a reduction in the scope of some construction projects. 
Although the economy slowed down and fuel prices began to drop in mid-to-late 2008, several bids for construction contracts that had been advertised prior to these events have come in at levels higher than programmed by the U.S. Army Corps of Engineers. For example, the construction bids to build a general instruction complex associated with the BRAC recommendation to create a Maneuver Center at Fort Benning, Georgia, were $16 million over budgeted amounts. In another case, the estimate for building a defense media center is currently $65 million, while the programmed amount is $44 million—a difference of $21 million. Although bids have been above budgeted amounts for some projects, the difference has been offset to some extent by other bids that have come in under budgeted amounts for other projects. Furthermore, as a result of the increasing construction prices, higher than expected construction bids, and revisions to facility designs and scope, the Army identified a potential BRAC cost increase of approximately $2.6 billion, with military construction accounting for about $1.4 billion and various operation and maintenance costs accounting for the remaining $1.2 billion. In the summer of 2008, Army officials told us that a high-level meeting was held with Army leadership, known as the Stationing Senior Review Group, to discuss ways to resolve the potential BRAC cost increases. Subsequently, the Army's Office of the Assistant Chief of Staff for Installation Management made clear in an August 2008 memorandum that further growth in BRAC 2005 implementation costs must be avoided. BRAC officials told us that the results of these discussions on potential cost increases would be reflected in the fiscal year 2010 budget submission to Congress. 
DOD expects the release of the fiscal year 2010 BRAC budget submission to be after the issuance of this report; thus, we are unable to comment on the Army's recent actions to contain further cost growth related to its base closures and realignments. We believe that if the escalating pressures on the cost of construction continue, DOD may have difficulty in completing planned construction projects within currently estimated amounts in the BRAC accounts. However, at the time we concluded our fieldwork in December 2008, the U.S. economy had begun to experience a slowdown. Fuel prices, for example, had dropped precipitously compared with where they had been earlier in the year. The prices of copper and concrete had also begun to decline, but prices of these two commodities nonetheless remained above 2007 levels. A continued reduction in commodity prices and further downturn in the U.S. economy could work in DOD's favor to reduce the price of future construction contracts. For the current BRAC round, the potential for savings estimates to change is unclear because some military services are not updating their savings estimates as required by DOD regulation. DOD's Financial Management Regulation for BRAC appropriations has instructed the services and defense agencies to update estimates in their annual budget submissions since at least June 1998. Specifically, the regulation requires that budget justification books include exhibits reporting savings estimates for the BRAC 2005 round that are based on the best projection of what savings will actually accrue from approved realignments and closures. Further, the regulation states that prior year estimated savings must be updated to reflect actual savings, when available. Our prior and current work shows that some of the military services have not updated their savings estimates periodically, thereby contributing to unrealistic BRAC net savings estimates. 
Specifically, our analysis shows that some of the defense agencies and the Navy updated savings estimates for some of their recommendations. For example, officials responsible for implementing two BRAC recommendations associated with substantial expected savings—establishing naval fleet readiness centers at multiple installations across the country and realigning medical care and training in San Antonio, Texas—told us they updated their savings estimates in the fiscal year 2009 BRAC budget based on maturing implementation plans. In contrast, BRAC implementing officials for the Army and the Air Force told us they do not plan to update their savings estimates and will continue to report the same savings estimates reported to Congress in February 2007, despite any revisions in implementation details or completion schedules that could cause savings estimates to change. Army and Air Force officials told us that, since the savings reported to Congress had already been "taken" from their budgets, there was no incentive to update those estimates, and that they therefore do not plan to update savings estimates for the remainder of BRAC implementation, despite the requirement in DOD's Financial Management Regulation to do so. However, outdated savings estimates undermine the ability of Congress to monitor savings, a key indicator of success in BRAC implementation. The issue of updating BRAC savings estimates is not new. We have previously reported that the military services, despite DOD guidance directing them to update savings estimates (for prior BRAC rounds) in their annual budget submissions, had not periodically updated these estimates, thereby contributing to imprecision and a lack of transparency in overall BRAC estimated net savings figures. 
Service officials have acknowledged that updating savings has not been a high priority and that instead, they have focused their resources on developing cost estimates for the annual budget submission. However, OSD BRAC and OSD Comptroller officials told us that they believe savings estimates should be updated based on evolving implementation plans. In addition, our analysis of DOD’s fiscal year 2009 budget estimates for the 2005 BRAC round indicates that a majority of the expected savings are related to the implementation of a small percentage of recommendations. Specifically, we determined that the planned implementation of 24 recommendations (about 13 percent of all 2005 BRAC recommendations) accounts for about 80 percent, or nearly $3.2 billion, of the estimated net annual recurring savings. (See fig. 1.) A list of these recommendations can be found in appendix III. Since DOD promoted the latest round of BRAC partly on the premise that it would save money, we believe that imprecise savings estimates could diminish public trust in the BRAC process. Furthermore, without updated BRAC savings estimates, as required in DOD’s own Financial Management Regulation, DOD decision makers and Congress may be left with an unrealistic sense of the savings this complex and costly BRAC round may actually produce, a situation that could be used to justify another round of BRAC in the future. Given the exceptional size, complexity, and cost of the 2005 BRAC round, the challenges to successfully implementing recommendations at over 800 locations—while simultaneously undergoing extensive force structure transformations—within the congressionally mandated 6-year implementation period are similarly unprecedented. Complete and timely information about the obstacles the services and defense agencies are facing and any possible mitigation measures for those recommendations that are at risk could enhance the management and oversight ability of the OSD BRAC office. 
Although OSD has recently asked the services and defense agencies to inform it of significant issues affecting implementation of BRAC recommendations by the statutory deadline, its November 2008 guidance does not specify a further schedule for briefings. Given the tight time frames for completing some recommendations and the complexity of the challenges some recommendations face, OSD may not have enough advance warning to effectively help the services and defense agencies overcome challenges that could threaten their ability to complete some of the hundreds of actions planned to take place within weeks of the congressionally mandated BRAC deadline. Furthermore, if the services and defense agencies provided OSD with information about possible measures that could be taken to mitigate those challenges on a regular and known schedule, OSD could more effectively reallocate resources, realign priorities, and coordinate joint solutions as warranted. Anticipated savings was an important consideration in justifying the need for the 2005 BRAC round. Before DOD can realize substantial savings from this large and complex BRAC round that it could redirect to other priorities, the department must first invest billions of dollars in facility construction, renovation, and other up-front expenses. As the cost of implementing BRAC 2005 recommendations increases, it is important for decision makers to maintain clear visibility over the evolving potential for savings as a result of the BRAC process. Updated savings estimates will add specificity to DOD’s assessment of how much money will become available for other purposes and help avoid unnecessary appropriations from Congress. Moreover, without more precise savings estimates through the end of the current round’s implementation period, Congress and DOD will lack an important perspective about BRAC results that could inform decisions about any future BRAC rounds. 
In addition, more precise estimates are important to preserving public confidence in the BRAC program. Finally, the periodic updating of savings estimates is a good financial management practice that could strengthen DOD's budgeting process by helping to ensure that the department relies on realistic assumptions in formulating its budgets. To enhance OSD's role in overseeing the implementation of BRAC 2005 recommendations and managing challenges that could impact DOD's ability to achieve full BRAC implementation by the statutory deadline, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology and Logistics) to modify the recently issued guidance on the status of BRAC implementation to (1) establish a briefing schedule through the rest of the BRAC 2005 implementation period, with briefings as frequently as OSD deems necessary to manage the risk that a particular recommendation may not meet the statutory deadline but at a minimum at 6-month intervals, so that DOD can continually assess and respond to the challenges identified by the services and defense agencies that could preclude recommendation completion by September 15, 2011, and (2) require the services and defense agencies to provide information on possible mitigation measures to reduce the effects of those challenges. To ensure that BRAC savings estimates are based on the best projection of what savings will actually accrue from approved realignments and closures, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology and Logistics); the Under Secretary of Defense (Comptroller); and the military service secretaries to take steps to improve compliance with DOD's regulation requiring updated BRAC savings estimates. In written comments on a draft of our report, DOD concurred with all three of our recommendations. 
DOD noted that BRAC business managers have provided and will continue to provide briefings on the status of implementation actions associated with recommendations exceeding $100 million, and that these briefings provide a forum for BRAC business managers to explain their actions to mitigate challenges. In addition, DOD agreed that updating savings estimates on a regular basis is essential. The department stated that it is emphasizing savings updates during its briefings and in all future business plan approval documentation. DOD's written comments are reprinted in appendix V. DOD also provided technical comments, which we have incorporated into this report as appropriate. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4523 or by e-mail at leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. We reviewed the Defense Base Closure and Realignment Commission's 182 recommendations to realign and close military bases as presented in its September 2005 report to the President. We reviewed relevant documentation and interviewed officials in the Office of the Deputy Under Secretary of Defense (Installations and Environment) responsible for overseeing BRAC implementation and associated BRAC implementation offices in the Army, the Navy, and the Air Force. Given the unprecedented number of BRAC 2005 closures and realignments, we generally focused our analysis on those recommendations that DOD either expects to cost the most or save the most. 
To assess the challenges DOD faces that might affect the implementation of the BRAC recommendations by the statutory completion deadline of September 15, 2011, we reviewed relevant documentation, including BRAC business plans, DOD presentations on BRAC implementation status, and prior GAO reports. We also interviewed officials in the Office of the Deputy Under Secretary of Defense (Installations and Environment) and associated BRAC offices, commands, and defense agencies that were implementing some of the most complex or costly BRAC realignments or closures to obtain the perspective of officials directly involved in BRAC implementation planning and execution. We also selected some of these installations or commands because they were responsible for implementing recommendations with a significant number of actions, such as the completion of construction and movement of personnel, expected to occur near the statutory deadline. At these locations, we discussed the specific challenges associated with implementing BRAC recommendations. In addition, we used DOD's annual report to Congress to identify estimated completion dates. Finally, we reviewed OSD's November 21, 2008, memo to the services and defense agencies responsible for implementing BRAC recommendations and assessed OSD's requirements for briefings on the status of BRAC implementation. To assess changes in DOD's reported cost and savings estimates since the fiscal year 2008 budget submission, we compared the fiscal year 2009 BRAC budget submission to the fiscal year 2008 budget submission. We used DOD's BRAC budget submissions because these documents are the most authoritative information that is publicly available for comparing BRAC cost and savings estimates and because these submissions are the basis on which DOD seeks appropriations from Congress. We then calculated dollar-amount differences for cost estimates and noted those recommendations that have increased the most in expected costs. 
To assess changes in DOD’s estimate of net annual recurring savings, we used OSD’s data provided to us for estimated savings in fiscal year 2012—the year after OSD expects all recommendations to be completed—because this data more fully captured these savings. We used OSD’s data for fiscal year 2008 and fiscal year 2009 to make comparisons. In addition, to determine expected 20-year savings—also known as the 20-year net present value—we used the same formulas and assumptions as DOD and the BRAC Commission used to calculate these savings. Specifically, we used DOD’s BRAC fiscal year 2009 budget data for expected costs and savings to implement each recommendation for fiscal years 2006 through 2011. We also used data that the OSD BRAC office provided us for expected net annual recurring savings after the completion of each recommendation for fiscal years 2012 to 2025. We then converted these data to fiscal year constant 2005 dollars using DOD price indexes to distinguish real changes from changes because of inflation. We used fiscal year 2005 dollars to calculate 20-year savings because the BRAC Commission also used fiscal year 2005 dollars for this calculation. Applying the same formulas and assumptions as used by the BRAC Commission, we used a 2.8 percent discount rate to calculate the accumulated net present value of expected 20-year savings. To assess the reliability of DOD’s BRAC cost and savings data, we tested computer-generated data for errors, reviewed relevant documentation, and discussed data quality control procedures with OSD BRAC officials. We determined that the data were sufficiently reliable for the purposes of making cost and savings comparisons for BRAC recommendations. We generally reported these estimated cost and savings in current dollars and not constant dollars except where noted. 
Finally, to evaluate the potential for BRAC cost and savings estimates to continue to change as the department proceeds with BRAC implementation, we interviewed officials from the Office of the Deputy Under Secretary of Defense (Installations and Environment), who are responsible for overseeing the implementation of BRAC recommendations and from associated BRAC implementation offices in the Army, Navy, and Air Force to discuss plans and procedures for updating these estimates. We also discussed plans and procedures for updating estimates with the Office of the Under Secretary of Defense (Comptroller). In addition, we discussed BRAC construction cost estimates with the U.S. Army Corps of Engineers because of its major role in planning and executing military construction projects. Further, we discussed cost and savings assumptions with officials from the military services responsible for implementing certain recommendations to better understand the potential for changes to cost and savings estimates. To obtain the perspective of installation and command officials directly involved in BRAC implementation planning and execution, we visited 12 installations, commands, or defense agencies affected by BRAC. We selected these installations and commands because they were among the closures or realignments that DOD projected to have significant costs or savings and to obtain a command-level perspective about BRAC implementation. 
The installations, commands, and defense agencies we visited were Army Forces Command, Fort McPherson, Georgia; Army Special Operations Command, Fort Bragg, North Carolina; Army Installation Management Command regions at Fort McPherson, Georgia; Fort Monroe, Virginia; and Fort Sam Houston, Texas; Army Training and Doctrine Command, Fort Monroe, Virginia; Garrison, Fort Belvoir, Virginia; Garrison, Fort Bliss, Texas; Garrison, Fort Sam Houston, Texas; Air Force’s Air Education and Training Command, Randolph Air Force Base, Texas; TRICARE Management Activity, Falls Church, Virginia; National Geospatial-Intelligence Agency, Fort Belvoir, Virginia; Naval Air Systems Command, Arlington, Virginia; and U.S. Army Corps of Engineers, Washington, D.C. Overall, we determined that the data for this report were sufficiently reliable for comparing cost and savings estimates and identifying broad implementation challenges. We conducted this performance audit from February 2008 to December 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II lists the BRAC recommendations that DOD expects to cost the most to implement, based on its fiscal year 2009 budget submission to Congress. DOD expects 30 recommendations (16 percent of the 182 recommendations) to generate about 72 percent of the one-time costs to implement BRAC recommendations from fiscal year 2006 through September 15, 2011, as shown in table 3. Appendix III lists the individual BRAC recommendations that DOD expects to save the most annually after it has implemented the recommendations, based on its fiscal year 2009 budget submission.
DOD expects 24 recommendations (13 percent of the 182 recommendations) to generate more than 80 percent of the net annual recurring savings, as shown in table 4. Appendix IV lists the individual BRAC recommendations that DOD expects to save the most over a 20-year period. DOD expects 30 recommendations (16 percent) to generate more than 85 percent of the 20-year savings, as shown in table 5. In addition to the individual named above, Laura Talbott, Assistant Director; Vijay Barnabas; John Beauchamp; Susan Ditto; Gregory Marchand; Richard Meeks; and Charles Perdue made key contributions to this report.

Military Base Realignments and Closures: Army Is Developing Plans to Transfer Functions from Fort Monmouth, New Jersey, to Aberdeen Proving Ground, Maryland, but Challenges Remain. GAO-08-1010R. Washington, D.C.: August 13, 2008.

Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth. GAO-08-665. Washington, D.C.: June 17, 2008.

Defense Infrastructure: DOD Funding for Infrastructure and Road Improvements Surrounding Growth Installations. GAO-08-602R. Washington, D.C.: April 1, 2008.

Defense Infrastructure: Army and Marine Corps Grow the Force Construction Projects Generally Support the Initiative. GAO-08-375. Washington, D.C.: March 6, 2008.

Military Base Realignments and Closures: Higher Costs and Lower Savings Projected for Implementing Two Key Supply-Related BRAC Recommendations. GAO-08-315. Washington, D.C.: March 5, 2008.

Defense Infrastructure: Realignment of Air Force Special Operations Command Units to Cannon Air Force Base, New Mexico. GAO-08-244R. Washington, D.C.: January 18, 2008.

Military Base Realignments and Closures: Estimated Costs Have Increased and Estimated Savings Have Decreased. GAO-08-341T. Washington, D.C.: December 12, 2007.

Military Base Realignments and Closures: Cost Estimates Have Increased and Are Likely to Continue to Evolve. GAO-08-159. Washington, D.C.: December 11, 2007.

Military Base Realignments and Closures: Impact of Terminating, Relocating, or Outsourcing the Services of the Armed Forces Institute of Pathology. GAO-08-20. Washington, D.C.: November 9, 2007.

Military Base Realignments and Closures: Transfer of Supply, Storage, and Distribution Functions from Military Services to Defense Logistics Agency. GAO-08-121R. Washington, D.C.: October 26, 2007.

Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007.

Military Base Realignments and Closures: Plan Needed to Monitor Challenges for Completing More Than 100 Armed Forces Reserve Centers. GAO-07-1040. Washington, D.C.: September 13, 2007.

Military Base Realignments and Closures: Observations Related to the 2005 Round. GAO-07-1203R. Washington, D.C.: September 6, 2007.

Military Base Closures: Projected Savings from Fleet Readiness Centers Are Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges. GAO-07-304. Washington, D.C.: June 29, 2007.

Military Base Closures: Management Strategy Needed to Mitigate Challenges and Improve Communication to Help Ensure Timely Implementation of Air National Guard Recommendations. GAO-07-641. Washington, D.C.: May 16, 2007.

Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property. GAO-07-166. Washington, D.C.: January 30, 2007.

Military Bases: Observations on DOD’s 2005 Base Realignment and Closure Selection Process and Recommendations. GAO-05-905. Washington, D.C.: July 18, 2005.

Military Bases: Analysis of DOD’s 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005.
The 2005 Base Realignment and Closure (BRAC) round is the biggest, most complex, and costliest BRAC round ever. In addition to base closures, many recommendations involve realignments, such as returning forces to the United States from bases overseas and creating joint bases. However, anticipated savings remained an important consideration in justifying the need for the 2005 BRAC round. The House report on the National Defense Authorization Act for Fiscal Year 2008 directed GAO to monitor BRAC implementation. Therefore, GAO assessed (1) challenges that might affect timely completion of recommendations, (2) any changes in DOD's reported cost and savings estimates since fiscal year 2008, and (3) the potential for estimates to continue to change. To address these objectives, GAO reviewed documentation and interviewed officials in the Office of the Secretary of Defense (OSD), the services' BRAC offices, and the Army Corps of Engineers; visited installations implementing some of the more costly realignments or closures; and analyzed BRAC budget data for fiscal years 2008 and 2009. DOD has made progress in implementing the BRAC 2005 round but faces challenges in its ability to meet the September 15, 2011, statutory completion deadline. DOD expects almost half of the 800 defense locations implementing BRAC recommendations to complete their actions in 2011; however, about 230 of these almost 400 locations anticipate completion within the last 2 weeks of the deadline. Further, some of these locations involve some of the most costly and complex BRAC recommendations, which have already incurred some delays and thus have little leeway to meet the 2011 completion date if any further delays occur. Also, DOD must synchronize relocating about 123,000 personnel with an estimated $23 billion in facilities that are still being constructed or renovated, but some delays have left little time in DOD's plans to relocate these personnel by the deadline. 
Finally, delays in interdependent recommendations could have a cascading effect on whether other recommendations are completed on time. OSD recently issued guidance requiring the services and defense agencies to provide status briefings to improve oversight of issues affecting timely implementation of BRAC recommendations. However, this guidance did not establish a regular briefing schedule or require the services to provide information about possible mitigation measures for any BRAC recommendations at risk of not meeting the statutory deadline. DOD's fiscal year 2009 BRAC budget submission shows that DOD plans to spend more to implement recommendations and save slightly less compared with the 2008 BRAC budget. DOD's 2009 estimate of one-time costs to implement this BRAC round increased by $1.2 billion to about $32.4 billion. Net annual recurring savings estimates decreased by almost $13 million to about $4 billion. Also, GAO's calculations of net present value, which include both expected costs and savings over a 20-year period ending in 2025 and take into account the time value of money, show that implementing the 2005 BRAC recommendations is expected to save $13.7 billion. This compares with an estimated $15 billion in net present value savings based on last year's BRAC budget and the BRAC Commission's reported estimate of about $36 billion. Although DOD is about 3½ years into the 6-year implementation period, the potential remains for BRAC cost estimates to continue to increase, but the potential for changes in savings estimates is unclear. Greater than expected inflation and increased market demand for construction materials could cause estimated construction costs to increase, although the extent of any such increase is uncertain given today's economic conditions.
However, the potential for changes in savings estimates is unclear because BRAC headquarters officials at both the Army and the Air Force told us they do not plan to update their savings estimates regardless of factors that may cause those estimates to change, and OSD is not enforcing its own regulation requiring them to do so. Hence, congressional and defense decision makers could be left with an unrealistic sense of the savings this complex and costly BRAC round may actually produce, an issue that could be important in considering whether another round of BRAC may be warranted.
The Joint Forces Command, in coordination with the Joint Staff, the services, and other combatant commands and DOD agencies, is responsible for creating and exploring new joint war-fighting concepts, as well as for planning, designing, conducting, and assessing a program of joint experimentation. The Command executed its second large-scale field experiment, Millennium Challenge 2002, this year, and it plans another one in 2004 and others every third year thereafter. These experiments are intended to examine how well the concepts previously explored by the Command in smaller venues will work when applied with the emerging concepts being developed by the services and other combatant commands. For example, Millennium Challenge 2002 tested how well U.S. forces fared against a regional power with a sizable conventional military force and so-called “anti-access” capabilities—which can include advanced surface-to-air missiles, antiship missiles and mines, and chemical and biological weapons—and validated the results of earlier experiments to develop the Command’s “rapid decisive” operations concept. The aim of the experiment was to identify changes that can be made during the current decade. (App. I provides a chronology of major events important to joint experimentation.) Over the next several years, the Command’s experimentation will focus primarily on two concepts: one to develop a standing joint force headquarters to improve joint command and control, and another to conduct more effective joint operations through “rapid decisive” operations. In November 2001, the Chairman of the Joint Chiefs of Staff directed the Command to make development of the prototype headquarters its highest near-term priority. Additionally, the Command will develop a number of other concepts aimed at specialized issues or operational problems to support the two primary concepts.
Joint experimentation is a continuous process that begins with the development of new operational and organizational concepts that have the potential to significantly improve joint operations (see fig. 1). The Joint Forces Command identifies new joint concepts, including those developed by other DOD organizations (such as the Joint Staff, services, and combatant commands) and the private sector, and tests them in experiments that range from simple (workshops, seminars, war games, and simulations) to complex (large-scale virtual simulations and “live” field experiments). Appendix II provides additional information on joint experimentation program activities. After analyzing experimentation data, the Command prepares and submits recommendations to the Joint Requirements Oversight Council for review and, ultimately, to the Chairman of the Joint Chiefs of Staff for approval. Before submitting them to the Council, however, the Command submits its recommendations to the Joint Staff for preliminary review and coordination. The recommendations are distributed for review and comment to the Joint Staff directorates, the military services, the combatant commands, and other DOD and federal government organizations. The Council then reviews the recommendations and advises the Chairman of the Joint Chiefs of Staff on whether they should be approved. The changes, if approved, provide the basis for pursuing the capabilities needed to implement a specific operational concept. The Council is also responsible for overseeing the implementation of the recommendations, but it can designate an executive agent, such as the Joint Forces Command, to do so. The Council (or its designated executive agent) is responsible for obtaining the resources needed to implement the recommendations through DOD’s Planning, Programming, and Budgeting System.
The Council also assists the Chairman, in coordination with the combatant commands, the services, and other DOD organizations, to identify and assess joint requirements and priorities for current and future military capabilities. The Council considers requirements (and any proposed changes) for joint capabilities across doctrine, organizations, training, materiel, leadership and education, personnel, and facilities. The Department of the Navy’s budget provides funding to the Joint Forces Command for joint experimentation and other Command missions. In fiscal year 2002, the Command received from the Navy about $103 million for its joint concept development and experimentation program, and it planned to spend about half of this amount for Millennium Challenge 2002. The Command has requested that the Navy provide about $98 million for the program in fiscal year 2003. The Command also provides some funds to the services, the combatant commands, and other DOD organizations for efforts that support its program activities. However, the services fund the operations and support costs of forces participating in joint experimentation. Also, the individual experimentation efforts of the services and the combatant commands are funded from within their own budgets. Since it first began joint experimentation, the Joint Forces Command has broadened and deepened the inclusion of other DOD organizations, federal agencies and departments, the private sector, and allies and coalition partners in its process for capturing and identifying new joint ideas and innovations. Organizations participating in joint experimentation are generally satisfied with current opportunities for their ideas to be considered, and many have increased their participation in the program. However, the participation of different stakeholders—the extent of which is determined by the stakeholder—varies considerably and some would like more visits and contacts with the Command. 
The Command is planning initiatives to increase stakeholder participation in the future, particularly for federal agencies and departments and key allies, but this increase will depend on agency-resource and national-security considerations. As the program gradually evolved, the Joint Forces Command solidified a process to involve the military services, the combatant commands, and other DOD organizations in the planning and execution of its joint experimentation activities. Because future joint operations will involve diplomatic, information, and economic actions, as well as military operations, many DOD, federal, and private organizations and governments participate and provide input into the joint experimentation program (see table 1). The Joint Forces Command functions as a facilitator to solicit and coordinate the involvement of these organizations and incorporate their input, as appropriate, into concept development and experimentation activities. Because the stakeholders determine the extent of their participation in the program, it can vary considerably. However, Joint Forces Command officials stated that participation by the services, the combatant commands, and other DOD organizations has grown steadily since the program was created and continues to grow, as participants become increasingly aware of the strong emphasis that DOD leaders are placing on experimentation. For example, in contrast to the first field experiment in 2000, which had limited involvement by the services, this year’s Millennium Challenge has seen the services more actively involved in early planning, and their individual experiments better coordinated and integrated into the field experiment. Our comparison of participation in the Command’s major field experiment in 2000 with plans for this year’s experiment found a significant increase in the diversity and number of participating organizations and in the number of concepts and initiatives proposed by these organizations. 
For example, the total number of organizations participating in Millennium Challenge 2002 more than doubled from the prior experiment in 2000 (from 12 to 29 organizations), and the total number of service initiatives increased from 4 to 29. The Command provides several ways for organizations to participate and provide inputs: they can review program plans and strategies; attend meetings, seminars, and workshops; take part in experimentation activities; and use various communication tools such as E-mail, Internet, and video conferencing. Additionally, the Command obtains input from the various experimentation and research and development organizations of the military services and of some combatant commands and DOD organizations. The Command also considers the results of Advanced Concept Technology Demonstrations efforts, innovations, and recent military operations in developing its program. For example, as a result of its operational experiences in Kosovo, the U.S. European Command identified various joint capability shortfalls in its recent list of Command priorities as a means of guiding the Joint Forces Command in selecting focal areas and activities for experimentation. Further, the Command is taking steps to (1) align its experimentation activities with the schedules of major service and combatant command exercises and (2) adjust its program to allow for earlier consideration of new concepts proposed by the services and the combatant commands in the input process. These adjustments would improve synchronization of experiments with the availability of forces and the training schedules of the services and the combatant commands, allow for greater involvement of these entities in the process, and increase the likelihood that joint requirements are sufficiently considered early in the development of concepts. 
Participating organizations also provide input during the annual preparation of two key joint experimentation-program documents: the Chairman of the Joint Chiefs of Staff’s guidance on joint experimentation and the Joint Forces Command’s Joint Concept Development and Experimentation Campaign Plan (see fig. 2). Each year the Chairman provides guidance to the Joint Forces Command to use in developing its Campaign Plan for joint concept development and experimentation. The basis for the Chairman’s guidance is derived from several sources, including strategy and planning documents, studies, and other assessments. Additionally, key DOD stakeholders, including the Chairman’s Joint Warfighting Capability Assessment teams and the Joint Requirements Oversight Council, provide input to the Joint Staff to use in developing the Chairman’s guidance. The Joint Forces Command uses this guidance, with additional input from DOD stakeholders, in preparing its Campaign Plan, which is the primary vehicle for synchronizing its joint experimentation activities and coordinating resources. The Command also solicits and considers input for the Campaign Plan from some other federal agencies and departments, academia, private sector, and allies. After review and endorsement by the combatant commands, the services, and the Joint Requirements Oversight Council, the Chairman approves the Campaign Plan. Officials at the military services, the combatant commands, and other DOD organizations we talked with said they were generally satisfied with the opportunities for input provided by the Joint Forces Command. At the same time, DOD stakeholders have taken various actions to increase their participation. Some, however, would like more contacts and communication with the Command. The Command is responding with some initiatives. Each service, the Joint Staff, the U.S. Special Operations Command, the U.S. 
Space Command, as well as some DOD and federal agencies (such as the National Imagery and Mapping Agency and the National Security Agency), have assigned liaison officers at the Joint Forces Command. However, officials at the Central, Pacific, and Southern Commands stated that their staffing levels currently do not allow them to devote personnel to this role. Combatant command officials indicated that the frequency and number of meetings, conferences, and other events held at the Joint Forces Command often make it difficult for their organizations to attend. The officials believe that, as a result, the views and positions of their organizations are not always fully captured in some discussions and deliberations. Some of the combatant commands have established, or are planning to establish, their own joint experimentation offices. Officials from the Pacific and Special Operations Commands stated that although their respective joint experimentation offices are largely focused on supporting their own experimentation efforts, the offices provide a cadre of staff who can better coordinate and participate more consistently in the Joint Forces Command’s joint experimentation program. For example, Pacific Command officials said that their own experimentation efforts to improve the command of joint operations over the past few years have contributed to joint experimentation by providing significant insights for the Joint Forces Command’s development of the standing joint-force headquarters concept. Central Command and Southern Command officials said their Commands have plans to establish similar offices soon. While satisfied with their participation and their ability to provide input into the program, officials at some combatant commands believe that a number of things could be done to improve the program, assuming resources are available. They believe that the Joint Forces Command could increase its visits to and participation in combatant-command activities.
Some of the officials also believe that if the Joint Forces Command assigned liaison officers to their commands, the Command would obtain first-hand knowledge and a better appreciation of the various commands’ individual requirements. These officials believe that such a presence at their commands would demonstrate the Joint Forces Command’s commitment to joint experimentation and would allow for interaction with staff throughout their commands. The Joint Forces Command does not favor doing this because of the cost and the difficulty in providing the staff necessary to fulfill this role. Officials at the Pacific, Central, and Southern Commands also believe that some level of funding should be provided to the combatant commands for their use in supporting individual command and the Joint Forces Command experimentation efforts. Combatant command officials stated that currently, funds from other command activities must be diverted to support these efforts. Out of concern about the need to improve communications and participation in joint experimentation planning, the Joint Forces Command is planning some initiatives such as the following: It plans to create a virtual planning-center site for joint experimentation on its Intranet to provide DOD stakeholders with easily accessible weekly updates to information on planned experiments; participants; goals and objectives; and ongoing experimentation by the Joint Forces Command, the services, the combatant commands, and DOD agencies. It plans to develop the requirements for the site during fall 2002 and to initiate the project soon after. It established Project Alpha—a “think-tank” group—in early 2002 to provide another source of input and outputs. The project will interface with researchers throughout DOD, Department of Energy national laboratories, private industry, and academia to find cutting-edge technologies for inclusion in service and joint experimentation. 
This relationship will provide an opportunity for the Joint Forces Command to leverage the work of these organizations and, similarly, for these organizations to gain a better understanding of and include their work in the joint experimentation program. As the joint experimentation program matured, participation by non-DOD federal agencies and departments gradually increased. Participation, however, depends upon the agencies’ desire to be involved and their available resources. Lack of involvement could lead to missed opportunities. In addition, participation by allies and coalition partners has been limited by security concerns. The Joint Forces Command’s input process allows individual federal agencies and departments, such as the Departments of State and Justice, to participate in joint experimentation events as they choose. Interagency participation is improving, according to Command officials. For example, federal agencies and departments are participating in Millennium Challenge 2002 to assist the Command in developing its standing joint-force headquarters concept. However, resource and staffing constraints prevent some agencies and departments from taking part in experiments. For example, according to a Joint Forces Command official, the Department of Transportation and the Central Intelligence Agency decided not to send representatives to Millennium Challenge 2002 because of staffing constraints. Not only could non-DOD agencies provide important insights and contributions to joint operations, but some important opportunities could also be missed if these agencies do not consistently participate in joint experimentation events. While federal agencies and departments are beginning to increase their role in joint experimentation, several service and combatant command officials we spoke with believe that greater involvement is needed because of the role these organizations are likely to have in future joint operations.
For example, these non-DOD federal agencies and departments would provide support (economic, diplomatic, and information actions) to U.S. military forces in their conduct of operations aimed at defeating an adversary’s war-making capabilities—support that is critical to implementation of the Joint Forces Command’s rapid decisive operations concept. Several DOD (service, combatant command, Office of the Secretary of Defense, and other DOD organizations) officials we spoke with believe that the Joint Forces Command should explore ways to boost the participation and involvement of allies and coalition partners in joint experimentation. Joint Forces Command officials agree and believe that such cooperation would foster a better understanding of allied perspectives, allow the Command to leverage concept development work, expand available capabilities, and facilitate the development of multinational capabilities. The Command recently created a multinational concept-development and experimentation site on its Intranet to facilitate the involvement of allies and coalition partners in joint experimentation. However, some DOD officials believe that the Joint Forces Command should do more because future U.S. military operations will likely be conducted with other countries. The officials stress that other nations’ military personnel should be included in experiments to develop new operational concepts, if these concepts are to be successful. Joint Forces Command officials pointed out, however, that the participation and involvement of other countries are often constrained by restrictions on access to sensitive security information. For example, North Atlantic Treaty Organization countries only participated as observers in Millennium Challenge 2002 because of security information restrictions. The Command, however, plans to develop ways to better handle these restrictions to allow greater participation by other nations in its next major field experiment in 2004. 
Nearly 4 years after the program was established, only three recommendations have flowed from the joint experimentation program, and none of them have been approved. Confusion about proposed changes in guidance regarding the information required for submitting these recommendations has partly delayed their approval. At the time we concluded our review, official guidance on what information should accompany joint experimentation recommendations had not been approved. In addition, several DOD officials expressed concern that the process used to review and approve recommendations, the same as that used for major acquisition programs, may not be the most appropriate for a program whose aim is to integrate changes quickly. However, the officials could not pinpoint any specific impasses in the approval process. The DOD officials are also concerned about potential delays in the integration of new concepts because of the lengthy DOD resource allocation process. The Joint Forces Command submitted one recommendation to the Chairman of the Joint Chiefs of Staff in August 2001 and two more in November 2001 (see table 2). At the time we ended our review, none of the recommendations had been approved. The recommendations to improve the planning and decision-making capabilities of joint forces and provide better training for personnel conducting theater missile defense operations were based on analyses of results of experiments carried out in the first 3 years of joint experimentation. Inputs included two major experiments: Millennium Challenge 2000 (live field experiment in August-September 2000) and the Unified Vision 2001 (virtual simulation experiment in May 2001). The first recommendation was submitted for review just 3 months after the end of the last experiment. According to a Joint Staff official, however, approval of the recommendations has been delayed because Joint Forces Command and Joint Staff officials were confused about proposed changes in guidance. 
In May 2001, the Joint Requirements Oversight Council proposed new guidance, which would require that information on costs and timelines be included in joint experimentation recommendations. Prior guidance did not require such information. Although the recommendations went through preliminary review by the Joint Staff, the omission was not caught until the recommendations were to be scheduled for review by the Joint Requirements Oversight Council. Joint Forces Command officials told us that they were not aware of the change in guidance until that time. When we ended our review, Joint Forces Command officials were working with the Joint Staff to assess how much data could be prepared and when. Command officials said that the recommendations will be resubmitted in fall 2002 together with other recommendations emerging from Millennium Challenge 2002. As a result, no recommendations have yet been reviewed or approved. Also, at the time we ended our review, the draft guidance on joint experimentation recommendations had not been approved and issued. This guidance will become especially important because joint experimentation is expected to produce new recommendations more rapidly as the program matures. The requirement for costs and timeline data is consistent with that of recommendations for major weapon-system-acquisition programs. However, joint experimentation officials at the Joint Forces Command believe that requiring this type of information on joint-experimentation recommendations may not be appropriate because (1) these recommendations are generally intended to convince decision makers to develop particular joint capabilities, not specific weapon systems; (2) the new requirement may slow the preparation of future recommendations; and (3) it will be difficult to provide accurate estimates of costs and timelines for recommendations that span further into the future. It is too early to determine whether these concerns are valid. 
Some DOD officials were also concerned that the system currently used to allocate resources to implement joint-experimentation recommendations—DOD’s Planning, Programming, and Budgeting System—may not be the most efficient because it usually takes a long time to review, approve, and provide funding in future budgets. A recommendation approved in 2002, for example, would not be incorporated into DOD’s budget until 2004 or even later. This delay could result in missed opportunities for more rapid implementation. A Joint Staff official told us that the Joint Staff and the Joint Forces Command recently adjusted the timing of events to better align the joint experimentation process with the Planning, Programming, and Budgeting System. Additionally, DOD established a special fund for the Joint Forces Command to use as a temporary funding source to speed up the implementation of certain critical or time-sensitive recommendations. This source will provide early funding for implementation until funding is provided through DOD’s Planning, Programming, and Budgeting System. However, Joint Forces Command and other DOD officials believe other ways to implement new joint capabilities within the framework of existing budget and oversight practices may need to be considered. DOD has been providing more specific and clearer guidance on its goals, expectations, and priorities for the joint experimentation program. Nevertheless, the management of joint experimentation is missing a number of key elements that are necessary for program success: some roles and responsibilities have not yet been defined; current performance measures are not adequate to assess progress; and the Joint Forces Command lacks strategic planning tools for the program. 
DOD officials stated that the joint experimentation program had difficulty in its first years because guidance was evolving and was not specific: DOD’s transformation goals were not adequately linked to transformation efforts, and roles and responsibilities were not clearly defined. Over time, the Secretary of Defense and the Chairman of the Joint Chiefs of Staff have provided more specific guidance on the goals and expectations for joint experimentation and its contribution to DOD’s transformation efforts. Guidance for joint experimentation has evolved gradually over the program’s nearly 4-year life span, partly because of shifting defense priorities and lack of clarity about the roles of various DOD stakeholders. Roles and responsibilities have also matured with the program. The Secretary of Defense’s 2001 Quadrennial Defense Review Report established six transformation goals, which include improving U.S. capabilities to defend the homeland and other bases of operations, denying enemies sanctuary, and conducting effective information operations. According to DOD officials, the Secretary of Defense’s most recent planning guidance tasked the Joint Forces Command to focus its experimentation on developing new joint operational concepts for these goals. To begin meeting these goals, the Chairman has also provided the Joint Forces Command with clarifying guidance that identified specific areas for the Command to include in its experimentation, such as the development of a standing joint-force headquarters concept and of a prototype to strengthen the conduct of joint operations. The Command has reflected this new guidance in its latest Joint Concept Development and Experimentation Campaign Plan. Additionally, the Secretary of Defense reassigned the Command’s geographic responsibilities to focus it more clearly on its remaining missions, particularly transformation and joint experimentation. 
DOD officials at both headquarters and in the field believe that the recent guidance begins to provide a better framework for the Joint Forces Command to establish and focus its joint experimentation efforts. Some officials, however, believe that future guidance should further clarify the link between joint experimentation and DOD priorities and the resources required to support joint experimentation. DOD, in its comments to a draft of this report, stated that it expects the Transformation Planning Guidance—currently being prepared by the Office of the Secretary of Defense—will establish the requirements necessary to link experimentation to changes in the force. While roles and responsibilities for DOD organizations are now broadly defined, the new DOD Office of Force Transformation’s role in joint experimentation and its relationship to other stakeholders have not yet been clearly established. The Office’s charter or terms of reference have not been released. DOD plans to issue a directive later this year that will include a charter and description of the Office’s authorities and responsibilities. However, there is still uncertainty about the extent of authority and involvement the Office will have in the joint experimentation program and the Office’s ability to link the program with DOD’s overall transformation efforts. Joint Forces Command and other DOD officials consider a transformation advocate in the Office of the Secretary of Defense to be a beneficial link between the Joint Forces Command’s, the services’, and the combatant commands’ joint experimentation programs and DOD’s overall transformation agenda. According to DOD’s 2001 Quadrennial Defense Review Report, the Office of Force Transformation, created in November 2001, is to play a role in fostering innovation and experimentation and should have an important responsibility for monitoring joint experimentation and for providing the Secretary of Defense with policy recommendations. 
An Office of Force Transformation official told us that the Office will be an advocate for transformation and will help develop guidance and make recommendations on transformation issues to the Secretary of Defense (the Office provided comments on the Secretary’s annual planning guidance and developed instructions for the services on preparing their first transformation road maps). The Office has also decided to take a cautious approach in carrying out its mission because of possible resistance from other DOD organizations, the same official said. The Office plans to offer its assistance to DOD organizations in their transformation efforts and attempt to influence their thinking on key issues, rather than inserting itself directly into their efforts, for example by funding military use of existing private-sector technology to act as a surrogate for evaluating possible concepts, uses, and designs. Joint Forces Command officials stated that as of May 2002, they had had only limited discussions with the Office and had not established any working agreements on how the Office would participate in the joint experimentation program. The Office of Force Transformation has only recently assembled its staff and is beginning to plan its work and establish contacts within DOD and with other organizations. The Office’s budget for fiscal years 2002 and 2003 is about $18 million and $35 million, respectively. DOD’s performance measures (or metrics) for assessing joint experimentation—by measuring only the number of experiments carried out—do not provide a meaningful assessment of the program’s contribution toward meeting its performance goal for military transformation because they are only quantitative. 
Consistent with good management practices and the purposes of the Government Performance and Results Act of 1993, federal agencies devise results-oriented metrics that assess program outcomes, that is, the difference programs actually make. In its fiscal year 2000 performance report, the most recent it has issued, DOD described the performance indicators for the joint experimentation program in terms of the number of experiments conducted against a target goal for the prior, current, and following fiscal years. In fiscal year 2000, DOD exceeded its target number of experiments and did not project any shortfalls in meeting its target in the next fiscal year. Although this measure does provide a quantitative assessment of experimental activity, it does not provide a meaningful method for assessing how joint experimentation is helping to advance military transformation. An Office of the Secretary of Defense official stated that DOD recognizes that better performance measures are needed both for assessing how joint experimentation advances transformation and for two other metrics currently used to assess its military transformation goal. The official stated that developing such measures is a challenge because joint experimentation does not easily lend itself to traditional measurement methods. For example, most programs consider a failure a negative event, but in joint experimentation, a failure can be considered a success if it provides insights or information that is helpful in evaluating new concepts or the use of new technologies. An Office of the Secretary of Defense official told us that the RAND Corporation and the Institute for Defense Analyses recently completed studies to identify possible performance measures for assessing the progress of transformation. 
DOD is evaluating them and is preparing the Transformation Planning Guidance to provide more specific information on the priorities, roles, and responsibilities for executing its transformation strategy. The same official stated that the new guidance will include a discussion of the types of performance measures needed for assessing transformation progress or will assign an organization to determine them. In either case, measures will still need to be developed and implemented. DOD plans to issue the new guidance later in 2002 but has not determined how new performance measures would be incorporated into its annual performance report. The Joint Forces Command has not developed the strategic planning tools—a strategic plan, an associated performance plan, and performance-reporting tools—for assessing the performance of the joint experimentation program. Strategic planning is essential for this type of program, especially considering its magnitude and complexity and its potential implications for military transformation. Such planning provides an essential foundation for defining what an organization seeks to accomplish, identifies the strategy it will use to achieve desired results, and then determines—through measurement—how well it is succeeding in reaching results-oriented goals and achieving objectives. Developing strategic-planning tools for the joint experimentation program would also be consistent with the principles set forth in the Government Performance and Results Act of 1993, which is the primary legislative framework for strategic planning in the federal government. The Joint Forces Command prepares an annual Joint Concept Development and Experimentation Campaign Plan that broadly describes the key goals of its program, the strategy for achieving these goals, and the planned activities. 
However, a February 2002 progress report on the development of the Joint Experimentation Directorate’s performance management system, prepared by that Directorate at the Joint Forces Command, indicated that one-fourth of the organizations providing feedback on the Campaign Plan believed that the Plan lacks specific goals and objectives and an associated action plan outlining the activities needed to achieve those goals. Officials we spoke with at the military services, the combatant commands, and the Joint Forces Command all cited the need for more specific and clearer goals, objectives, and performance measures for the program. In the progress report, the Command acknowledged the benefits of strategic planning and the use of this management tool to align its organizational structure, processes, and budget to support the achievement of missions and goals. The report proposed that the Command develop a strategic plan, possibly by modifying its annual Campaign Plan, and subsequently prepare a performance plan and a performance report. Command officials indicated that the basic requirements of a strategic plan could be incorporated into the Campaign Plan, but they were unsure, if such an approach were taken, whether the changes could be made before the annual Campaign Plan is issued later this year. Similarly, the Joint Forces Command has had difficulty in developing specific performance measures for joint experimentation. A Command official stated that the Command has tried to leverage the performance measures developed by similar organizations, but found that there is widespread awareness throughout the research and development community, both within and outside DOD, that such measures are needed but do not exist. 
Additionally, a Joint Forces Command official stated that whatever metrics the Command develops must be linked to its mission-essential tasks for joint experimentation and that the Command is currently developing these tasks. At the time we ended our review, the Command had identified six broad areas for which specific metrics need to be developed. These included quality of life, customer relationships, and experimentation process management. After nearly 4 years, the Joint Forces Command’s process for obtaining inputs for the development and execution of DOD’s joint experimentation program has become more inclusive. However, questions continue about whether the program is the successful engine for change envisioned when it was established. Since the program’s inception, only three recommendations have flowed from experimentation activities, and their review, approval, and implementation have been delayed by confusion over a change in guidance requiring that additional information be included in the recommendations. As a result, no recommendations for change have been approved or implemented to date. To the extent that the draft guidance on what should be submitted with joint experimentation recommendations can be officially approved and issued, future recommendations could be submitted for approval and implementation more quickly. Underscoring the need to finalize the guidance are the recommendations anticipated after this year’s major field experiment, Millennium Challenge 2002. The lack of strategic planning for joint experimentation deprives the Joint Forces Command of necessary tools to effectively manage its program. 
Implementation of strategic planning at the Joint Forces Command would create a recurring and continuous cycle of planning, program execution, and reporting and establish a process by which the Command could measure the effectiveness of its activities as well as a means to assess the contributions of those activities to the operational goals and mission of the program. Such planning could also provide a tool—one that is currently missing—to identify strengths and weaknesses in the development and execution of the program and a reference document for the effective oversight and management of the program. Performance measures developed under the Command’s strategic planning could provide the standard for assessing other experimentation efforts throughout DOD, which are also lacking such metrics. The lack of a meaningful performance measure for assessing the contribution of the joint experimentation program to advancing DOD’s transformation agenda limits the usefulness of this management tool in assisting congressional and DOD leaders in their decision-making responsibilities. Establishing a “meaningful” joint experimentation performance measure for its annual performance report would provide congressional and DOD leadership with a better assessment of the program’s contribution and progress toward advancing transformation. Such a metric would also be consistent with the intent of the Results Act to improve the accountability of federal programs for achieving program results. Because the role and relationships of the Secretary of Defense’s new Office of Force Transformation have not yet been clarified, the Secretary may not be effectively using this office in DOD’s transformation efforts. This office, if given sufficient authority, could provide the Secretary with a civilian oversight function to foster and monitor the joint experimentation program to ensure that it is properly supported and provided resources to advance DOD’s overall transformation agenda. 
Rectifying these shortcomings is critical in view of the importance that DOD has placed on joint experimentation to identify the future concepts and capabilities for maintaining U.S. military superiority. To improve the management of DOD’s joint experimentation program, we recommend that the Secretary of Defense (1) direct the Chairman of the Joint Chiefs of Staff to approve and issue guidance that clearly defines the information required to accompany joint experimentation recommendations for the Joint Requirements Oversight Council’s review and approval and (2) require the Commander in Chief of the U.S. Joint Forces Command to develop strategic planning tools to use in managing and periodically assessing the progress of its joint experimentation program. We further recommend that the Secretary of Defense (1) develop both quantitative and qualitative performance measures for joint experimentation in DOD’s annual performance report to provide a better assessment of the program’s contribution to advancing military transformation and (2) clarify the role of the Office of Force Transformation and its relationship to the Chairman of the Joint Chiefs of Staff, the Joint Forces Command, and other key DOD stakeholders in DOD’s joint experimentation program. We received written comments from DOD on a draft of this report, which are included in their entirety as appendix III. DOD agreed with our recommendations and indicated that it expects that a forthcoming Transformation Planning Guidance and subsequent guidance will be responsive to them by clarifying roles and missions across DOD, implementing recommendations for changes, and establishing clear objectives. We believe such strategic guidance from the Secretary of Defense could provide a significant mechanism for better linking and clarifying the importance of the joint experimentation program with DOD’s transformation agenda. DOD also provided technical comments to the draft that were incorporated in the report where appropriate. 
To determine the extent to which the Joint Forces Command obtains input from stakeholders and other relevant sources in developing and conducting its joint experimentation activities, we reviewed an array of documents providing information about participants in joint experimentation, including guidance and other policy documents, position papers, fact sheets, reports, and studies of the military services, the combatant commands, the Joint Staff, and other DOD organizations. We also reviewed Joint Forces Command plans and reports. Additionally, we made extensive use of information available on public and DOD Internet web sites. To assess the change in participation by various stakeholders over time, we compared the differences in the numbers of participating organizations and initiatives provided by these organizations between the Joint Forces Command’s first two major field experiments in 2000 and 2002 (Millennium Challenge 2000 and Millennium Challenge 2002). We conducted discussions with officials at five combatant commands, the Joint Staff, the military services, and other DOD organizations, such as the Joint Advanced Warfighting Program and the Defense Advanced Research Projects Agency. Appendix IV lists the principal organizations and offices where we performed work. At the Joint Forces Command, we discussed with joint experimentation officials the process for soliciting and incorporating inputs for joint experimentation from the military services and the combatant commands. We also attended conferences and other sessions hosted by the Joint Forces Command to observe and learn about joint experimentation participants and their contributions and coordination. For example, we attended sessions for the Command’s preparation of its annual Joint Concept Development and Experimentation Campaign Plan and planning for this year’s Millennium Challenge experiment. 
With officials from each of the services and the combatant commands, we discussed perceptions of the effectiveness of coordination and participation in joint experimentation. We also obtained observations about participants’ involvement from several defense experts who track joint experimentation and military transformation. Although we did not include a specific assessment of the individual experimentation efforts of the services and combatant commands, we did discuss with service and command officials how their efforts were coordinated and integrated into joint experimentation. We also did not determine the extent to which individual inputs obtained from various participating organizations were considered and incorporated into the joint experimentation program. To determine the extent to which recommendations flowing from the joint experimentation process have been approved and implemented, we reviewed and analyzed data that tracked the progress of the first three joint experimentation recommendations submitted by the Joint Forces Command. We also obtained and analyzed relevant guidance and held discussions with Joint Staff, Joint Forces Command, and Office of the Secretary of Defense officials on the Joint Requirements Oversight Council process for reviewing and approving joint experimentation recommendations. We also discussed issues relating to implementation of joint experimentation recommendations through DOD’s Planning, Programming, and Budgeting System. To assess whether key management elements, such as policy, organization, and resources, were in place for the program, we conducted a comprehensive review of current legislative, policy, planning, and guidance documents and reports and studies. We used the principles laid out in the Government Performance and Results Act of 1993 as an additional benchmark for assessing the adequacy of performance measures established for the program and of tools used to manage the program. 
We also discussed the status and evolution of joint experimentation oversight and management, including office roles and responsibilities and joint experimentation metrics, with officials at the Joint Forces Command, the Joint Staff, the services, the combatant commands, the Office of the Secretary of Defense, the Office of Force Transformation, and other DOD organizations. Several defense experts who follow joint experimentation and military transformation discussed with us joint experimentation oversight and management and gave us their impressions regarding current joint experimentation management practices. Our review was conducted from October 2001 through May 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees, the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, and the Commander in Chief, U.S. Joint Forces Command. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Richard G. Payne at (757) 552-8119 if you or your staff have any questions concerning this report. Key contacts and contributors to this report are listed in appendix V.

Event: Chairman of the Joint Chiefs of Staff issued Joint Vision 2010.
Relevance: This vision of future war fighting provides a conceptual template for the Department of Defense’s (DOD) transformation efforts across all elements of the armed forces.

Event: DOD’s Report of the Quadrennial Defense Review issued.
Relevance: Report discussed the importance of preparing for future national security challenges. It concluded that DOD needed to institutionalize innovative investigations, such as war-fighting experiments, to ensure future concepts and capabilities are successfully integrated into the forces in a timely manner.

Event: Secretary of Defense designated Commander in Chief, U.S. Joint Forces Command, as executive agent for joint experimentation.
Relevance: The Secretary of Defense tasked the Joint Forces Command to design and conduct joint war-fighting experimentation to explore, demonstrate, and evaluate joint war-fighting concepts and capabilities.

Event: Joint Advanced Warfighting Program established.
Relevance: DOD established the program at the Institute for Defense Analyses to serve as a catalyst for achieving the objectives of Joint Vision 2010 (and later Joint Vision 2020). To that end, the program is to develop and explore breakthrough operational concepts and capabilities that support DOD’s transformation goals.

Event: Joint concept development and experimentation program initiated.
Relevance: Joint Forces Command assumed responsibility as the executive agent for joint experimentation.

Event: Joint Advanced Warfighting Program conducted the first joint experiment for Joint Forces Command.
Relevance: The experiment—J9901—investigated approaches for attacking critical mobile targets and allowed the Joint Forces Command to begin its learning process on how to conduct joint experimentation.

Event: Report of the Defense Science Board Task Force on DOD Warfighting Transformation issued.
Relevance: Report proposed several recommendations to promote military transformation.

Event: Chairman of the Joint Chiefs of Staff issued Joint Vision 2020.
Relevance: Updated vision statement described the joint war-fighting capabilities required through 2020.

Event: Millennium Challenge 2000 major field experiment conducted.
Relevance: The first major field experiment coordinated by the Joint Forces Command among the services and other stakeholders.

Event: Chairman of the Joint Chiefs of Staff issued updated Joint Vision Implementation Master Plan.
Relevance: Guidance described the process for generation, coordination, approval, and implementation of recommendations emerging from joint experimentation and defined the roles and responsibilities of DOD stakeholders.

Event: Transformation Study Report: Transforming Military Operational Capabilities issued.
Relevance: Study conducted for the Secretary of Defense to identify capabilities needed by U.S. forces to meet the twenty-first century security environment. Made several recommendations directed at improving joint experimentation.

Event: Joint Forces Command conducted Unified Vision 2001 experiment.
Relevance: A major joint experiment—largely modeling and simulation—conducted to refine and explore several war-fighting concepts, such as “rapid decisive” operations.

Event: Secretary of Defense’s planning guidance issued.
Relevance: Required studies by defense agencies and the Joint Staff to develop transformation road maps and a standing-joint-force headquarters prototype.

Event: DOD’s Quadrennial Defense Review Report issued.
Relevance: The report established priorities and identified major goals for transforming the Armed Forces to meet future challenges. It called for new operational concepts, advanced technological capabilities, and an increased emphasis on joint organizations, experimentation, and training.

Event: Chairman of the Joint Chiefs of Staff issued joint experimentation guidance.
Relevance: The guidance directed the Joint Forces Command to focus its near-term experimentation on developing a standing joint force headquarters prototype.

Event: Office of Force Transformation established.
Relevance: Office assists the Secretary of Defense in identifying strategy and policy, and developing guidance for transformation.

Event: Unified Command Plan 2002 issued.
Relevance: Plan reduced the number of missions assigned to the Joint Forces Command to allow the Command to devote more attention to its remaining missions such as joint experimentation.

Event: Secretary of Defense’s planning guidance issued.
Relevance: The guidance directed the Joint Forces Command to develop new joint concepts that focus on the six transformation goals set forth in the 2001 Quadrennial Defense Review Report.

Event: Joint Forces Command conducted Millennium Challenge 2002.
Relevance: Second major field experiment conducted to culminate a series of experiments to assess “how” to do rapid decisive operations in this decade.

The Joint Forces Command uses various types of assessment activities to develop, refine, and validate joint concepts and associated capabilities. As shown in figure 3, the Command begins to move through the five joint concept development phases by conducting workshops, seminars, and war games to develop information and identify possible areas to explore in developing new concepts and associated capabilities and then uses simulated or live experiment events to confirm, refute, or modify them. These activities vary in scale and frequency, with each successive activity becoming larger and more complex. They can involve a small group of retired flag officers and academics, up to 100 planners, operators, and technology experts, or several thousand in the field. Near the end of the process, the Command will conduct a large-scale simulation experiment (such as Unified Vision 2001), followed by a major field experiment (such as Millennium Challenge 2002). The process continuously repeats itself to identify additional new concepts and capabilities. Table 3 provides additional information about the characteristics, scale, and frequency of these and other associated activities and experiments. 
Office of the Secretary of Defense, Program Analysis and Evaluation
Office of the Under Secretary of Defense for Policy
Office of the Under Secretary of Defense for Acquisition, Technology,
Joint Advanced Warfighting Program
Defense Advanced Research Projects Agency
Office of Force Transformation
Operational Plans and Interoperability Directorate
Joint Vision and Transformation Division
Command, Control, Communications, and Computers Directorate
Force Structure, Resources, and Assessment Directorate
Directorate of Training
Directorate of Integration
Directorate for Strategy, Concepts, and Doctrine
Office of the Deputy Chief of Naval Operations for Warfare
Marine Corps Combat Development Command
Department of the Air Force
Booz Allen Hamilton
The Carlyle Group
Center for Strategic and Budgetary Assessments
Hicks & Associates, Inc.

In addition to the individuals named above, Carol R. Schuster, Mark J. Wielgoszynski, John R. Beauchamp, Kimberley A. Ebner, Lauren S. Johnson, and Stefano Petrucci made key contributions to this report.
The Department of Defense (DOD) considers the transformation of the U.S. military a strategic imperative to meet the security challenges of the new century. In October 1998, DOD established a joint concept development and experimentation program to provide the engine of change for this transformation. In the nearly 4 years since becoming the executive agent for joint concept development and experimentation, the Joint Forces Command has increased the participation of key DOD stakeholders--the military services, the combatant commands, and other organizations and agencies--in its experimentation activities. The Command has also expanded the participation of federal agencies and departments, academia, the private sector, and some foreign allies. No recommendations flowing from joint experimentation have been approved or implemented. Although the Joint Forces Command issued three recommendations nearly a year ago, they were not approved by the Joint Requirements Oversight Council because of confusion among the Joint Staff and the Joint Forces Command about a proposed change in guidance that required additional data be included when submitting these recommendations. Although DOD has been providing more specific and clearer guidance for joint experimentation, DOD and the Joint Forces Command are missing some key management elements that are generally considered necessary for successful program management.
To identify whether factors limit VA’s ability to recover more of its billed charges, we studied the recovery programs at VA’s Martinsburg, West Virginia, and Washington, D.C., medical centers. We selected the two medical centers in consultation with officials working in VA’s Medical Care Cost Recovery (MCCR) program. The Martinsburg medical center was selected because it was (1) viewed by VA officials as operating an efficient recovery program and (2) 1 of 10 medical centers participating in MCCR’s reengineering pilot project. The recovery program at the Washington, D.C., medical center was chosen for contrast. Although Martinsburg’s medical center is much smaller than Washington’s, it was recovering roughly the same amount from private health insurance. At the two medical centers, we examined a random sample of the bills VA had submitted to insurers during May 1994. We focused our statistical analyses on bills for which VA had completed recovery actions (closed bills). For each bill, we examined the insurers’ explanation of benefits, VA’s patient insurance information, and VA’s financial tracking reports to determine (1) why insurers denied or partially paid VA bills and (2) what actions VA had taken to determine whether additional funds could be recovered. We discussed bills denied or partially paid for administrative or other nonclinical reasons with VA staff at the medical centers to find out which factors had affected recoveries. To the extent possible, we reviewed, with the assistance of a registered nurse, the discharge summaries for those inpatient claims denied for clinical reasons to determine whether insurers’ denials were appropriate. We used appropriateness-of-care criteria developed by InterQual, a utilization review firm, in our assessments. We confirmed the prevalence of these findings with MCCR staff at other facilities and in VA’s headquarters. 
To evaluate VA’s ability to achieve its revenue targets, we (1) reviewed VA’s fiscal year 1998 budget submission, the MCCR program’s 1996 business plan, and other documents detailing VA’s health care restructuring plans, such as VA’s Prescription for Change, new budget allocation system, and its use of performance measures; (2) interviewed MCCR and health care staff from VA facilities and VA’s headquarters; and (3) interviewed staff and reviewed documents from VA’s General Counsel and Regional Counsels. To assess the application of discretionary veterans’ copayments to their third-party liability, we reviewed VA’s General Counsel decisions and discussed the implementation of these decisions with VA’s General Counsel as well as MCCR staff from central office and VA facilities. We did our work between May 1995 and July 1997 in accordance with generally accepted government auditing standards. VA collects money from third-party insurers and directly from some veterans to offset the cost of providing health care services for veterans’ nonservice-connected conditions. Until recently, these moneys, other than amounts needed to operate the recovery program, have been returned to the U.S. Treasury. In fiscal year 1996, the MCCR program retained almost $119 million to offset the costs of operating the recovery program and deposited $455 million in the Treasury. With passage of the Balanced Budget Act of 1997 (P.L. 105-33), VA will retain amounts collected after June 30, 1997, to supplement its annual appropriations and finance the cost of serving additional veterans. While the law prevents insurers from arbitrarily denying payment to VA for services that would be covered in private sector facilities, VA, like other health care providers, must generally comply with the terms and conditions set out in veterans’ health insurance policies. 
Insurance policies typically contain a number of provisions that limit the amount of billed charges that the insurer is responsible for paying. In addition to requiring that care be medically necessary and provided in an appropriate setting, policies may require the policyholder to pay a specified amount (such as $500), referred to as a deductible or out-of-pocket payment, for covered health care services before the insurance begins paying; require policyholders to pay a certain percentage of covered charges, known as a copayment or coinsurance; specify what services are covered and any limits on the days of coverage or frequency of services; require, as a condition for payment, that providers or policyholders obtain prior approval from the insurer before admission to a hospital; preclude or reduce payment (other than for emergency care) to providers that are not members of HMOs, preferred provider organizations (PPO), or point-of-service (POS) plans; and “wrap around” other insurance coverage and pay only that portion of approved charges not paid by the primary insurance, such as Medicare. Unlike most providers, however, VA does not bill health plans for the individual tests and procedures it provides to their policyholders. Instead, VA prepares bills based on its average costs for providing a day of hospital care and an outpatient visit. In fiscal year 1997, VA billed insurers $1,046 per day for inpatient care provided in medical bed sections, $1,923 per day for care provided in surgical bed sections, $194 for each outpatient visit, and $20 for each prescription refill. In other words, the amount VA bills insurers for a 5-day surgical stay is the same regardless of the type of surgery performed. Similarly, it bills the same amount for an outpatient visit regardless of the types or number of services provided during that visit. 
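VA's flat-rate billing described above lends itself to a simple sketch. The per diem and per visit rates are those quoted for fiscal year 1997; the dictionary and function names are illustrative, not part of any actual VA system.

```python
# Sketch of VA's fiscal year 1997 flat-rate billing, using the rates quoted
# in the text. Names here are illustrative, not VA's actual system.
FY1997_RATES = {
    "medical_inpatient_day": 1046,   # per day, medical bed section
    "surgical_inpatient_day": 1923,  # per day, surgical bed section
    "outpatient_visit": 194,         # per visit, regardless of services given
    "prescription_refill": 20,       # per refill, regardless of drug
}

def bill_amount(service: str, units: int = 1) -> int:
    """Billed charge: the flat rate times the number of days/visits/refills.

    The same amount is billed whatever procedures were actually performed;
    a 5-day surgical stay is billed identically for any type of surgery.
    """
    return FY1997_RATES[service] * units

# A 5-day surgical stay is billed at 5 x $1,923 = $9,615, whatever the surgery.
print(bill_amount("surgical_inpatient_day", 5))  # 9615
```

Note that the same function reproduces the $3,138 bill for a 3-day medical stay cited later in the report (3 x $1,046).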
In fiscal year 1995, VA recovered $522.8 million from third parties, including private health insurers, workers’ compensation programs, and no-fault insurance. Recoveries declined to $495.2 million in fiscal year 1996 and to $213.4 million during the first two quarters of fiscal year 1997. VA’s 1998 budget proposal requested medical care funding of $17.6 billion, consisting of an appropriation of almost $17 billion and a legislative proposal to retain insurance payments, veterans’ copayments, and other third-party reimbursements estimated to total about $600 million in fiscal year 1998. VA proposed to freeze its appropriation at about $17 billion over the next 5 years and rely instead on increased efficiency and increases in third-party reimbursements to offset the effects of inflation. VA estimated that the third-party recovery authority would enable it to generate $1.7 billion in additional revenues in 2002, including $826 million from private health insurance. The Balanced Budget Act of 1997 authorized VA to retain recoveries and collections after June 30, 1997. The act provides that if the amounts recovered in fiscal years 1998, 1999, or 2000 fall short of projections by at least $25 million, VA will receive an additional appropriation. Most of the bills in our sample prepared by the Martinsburg and Washington, D.C., medical centers and denied or reduced by private health insurers were appropriately closed by MCCR staff, with little additional recovery possible. 
Additional amounts were generally not recoverable because insurers deemed VA’s inpatient care to be medically inappropriate; VA billed Medicare supplemental insurance for the full cost of VA services, even though such insurance generally pays only the Medicare inpatient deductible and about 20 percent of the costs of outpatient care; VA billed HMOs and other managed care plans for nonemergency care when the VA facility was not a participating provider; and insurers reduced payments to VA on the basis of the insurance plans’ cost-sharing requirements. In addition, VA’s pursuit of additional recoveries is hindered because (1) VA has limited knowledge of veterans’ insurance policies and terms of coverage and (2) many insurers continue to use exclusionary clauses denying payment for care given in VA facilities, although such clauses have no legal effect. Nearly 30 percent of the unpaid charges for inpatient care at the Martinsburg and Washington, D.C., medical centers resulted from insurers’ determinations that the care was medically inappropriate. Medically inappropriate care includes care deemed to be medically unnecessary, excessive lengths of stay, and care that should have been provided on an outpatient basis. For example, insurers denied or reduced claims at the Martinsburg medical center for cataract surgeries that were unnecessarily performed on an inpatient basis. Similarly, insurers denied or reduced claims when the medical center allowed veterans who traveled considerable distances for care to be admitted early, to be discharged later, or to receive inpatient treatment instead of outpatient services. When such claims were reduced or denied, the Martinsburg medical center often negotiated payment for ancillary and professional services associated with inpatient care that should have been provided on an outpatient basis. 
In addition, the medical center was sometimes able to obtain payment for clinic visits and ancillary services provided to veterans in the medical center’s nursing home and domiciliary beds. Such payments, however, accounted for significantly less than half of billed charges. Unlike Martinsburg, the Washington, D.C., medical center generally did not vigorously pursue claims denied as medically inappropriate and seldom pursued payment of ancillary or professional services. Since the period covered by our claims review, however, the Washington medical center has brought in a new manager for the recovery program and has reorganized its functions to strengthen the billing, collection, and appeals processes. Billing and collections staff are no longer in separate units but are paired together in teams to expedite recovery actions. An August 1996 review of the Washington, D.C., program by MCCR staff from headquarters and other VA facilities, however, identified the need for further improvements. For example, the reviewers suggested that the facility (1) develop a mechanism to consistently track patients’ inpatient and outpatient treatments, (2) identify a physician adviser and establish a multidisciplinary utilization review committee to assist with appeals, and (3) develop procedures to identify and bill for professional fees where appropriate. VA bills Medicare supplemental insurers for the full cost of VA services, even though such policies provide coverage that is secondary to Medicare. And because it does not have the authority to bill Medicare, VA does not receive a determination of benefits—either a “remittance advice” or an “explanation of benefits.” Medicare supplemental insurers typically use such determinations to calculate their liability. In the absence of a Medicare determination of benefits, supplemental insurers use different methods to determine their liability as secondary payers. 
For inpatient care, these insurers typically pay the Medicare inpatient deductible ($760 in 1997) or the inpatient deductible plus 20 percent of the professional services component of the VA per diem rate. As a result, VA recovers only a small percentage of its billed charges. For example, VA can expect to recover only $760 to $835 for a 3-day VA hospital stay for which it billed $3,138. Similarly, for an outpatient visit, Medicare supplemental insurers typically pay no more than 20 percent of the billed charges. The largest Medigap insurer, the American Association of Retired Persons (AARP), however, no longer pays VA 20 percent of billed charges for most of its veteran policyholders. In September 1995, AARP began paying VA 20 percent of what it estimates Medicare would have paid for the service in a physician’s office for veterans in the mandatory care category (primarily those with service-connected disabilities or low incomes). AARP continues to pay 20 percent of VA’s billed charges for veterans in the discretionary care category (those with no service-connected disabilities and incomes above the “means-test” level, which is about $21,000 for a single veteran) who are subject to copayments to cover their out-of-pocket costs. The effect of this change on VA recoveries is unclear. For veterans in the mandatory care category who are not subject to VA’s copayments, recoveries are likely to decrease for outpatient bills involving routine office visits. For example, because Medicare pays about $54 for a routine office visit, AARP would pay VA less than $11 for such care under the new policy rather than the almost $39 (20 percent of VA’s $194 outpatient rate) it would have paid under the old policy. On the other hand, to the extent that VA provides these veterans high-cost services or procedures, such as cataract surgery, as outpatient services, AARP’s payments to VA should increase under the new policy. (See fig. 1.) 
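The Medigap arithmetic in this passage can be checked directly. All dollar figures below come from the text (the $1,046 medical per diem, the 1997 Medicare inpatient deductible of $760, VA's $194 outpatient rate, and Medicare's roughly $54 office-visit payment); the script is only an arithmetic check, not VA's billing logic.

```python
# Checking the Medigap payment figures quoted in the text.

# Inpatient: a 3-day stay in a medical bed section, billed at $1,046/day.
billed = 3 * 1046                     # $3,138, as cited in the report
# A Medigap insurer typically pays only the $760 Medicare inpatient
# deductible (1997), so VA recovers roughly a quarter of billed charges.
medigap_floor = 760
print(round(medigap_floor / billed * 100))   # ~24 percent of billed charges

# Outpatient: AARP's old policy paid 20% of VA's $194 rate; its new policy
# pays 20% of what Medicare would pay for an office visit (about $54).
old_payment = 0.20 * 194   # ~$38.80 -- "almost $39"
new_payment = 0.20 * 54    # ~$10.80 -- "less than $11"
print(old_payment, new_payment)
```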
VA’s right to collect from Medigap insurers was upheld in earlier court decisions. However, some insurers still contend that they are not liable for the amounts sought by VA until they receive an adjudicated claim indicating what portion of VA’s bill is covered by Medicare. The insurers claim that calculating the amount owed to VA is unduly burdensome because VA does not give them a Medicare remittance advice or explanation of benefits explaining Medicare-approved charges. When the matter is resolved, VA expects to recover on the backlog of claims submitted to insurers involved in the case. Like other Medicare supplemental policies, however, these plans would pay secondary rather than primary benefits, and therefore most of VA’s billed charges would not be collectable. Because the Martinsburg medical center was billing Medicare supplemental insurance more often than the Washington medical center, a larger percentage of unpaid charges resulted from denials and reduced payments by such insurers. About a third of Martinsburg’s unpaid charges for inpatient care are attributable to billing Medicare supplemental insurers. By contrast, only about 13 percent of the Washington, D.C., medical center’s unpaid inpatient charges are attributable to billing Medicare supplemental insurance. Similarly, about 40 percent of the unpaid outpatient charges for the Martinsburg medical center resulted from billing Medicare supplemental insurance for the full cost of outpatient care. Because the Washington, D.C., medical center did not bill Medicare supplemental insurance as extensively, only 11 percent of its unpaid charges resulted from billing the full cost of outpatient care. HMOs and certain other managed care plans generally will not pay a nonparticipating provider for services rendered to their policyholders, except for emergency care. Neither the Washington, D.C., nor the Martinsburg medical center has been able to negotiate provider agreements with any HMOs (see pp. 27-31). 
About 19 percent of the claims denied by insurers for inpatient care provided by the Washington, D.C., medical center, representing over 20 percent of the center’s unpaid inpatient charges, were billed to HMOs and other managed care plans that limit payments for nonemergency care to participating providers. About 19 percent of the bills insurers denied for outpatient care provided by the Washington, D.C., medical center, representing about 35 percent of unpaid charges, were billed to HMOs and other managed care plans that limit payments for nonemergency care to participating providers. Because VA could not provide support that the care was for a medical emergency, the medical center had no basis for pursuing collection. Denials by HMOs and certain other managed care plans did not account for as much of the unpaid care at the Martinsburg medical center because that facility generally did not bill managed care plans unless it was fairly certain that the plan would pay for VA care. In addition, HMOs appear to have a significantly lower market penetration in the Martinsburg area than they do in the Washington, D.C., area. About 4 percent of Martinsburg’s inpatient bills (representing about 3 percent of the unpaid charges) were billed to managed care plans. About 3 percent of outpatient bills (representing about 6 percent of unpaid charges) were billed to managed care plans. For both the Washington, D.C., and Martinsburg medical centers, insurers often reduced their payments to VA on the basis of the policies’ cost-sharing provisions. Insurance policies typically require policyholders to pay a certain amount for health care services out of pocket before coverage begins. Such deductibles can either be a yearly amount or apply to a specific episode of care such as a hospital stay. In addition, policies frequently require policyholders to pay a certain percentage of charges (a copayment or coinsurance). 
These provisions limit the insurers’ liability to that portion of covered charges that is not the responsibility of the policyholder. Insurers reduced payments to the Washington, D.C., medical center on the basis of cost-sharing provisions for over half of the inpatient bills and over 70 percent of the outpatient bills we examined. Similarly, insurers reduced payments for about 31 percent of the inpatient bills and about 56 percent of the outpatient bills we examined at the Martinsburg medical center for the same reason. Reductions because of cost-sharing provisions had a greater impact on recoveries from outpatient bills, accounting for about a third of the unpaid outpatient charges at Martinsburg medical center and about 43 percent of unpaid charges at Washington. By contrast, reductions on the basis of cost-sharing requirements accounted for less than 10 percent of unpaid inpatient charges at each facility. Since our study period, the percentage of outpatient charges that are unpaid because of cost-sharing requirements has probably increased because of the trend in fee-for-service health plans toward higher copayments. For example, the Blue Cross and Blue Shield standard option plan under the Federal Employees Health Benefits Program (FEHBP) increased the copayment on a $194 outpatient bill from approximately $49 to $100 between 1994 and 1996. VA’s efforts to pursue recoveries from private health insurers are hindered by VA’s limited access to the terms and conditions of veterans’ insurance policies. Neither the insurer nor the veteran is required to supply a copy of the health benefit plan to VA. As a result, VA generally relies on telephone calls to insurers to obtain information on the specific provisions of veterans’ policies. MCCR field staff indicated that insurers frequently refuse to give them copies of veterans’ policies, benefit summaries, or booklets when requested, citing privacy concerns. 
Although VA’s General Counsel indicates that insurers can be compelled to provide contracts and policy information during litigation, such extreme actions have seldom been used. However, VA’s General Counsel has been able to obtain more than 300 policies from health insurers that are seeking refunds for what they claim are overpayments. MCCR staff at the Martinsburg and Washington, D.C., medical centers generally billed insurers to identify what services were covered, what policy restrictions existed, and where the veteran stood in relation to annual deductibles or lifetime limits on benefits. Depending on the information contained in the insurers’ remittance advice, MCCR staff made follow-up telephone calls to see why payments were denied or reduced. For example, after repeated telephone conversations with an employer-sponsored health benefit plan, MCCR staff at the Martinsburg medical center discovered that outpatient bills that had been denied for apparent coverage limitations were in fact payable if billed on a different form. On the basis of that information, the facility was able to obtain additional recoveries by resubmitting previously denied outpatient claims for other veterans. VA’s other potential source of policy information—the veteran policyholder—has little incentive to give VA detailed information about insurance coverage. If a private sector provider has trouble obtaining payment from an insurer, the policyholder is generally liable for any unpaid charges and thus has a financial incentive to see that insurance pays the maximum benefit in accordance with the plan provisions. Because most veteran policyholders obtaining services from VA facilities do not have any financial liability for their care, they have little incentive to intercede on VA’s behalf in obtaining detailed policy information. 
On the other hand, veterans in the discretionary care category have some financial incentive to help VA obtain information about their insurance coverage because a portion of insurers’ payments is used to reduce their copayments. For example, the Washington, D.C., medical center was able to obtain payment from one employer-sponsored managed care plan after a veteran in the discretionary care category gave VA information on policy provisions indicating that his insurance would pay nonparticipating providers. Before VA’s recovery authority was established, most health insurance plans and contracts contained exclusionary clauses indicating that the plans would not pay for care (1) provided in VA hospitals or (2) provided at no cost to the policyholder. Such exclusionary clauses were eliminated as a legal basis for denying payment of VA claims as part of the Consolidated Omnibus Budget Reconciliation Act of 1985 (P.L. 99-272). Ten years later, however, exclusionary clauses that prohibit payment to federal facilities appear to be fairly common. Although such clauses no longer have any legal effect on VA recoveries, they can delay recoveries and increase the cost of recovery actions. Follow-up actions by VA staff, including VA’s regional and general counsels, may be necessary to challenge the clauses and enable VA to recover from the health plans. The regulation of insurance is primarily a state function. Insurance policies, but not ERISA plans, must generally be reviewed and approved by a state insurance commissioner before they can be offered for sale in the state. For example, the Maryland Insurance Commission reviews all policies approved for sale in Maryland. An official from the Maryland Insurance Commission confirmed that the Commission continues to approve policies containing clauses excluding payment for services provided in VA facilities. 
Commission staff told us that such clauses are common in health insurance policies sold in the state, and they expressed a willingness to work with VA officials to help eliminate the clauses. VA MCCR officials indicated that they rely on federal enforcement to require insurers to pay and have not attempted to work with state insurance commissions to remove exclusionary clauses. Officials from one of the largest health plans in Maryland—Blue Cross—confirmed that their plans still contain exclusionary clauses. They told us that the language in the exclusionary clauses will be revised in future policies to make it clear that the insurer will pay for care VA provides for nonservice-related conditions. Many factors help explain the decline in VA recoveries from private health insurance since fiscal year 1995 and make it likely that, without significant changes in the recovery program and/or an increase in the number of VA users with fee-for-service insurance coverage, declines will continue over the next 5 years. These factors include the decline and aging of the veteran population, increased enrollment in HMOs and other managed care plans, changes in how insurers process VA claims, shifts in care from inpatient to outpatient settings, and difficulty identifying care provided to veterans with service-connected disabilities for treatment of nonservice-connected conditions. The veteran population is projected to decline from 26.2 million to 23.6 million between 1995 and 2002. This means that VA would have to increase the percentage of veterans using VA services just to maintain current workload. In 1995, VA facilities provided services to about 2.6 million veterans, or roughly 1 out of 10 veterans. With fewer veterans, VA will need to attract roughly 1 of every 9 veterans in 2002 to maintain its current workload. 
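These workload ratios, along with the one-in-eight figure for VA's 20-percent growth goal discussed next, can be verified with a quick calculation; the population figures, in millions, are those quoted in the text, and this is purely an arithmetic check.

```python
# Verifying the veteran-workload ratios quoted in the text
# (all population figures are in millions).
users_1995, veterans_1995, veterans_2002 = 2.6, 26.2, 23.6

print(veterans_1995 / users_1995)       # ~10.1 -> about 1 in 10 used VA in 1995
print(veterans_2002 / users_1995)       # ~9.1  -> about 1 in 9 just to hold workload

goal_users_2002 = users_1995 * 1.20     # 20-percent growth goal -> 3.12 million users
print(veterans_2002 / goal_users_2002)  # ~7.6  -> more than 1 in 8
```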
To attain its goal of increasing by 20 percent the number of veterans using VA services by 2002, VA will have to attract more than one out of every eight veterans in 2002. (See fig. 2.) Just as the declining numbers of veterans will make it more difficult to maintain recoveries, so too will the aging of the veteran population. As an increasing proportion of veterans become eligible for Medicare, potential recoveries decrease. Between 1995 and 2002, the percentage of veterans aged 65 and older is expected to increase from 34 to 39 percent. This is important because at age 65, most veterans’ private health insurance becomes secondary to Medicare. Currently, about 60 percent of veterans who have health insurance and who are treated by VA are over 65 years of age. Typically, Medicare supplemental plans cover only the $760 deductible for the first 60 days of inpatient care and 20 percent of the outpatient charge. An increase in the percentage of insured veterans covered only by Medicare supplemental policies is thus likely to decrease future recoveries. Continued increases in enrollment in HMOs, PPOs, and POS plans are likely to reduce future VA recoveries from private health insurance. VA has had limited success in negotiating to become a participating provider under HMOs (see pp. 27-31) and therefore is generally unable to recover any of its costs of providing routine care to HMO members. Between 1982 and 1994, enrollment in HMOs increased from 9 million to over 50 million. Similarly, because VA is not a preferred provider under any PPOs, its potential recoveries are reduced. Although VA may be able to recover from PPOs by becoming a participating rather than a preferred provider, it would receive lower reimbursement. Finally, POS plans allow their policyholders to obtain care from any willing provider but typically require their members to pay a larger portion of the cost of services they obtain from providers outside of the plan, such as VA facilities. 
In other words, POS plans pay less of the billed charges when care is provided by an out-of-plan provider, expecting the member to pay the remainder. Nearly three-fourths of workers with employer-provided health insurance are now covered under a managed care plan, most by an HMO or PPO. In 1993, 49 percent of American workers with health insurance were covered by a conventional fee-for-service plan, but by 1995 that percentage had dropped to 27. By contrast, during the same time period, the percentage of workers covered under HMOs or PPOs increased from 42 percent to 53 percent; workers covered under POS plans increased from 9 to 20 percent. Even recoveries from Medicare supplemental policies may decrease because of the increased enrollment of Medicare beneficiaries in risk-contract HMOs. Between 1987 and 1996, enrollment in Medicare risk-contract HMOs increased from 2.6 percent to 10 percent of total Medicare beneficiaries, and by 2002 enrollment is projected to be 22.9 percent of total beneficiaries. Even now, physicians from VA medical centers in California, Florida, New Mexico, and other states have noted an increase in the number of elderly veteran patients who seek care at VA facilities while enrolled in HMOs. Two studies at individual VA facilities found that HMO enrollment ranged from 10 percent among veterans of all ages to about 25 percent among elderly veterans. Data from the West Los Angeles medical center suggest that its elderly veteran users are opting to enroll in Medicare HMOs rather than purchase Medigap insurance. For all VA facilities, approximately 1.5 percent of VA’s inpatient discharges and almost 2.5 percent of VA’s outpatient visits in fiscal year 1995 were provided to veterans enrolled in Medicare risk contracts. The growth in Medicare HMO enrollment is likely to affect VA recoveries for two primary reasons. First, VA is generally unable to recover any of its costs for providing care to veterans enrolled in Medicare HMOs. 
Second, increased enrollment in HMOs is accompanied by corresponding decreases in the number of beneficiaries covered under Medicare supplemental insurance, from which VA can attempt to recover. FEHBP enrollment is also shifting away from fee-for-service insurance toward managed care arrangements. In 1990, 26 percent of federal employees and annuitants chose to enroll in HMOs. By 1997, 29 percent of the FEHBP enrollees selected HMOs. In 1990, four fee-for-service plans offered significant preferred provider options within the structures of their plans. Under the plans, enrollees retain the freedom to choose providers but have lower out-of-pocket payments if they use preferred providers. By 1997, all fee-for-service plans within FEHBP identified themselves as “managed fee-for-service plans.” Most plans offered enrollees services through PPOs, and several offered significant POS products. The number of enrollees who use only the preferred providers in these hybrid plans is not measured. As veterans continue to shift from conventional fee-for-service health plans to HMOs, PPOs, and POS plans, VA recoveries will likely continue to decline unless VA facilities become preferred or participating providers. VA efforts to this end have, however, generally been unsuccessful, as discussed in the next section. Changes in how insurers process VA claims could result in refunds of over $600 million in overpayments and reduce VA’s future recoveries by over 20 percent. Specifically, some insurers claim they overpaid VA under Medicare carve-out policies and are seeking refunds; some are increasingly reluctant to pay any portion of billed charges when the care was unnecessarily provided in a hospital; and some increasingly use pharmacy benefit managers (PBMs) to administer prescription benefits. A number of insurers maintain that they have overpaid VA claims under Medicare carve-out policies. 
Such policies differ from Medigap policies in that they offer the same health care benefits to both active employees and retirees but contain provisions making their coverage secondary to Medicare as the retirees become eligible for Medicare. Some insurers offering such carve-out policies have paid VA for services provided to their Medicare-eligible policyholders as the primary, rather than secondary, insurer. As a result, they are seeking refunds of millions of dollars in prior payments and are reducing current payments. VA’s position is that it is entitled to recover from a health plan to the same extent that the insurer would have been liable for the care if it was provided in the private sector. If VA determines, following a review of an insurer’s policy provisions, that the insurer overpaid VA under the terms of its policy by paying primary when the insurer would have had a secondary liability in the private sector, VA will refund timely and well-grounded claims. On the basis of a review of fiscal year 1995 data on potential overpayments, MCCR staff estimate that about 40 percent of the paid claims for veterans aged 65 and older were paid at an amount greater than the Medicare deductible and coinsurance. The MCCR staff estimated that overpayments to all VA medical centers in fiscal year 1995 were $110 million (+/- $35 million). Over the 6-year period of liability, refunds could amount to as much as $600 million. Other issues related to carve-out policies could also affect future VA recoveries. VA’s General Counsel determined that refunds for overpayments made during the current year must be charged against that year’s recoveries but that refunds of overpayments from prior years should come out of the Treasury. This approach does, however, involve certain risks: Insurers could offset overpayments from prior years against payments for current-year bills. 
One plan in Indiana has begun offsetting overpayments, although other plans appear willing to wait for refunds to be paid out of the Treasury. Allowing VA to authorize refunds from the Treasury gives the agency little incentive to protect the government’s interests in determining the appropriateness of refund requests. Aggressively reviewing refund requests could adversely affect current-year recoveries because staff would be diverted from billing for current services to verifying refund requests. Like some private sector carve-out policies, FEHBP plans have been paying VA as primary insurance for Medicare-eligible federal retirees. Officials in the Office of Personnel Management (OPM) have indicated that, beginning with the 1998 benefit year, federal retirees’ health coverage will become secondary to Medicare for care provided in VA facilities to veterans covered by Medicare. OPM officials have indicated that existing policy will be modified to implement this change prospectively and that FEHBP plans will not seek refunds from VA. Since VA included FEHBP payments for Medicare-eligible veterans in its estimate of amounts to be refunded, that estimate is overstated. VA could not indicate the extent to which payment amounts from FEHBP plans were included in its refund estimate. This benefit change will cause VA’s future recoveries from FEHBP plans to decline. VA’s ability to obtain partial payment for care unnecessarily provided in an inpatient hospital is declining. As discussed earlier, the Martinsburg medical center and, to an increasing extent, the Washington, D.C., medical center, have been successful in obtaining partial payment from insurers for inpatient care that should have been provided in an outpatient clinic. At both medical centers, however, several major insurers have changed their policies and will no longer make such partial payments. 
VA’s ability to recover for prescription refills may be declining as plans’ benefit designs change and use of pharmacy benefit managers (PBMs) increases. PBMs are companies that administer the prescription drug coverage of health insurance plans on behalf of plan sponsors, such as FEHBP plans, insurance companies, self-insured employers, and HMOs. Many PBMs offer a range of services to plan sponsors, such as processing prescription claims, operating mail order pharmacies, and developing networks of retail pharmacies to serve plan enrollees. The PBMs’ mail order and retail services provide enrollees prescription drugs at discounted prices. To take advantage of these discounts, the plans offer enrollees financial incentives to fill their prescriptions only through the PBMs’ mail order programs or participating network retail pharmacies. In 1989, PBMs managed prescription drug benefits for about 60 million people. Four years later, they were managing prescription drugs for about 100 million people, almost 40 percent of the U.S. population. By the end of 1995, about 58 percent of FEHBP enrollees were covered by a PBM. Because no VA medical centers or mail service pharmacies are participating providers under PBMs, VA is generally unable to obtain payment for prescription refills when veterans’ insurance plans contract with PBMs. In such cases, VA facilities may submit their bills to the health insurers for processing as outpatient claims. Changes in insurers’ copayment requirements for outpatient services, however, could further reduce VA recoveries. For example, the FEHBP Blue Cross and Blue Shield high-option plan will not pay the first $50 of outpatient charges submitted by nonpreferred providers such as VA facilities. 
As a result, VA, which bills $20 for a prescription refill regardless of type or amount of the drug provided, can no longer recover any of its costs of providing prescription refills from the Blue Cross plan unless it combines three or more refills into a single bill. Even though VA is not a participating provider, one PBM has been authorized by 30 of its 2,000 plan sponsors to process and pay VA’s bills for prescription refills. However, we also identified instances in which PBMs paid the insured veteran directly rather than the VA medical center, since VA is not a participating provider in the network. In such cases, VA has difficulty in getting veterans to forward the payments. As VA shifts more of its care from inpatient to outpatient settings, insurance recoveries decrease and the cost of recovery increases. VA has set goals to significantly reduce the amount of care provided in inpatient settings. For example, it has set goals to reduce the hospital bed-days of care provided per 1,000 unique users by 20 percent from the 1996 level, enroll 80 percent of users in primary care, and shift a large portion of surgeries to ambulatory care settings. VA has also implemented a new system for allocating resources to its networks—the Veterans Equitable Resource Allocation system—that is intended to eliminate the financial incentives previous allocation methods gave facilities to unnecessarily admit patients to hospitals and to encourage facilities to provide care in the most cost-effective setting. To the extent facilities respond to such performance measures and financial incentives, reimbursable inpatient care will decline and reimbursable outpatient care will increase. Under its current rate schedules, VA must generate approximately 20 outpatient bills to produce recoveries equivalent to one inpatient bill. 
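The refill arithmetic described above can be sketched numerically. This is an illustrative calculation only, using the $20 flat refill charge and the $50 nonpreferred-provider disallowance stated in the text; the function name is ours.

```python
# Illustrative sketch: the Blue Cross high-option plan will not pay the
# first $50 of outpatient charges from nonpreferred providers, and VA
# bills a flat $20 per prescription refill regardless of the drug.

REFILL_CHARGE = 20   # VA's flat per-refill bill, in dollars
DISALLOWANCE = 50    # amount the plan will not pay for nonpreferred providers

def recoverable(refills_on_bill: int) -> int:
    """Amount the insurer would pay on a single bill combining the
    given number of refills; charges below the $50 threshold are
    absorbed entirely by VA."""
    billed = refills_on_bill * REFILL_CHARGE
    return max(0, billed - DISALLOWANCE)

# One or two refills per bill: nothing is recoverable.
# Three refills ($60 billed) clear the threshold by $10.
for n in (1, 2, 3):
    print(n, recoverable(n))
```

This is why, as the text notes, VA recovers nothing from the plan unless it combines three or more refills into a single bill.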
In addition, because MCCR staff have had to review medical records to generate outpatient bills, it frequently costs more to generate an outpatient bill for about $200 than it does to generate an inpatient bill for thousands of dollars. Almost 40 percent of the funds VA recovers from private health insurance is for services provided to veterans with service-connected conditions. VA loses opportunities for additional recoveries, however, because of the nature of decisions as to what services are billable. Identifying and billing the cost of care provided to veterans with service-connected disabilities for treatment of their nonservice-related conditions is administratively cumbersome and often subjective. Because data on veterans’ service-connected disabilities are not always precise, it is often difficult for MCCR staff to determine whether the care provided was related to the service-connected disability. For instance, knee surgery provided to a veteran with a service-connected disability was found to be billable when the MCCR staff discovered that his service-connected condition was associated with injuries to his other leg. In addition, the ability of MCCR staff to differentiate between treatments for service- and nonservice-connected conditions depends on the quality of the documentation in the medical record and the cooperation of the physician and other clinical personnel involved in providing the care. For example, billable medical services provided to a veteran who has a service-connected condition relating to hypertension can be difficult to identify. Depending upon the documentation, MCCR staff may view an EKG provided to this veteran as billable and view a routine physical (which requires that the veteran’s blood pressure be checked) as unbillable. Recent legislation minimized insurers’ ability to exclude coverage for preexisting conditions. The Health Insurance Portability and Accountability Act of 1996 (P.L. 
104-191) (HIPAA) prevents private health insurers from excluding payment for policyholders’ preexisting conditions for more than 12 months for conditions diagnosed or treated within 6 months before becoming insured. Although service-connected disabilities are preexisting conditions, the VA recovery program will not benefit from this change, because VA’s recovery authority does not allow it to bill health insurers for treatment related to a service-connected disability. Changing the statutory language in title 38 of the U.S. Code to authorize VA to recover its costs from private health insurance for treating service-connected conditions, consistent with the provisions of HIPAA, could, however, be viewed as shifting to the private sector the government’s obligation to provide care for veterans disabled during or as a result of their military service. On the other hand, authorizing such recoveries could generate significant additional revenues to be retained by VA for improving health care services for veterans. In addition, it could offset the incentives created by the Balanced Budget Act for VA facilities to target their services toward privately insured veterans with no service-connected disabilities. VA officials identified a number of legislative and management initiatives intended to address the previously mentioned factors and help it achieve its recovery goals. VA sought and was given legislative authority to (1) allow it to retain copayment and third-party recoveries and (2) extend the lapsing recovery provisions. 
Planned administrative actions include improving the process for identifying veterans’ insurance coverage; improving the process for submitting claims to Medicare supplemental insurers; developing new rate schedules that allow itemized billing; strengthening follow-up on claims denied or partially paid; negotiating provider agreements with HMOs and other managed care plans; strengthening efforts to ensure the medical appropriateness of VA care; and automating the capture of data on patient diagnoses, procedures, and providers. In addition, VA’s goal of increasing the number of veterans using the VA health care system by 20 percent should bring additional insured veterans into the system. It is not clear, however, whether these actions will allow VA to counteract the factors contributing to declining recoveries, let alone allow it to significantly increase future recoveries. Historically, facility directors have had little incentive to aggressively identify and pursue insurance recoveries because the funds, less the costs of operating the recovery program, were returned to the Treasury. Under the legislative proposal contained in its fiscal year 1998 budget submission, VA sought authority to keep all funds recovered from private health insurance. VA expects such authority to give VA facilities stronger incentives to identify veterans’ insurance coverage and aggressively pursue recoveries. Facilities will also have stronger incentives to market their services toward such revenue-generating veterans rather than toward nonrevenue-generating veterans, such as those without private health insurance. The Balanced Budget Act of 1997 authorized VA to retain recoveries from private health insurance and collections for veterans’ copayments after June 30, 1997.
The second problem VA sought to address through legislation was the lapsing of its authority to recover its costs for providing health care services to veterans with service-connected disabilities for conditions unrelated to their service-connected disabilities. The Balanced Budget Act of 1997 subsequently extended the authority until September 30, 2002. With this legislation, VA expects to significantly increase recoveries for services provided to veterans with service-connected disabilities. By the year 2002, VA estimates that recoveries from private health insurance for services provided to veterans with service-connected conditions will increase to $253 million. Allowing VA to retain all insurance recoveries creates a strong incentive for VA facilities to classify more of the care provided to veterans with service-connected disabilities as unrelated to treatment of those disabilities. VA has identified three approaches for improving identification of veterans with private health insurance and estimates that these initiatives could lead to increased recoveries totaling nearly $200 million per year. However, VA appears to have overestimated the additional recoveries that are likely to be generated by the initiatives. Moreover, a fourth option for improving the identification of insurance coverage would be to include such information in the enrollment database being created as part of the implementation of eligibility expansions. The first approach is to obtain, through a Medicare contractor, information on Medicare-eligible veterans who have private health insurance coverage that is primary. MCCR is particularly interested in identifying Medicare-eligible veterans whose private health insurance is primary. MCCR estimated that 5.9 percent of the over-65 population treated by VA could be expected to have primary health insurance other than Medicare. 
The MCCR program further estimated that if its assumption is correct, potential recoveries from such veterans may total about $97 million. VA appears to overestimate the potential for additional recoveries under this initiative. There are two basic groups of Medicare beneficiaries for whom private health insurance is primary. The first group is beneficiaries who are over 65 and still working or have a spouse who is still working. Those Medicare beneficiaries still working are likely to be healthier and thus likely to use fewer health care services, including services from VA. The second large group of Medicare beneficiaries likely to have other primary health insurance consists of individuals who retired from state and local governments before April 1, 1986, or from the federal government before January 1983. In addition, VA does not know how many such veterans have already been identified. As discussed earlier, 60 percent of the veterans VA currently identifies as having private health insurance are over age 65. Accordingly, even if the estimate of the percentage of Medicare-eligible veterans with private health insurance that is primary is correct, the estimate of potential recoveries is overstated because it does not back out current recoveries. On the other hand, VA may understate the potential for additional recoveries resulting from matching VA and Medicare records because such a match could also be used to identify Medicare beneficiaries under 65 years of age who have private health insurance that is primary. VA’s 1992 National Survey of Veterans estimates that 23 percent of VA users under the age of 65 are covered by Medicare, and about a third of these veterans have private health insurance. MCCR, however, does not currently plan to use these data to identify private health insurance coverage for such veterans under the age of 65. 
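The overstatement point above can be illustrated with a back-of-the-envelope sketch. The 60-percent figure for already-identified over-65 insurance coverage comes from the text; treating it as the share of the $97 million gross estimate already being recovered is our simplifying assumption, made purely for illustration.

```python
# Illustrative only: a gross estimate overstates *additional* recoveries
# unless amounts already being recovered are backed out. Integer dollars
# and an integer percentage keep the arithmetic exact.

def net_new_recoveries(gross_estimate: int, pct_already_identified: int) -> int:
    """Gross potential recoveries minus the share attributable to
    insured veterans VA has already identified (hypothetical split)."""
    return gross_estimate * (100 - pct_already_identified) // 100

# If 60 percent of the insured over-65 veterans behind the $97 million
# gross estimate were already being billed, the incremental potential
# would be well under half the headline figure.
print(net_new_recoveries(97_000_000, 60))
```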
The second approach MCCR tested for improving identification of insurance coverage was the use of a contractor to identify insurance coverage. In August 1995, VA provided Health Management Systems, Inc., the names and identifiers of 38,748 patients for whom VA facilities had no insurance information. The contractor, however, was able to identify only 649 matches with its insurance records. VA further determined that only 236, or 0.6 percent, of the records reviewed had billable insurance coverage. However, even with the limited identification of insurance coverage, the contract proved to be cost effective. The final approach was the institution of a preregistration process under which patients scheduled for outpatient visits within the next 10 days were contacted to remind them of their appointment and to request updated personal information, including employment and insurance data. On the basis of results of the pilot test, VA estimated that nationwide implementation of a preregistration process could result in an additional $100 million in recoveries annually from newly identified insured patients. It is not clear, however, whether the billable cases identified through the preregistration process would not otherwise have been identified. In other words, was preregistration a substitute for data-gathering efforts that would have taken place at the time of the visit? In addition, the preregistration process would also identify some insurance coverage that also would be identified under the first two methods, so the additional collections from the three approaches overlap and should not be fully added together. Implementation of VA’s health care enrollment process gives VA another option for capturing and updating veterans’ health insurance data. Public Law 104-262 expanded veterans’ eligibility for VA health care services and required VA to establish a system of enrollment. 
After September 30, 1998, veterans, other than those with service-connected disabilities rated at 50 percent or higher or seeking treatment for a service-connected disability, will not be able to obtain care from the VA health care system unless they have enrolled. Capturing insurance information during the enrollment process and including such data in the enrollment database could facilitate billing efforts. Information obtained at the time of enrollment and subsequent reenrollment could include the policy number and, upon request, a copy of the policy. By including other information, such as income and detailed information on adjudicated service-connected disabilities, MCCR staff could more readily identify billable insurance and prepare and process bills. The effectiveness of such a process would, however, continue to be dependent on (1) the willingness of veterans to give VA complete and accurate information on their insurance coverage, employers, and incomes and (2) the thoroughness of VA efforts to obtain and verify the information provided. VA data show that much of the information VA currently gathers is inaccurate—veterans fail to reveal their insurance coverage or underestimate their incomes in applying for VA health care. For example, the VA initiatives described indicate that VA is not currently obtaining complete and accurate information on insurance coverage. Similarly, only about 3 percent of veterans with no service-connected disabilities are identified through VA’s admission process as having incomes that place them in the discretionary care category. About 15 percent of veterans identified during the admission process as having incomes that place them in the mandatory care category, however, are subsequently identified through matches with income tax data as having incomes that might place them in the discretionary care category. 
Currently, VA’s only recourse when it determines that veterans knowingly provided false information in order to avoid copayments is to retroactively seek recovery of those copayments. A VA official told us that VA medical centers frequently waive such copayments. VA does not, however, maintain data on the extent to which such copayments are actually billed retroactively and recovered. The MCCR program is attempting to negotiate with HMOs and other managed care plans to enable VA facilities to become participating providers. HMOs, however, have little incentive to accept VA as a participating provider because, to the extent their enrollees obtain care from nonparticipating providers, HMOs’ costs are reduced and profits increased. The MCCR Business Plan proposes that VA consider a legislative proposal that would require HMOs to recognize VA as a preferred provider. VA currently has a contract with only one HMO—Dakota Care—in South Dakota. VA does not, however, view this contract as a model easily transferable to other HMOs because the VA medical center is in a small state with limited health care options. VA has been negotiating with at least two other HMOs—U.S. Healthcare in Philadelphia and HMO Illinois, a subsidiary of Blue Cross of Illinois—but, to date, discussions have not resulted in provider agreements. VA is having more success in negotiating provider agreements with POS plans. Unlike HMOs and PPOs that may be able to avoid all payments to VA (other than for emergency care) by excluding VA from their list of participating providers, POS plans have less to gain by refusing to accept VA as a participating provider. This is because a POS plan has an obligation to pay any willing provider for nonemergency care, including providers without a provider agreement. Since February 1995, VA’s Office of General Counsel has reviewed and approved 32 provider agreements between VA facilities and managed care plans submitted by regional counsels and medical centers.
Twelve of those agreements were signed; 5 agreements were closed with no further action, and 15 agreements remain open. Neither VA’s General Counsel nor the Veterans Health Administration maintained readily accessible information on the number and status of contracts submitted for headquarters review prior to February 1995. As a result, we could not determine how many provider agreements are in effect or whether they are for preferred or participating provider status. However, even in instances in which managed care plans are willing to accept VA as a participating provider, they may be unwilling to accept VA as a preferred provider. This distinction is particularly important to VA because being a participating provider essentially lowers VA recoveries. For example, the Washington, D.C., VA medical center has an agreement with Blue Cross of the National Capital Area as a participating, rather than preferred, provider. This means that veteran policyholders who use the Washington, D.C., medical center rather than a preferred provider are subject to higher copayments. These higher copayments essentially mean that the insurer pays less of the billed charges; thus VA recoveries are lower than they would be if VA was a preferred provider. Although the Washington, D.C., medical center is trying to become a preferred provider, Blue Cross of the National Capital Area has little incentive to allow VA to join its preferred provider network. For one thing, the VA medical center is surrounded by preferred providers in the Blue Cross network; as the following map indicates, 4 of the 12 hospitals that are preferred providers in Washington, D.C., are within a mile of the VA hospital. Moreover, because the Washington, D.C., medical center’s billing process differs from those of other hospitals, VA bills are perceived as more difficult and costly for the insurer to process. A number of other factors also affect VA’s ability to negotiate preferred provider status. 
First, to become a preferred provider under some plans, VA would be required to accept discounted payments. Historically, VA has not been allowed to negotiate discounted payments. Second, VA may be unwilling or unable to comply with the utilization management policies and standards insurers often impose as requirements for preferred provider status. The 1996 business plan for the MCCR program identified plans to address VA’s inability to recover from HMOs by seeking legislation requiring HMOs to include VA as a preferred provider. VA has taken no official position on the proposal contained in the MCCR business plan and has not estimated potential revenues from this initiative, but revenues could be substantial given the rapid increase in HMO enrollments. Such legislation, however, would essentially require HMOs to treat VA providers differently than they would other providers, raising questions of equity and fairness. A number of alternative approaches could be taken to ensure that government funds are not used to subsidize health plans unless the plan includes VA as a participating provider. For example, legislation could be enacted authorizing VA to (1) deny enrollment in the VA health care system to any veteran enrolled in a managed care plan unless that plan includes VA as a provider and (2) refuse to provide drugs to any veteran covered by PBMs unless the sponsoring health plan reimburses VA, or the plan’s PBM includes VA as a participating provider in the PBM’s pharmacy network. Similarly, in instances in which health plans send their payments to veterans rather than to VA and the veterans refuse to return the payments, VA could be authorized to deny veterans enrollment in the VA health care system or to recover the funds through an offset against other government benefits. Because they are directed at veterans rather than at health plans, such solutions would likely be viewed as reducing veterans’ benefits. 
VA actions aimed at providing care in the most cost-effective setting consistent with good patient care should increase the percentage of billed charges recovered, but would not necessarily increase overall recoveries. At the two facilities included in our review, however, preliminary results from the utilization reviews showed that most of the hospital admissions continue to be medically unnecessary. Nevertheless, further actions could be taken to strengthen utilization reviews or give physicians incentives to provide services in the most cost-effective setting. The Under Secretary for Health directed VA facilities to implement an inpatient utilization review program no later than September 30, 1996, to assess, monitor, and evaluate the appropriateness of hospital care provided. As part of that program, all scheduled acute admissions are to be assessed prospectively for the appropriateness of the level of care provided. Following admission, nurse-reviewers are to monitor the appropriateness of care through continuing stay reviews, that is, through periodic reviews of a patient’s care during the hospital stay. VA’s action addresses a long-standing problem with overutilization of acute-care beds and inpatient services identified by the VA Inspector General, VA researchers, and us. For example, a January 1996 study by VA researchers reported that about 40 percent of the admissions to acute medical and surgical services were assessed as nonacute; more than 30 percent of the days of care in the acute medical and surgical services of the VA hospitals reviewed were nonacute. VA’s action responded to our recommendation last year that it establish an independent, external preadmission certification program. Systemwide data on the effectiveness of the new utilization review program are not yet available.
Data from the Martinsburg and Washington, D.C., VA medical centers, however, indicate that about 45 percent of the acute inpatient admissions and about 60 percent of the acute days of care reviewed in both facilities since the implementation of the utilization review program did not meet InterQual standards for acuity or intensity of care. In addition to implementing the utilization review program, the Martinsburg medical center established (1) a subacute pilot program that allows patients no longer needing acute care to be transferred to a special unit offering care that is less intensive, (2) a 23-hour observation unit to allow patients to be monitored without being admitted to the hospital, (3) a “hoptel” to provide temporary lodging for patients with transportation problems, and (4) a Preadmission Surgical Screening program through which preoperative tests are performed on an outpatient basis so that patients can be admitted the morning of surgery. In addition, daily reports on all nonacute admissions are given to the bed service chiefs, and a weekly utilization review activity report is provided to bed service chiefs and the chief of staff. These initiatives enabled the Martinsburg medical center to decrease nonacute admissions to medical wards from 72 to 59 percent and nonacute admissions to surgical wards from 78 to 70 percent. The data from continuing stay reviews showed that nonacute days of care provided in medical wards decreased from 92 to 79 percent, and nonacute days of care provided in surgical wards decreased from 82 to 69 percent. Although these are important improvements, with well over half of admissions and days of care continuing to be nonacute, further actions appear warranted. For example, under the current utilization review program, neither the medical center nor the admitting physician suffers any financial consequences from ignoring the findings of the reviewer and admitting patients who could be cared for on an outpatient basis.
Managed care plans also control the use of hospital care through physician incentives. These include profiling of physicians, preferred provider arrangements, and specific financial incentives. Through profiling, physicians are given specific data that compare their practice and admission patterns with those of other physicians. Profiling largely relies on peer pressure to achieve changes in practice patterns. VA’s MCCR program developed one form of profiling—a report indicating how many days of care were denied by health insurers for each attending physician. The report also shows the reasons for the insurer denials. It is not clear, however, how many facilities have implemented the report or whether the information is shared with the attending physicians. For example, the Martinsburg medical center produces the report and distributes it to the chief of Clinical Support, while the Washington, D.C., medical center does not produce the report. A second method managed care plans use to create physician incentives is the use of preferred provider arrangements. PPOs use physician profiling to identify cost-effective providers. Those whose practice patterns vary significantly from the norm are not accepted or not retained as preferred providers. Finally, many HMOs use specific financial incentives to encourage physicians to reduce hospital use. These incentives can range from financial arrangements, in which physicians are placed at risk for a portion of hospital costs, to bonuses if hospital use is kept below a certain level. Such financial incentives, however, carry with them an increased risk that physicians will overreact and fail to admit patients in need of hospital care. VA has limited legislative authority to establish incentive pay provisions for physicians. Actions to reduce claim denials because of inappropriate medical care are largely beyond the control of the MCCR program.
The MCCR program can continue to (1) observe insurers’ certification procedures and (2) negotiate for partial payments to the extent feasible, but it cannot resolve the core issue. VA also expects to increase recoveries by improving its process for submitting claims to Medicare supplemental insurers. As discussed earlier, VA has considerable and increasing difficulty in collecting from Medicare supplemental insurance, in part, because of VA’s inability to submit claims to insurers that, like the claims of Medicare providers, have accompanying remittance advice and explanation of benefits payment vouchers. The MCCR program is exploring the feasibility and costs associated with having a Medicare contractor prepare such documentation for veterans covered by Medicare who use VA facilities. VA has not estimated the potential increased recoveries from the initiative, but notes that the initiative is important to prevent further decreases in recoveries from Medicare supplemental policies. VA also expects to increase recoveries by developing new rate schedules that allow itemized billing. In the past, the Veterans Health Administration has been limited to the use of per diem and per-visit rates because of the lack of detailed cost and workload data from its accounting and information systems. As VA completes implementation of the Decision Support System and other improvements to its information and accounting systems, it proposes to implement new rate schedules to optimize third-party recoveries. As VA shifts from inpatient to outpatient care, the importance of developing a more detailed outpatient charge structure increases. Although many high-cost services, such as cataract surgery, are increasingly performed on an outpatient basis, under its current rate structure, VA can bill only $194 for an outpatient visit, regardless of the type and amount of services provided during the visit.
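The limitation of a flat per-visit rate can be sketched as follows. The $194 figure is from the text; the itemized line items for the cataract example are invented solely to illustrate the gap an itemized schedule is meant to close.

```python
# Hypothetical comparison of VA's flat $194 outpatient visit rate with
# an itemized bill (a procedure-specific physician rate plus a facility
# charge, the structure described in the text). All itemized dollar
# amounts below are invented for illustration.

FLAT_VISIT_RATE = 194

def itemized_total(charges: dict[str, int]) -> int:
    """Sum of the line items on an itemized outpatient bill."""
    return sum(charges.values())

cataract_visit = {
    "physician, procedure-specific rate": 950,   # hypothetical amount
    "facility charge": 1_200,                    # hypothetical amount
}

print("flat-rate bill:", FLAT_VISIT_RATE)
print("itemized bill: ", itemized_total(cataract_visit))
```

Under the flat rate, everything beyond $194 of a high-cost outpatient procedure is simply unbillable, which is the shortfall the October 1997 rate structure was intended to address.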
To resolve this problem, the MCCR program is developing a procedure-specific rate schedule for outpatient physician services. These rates will be billed along with a facility charge. VA plans to implement the new rate structure in October 1997. Implementation of the new rates should help compensate for the decline in recoveries likely to accompany the shifting of care from inpatient to outpatient settings. The 1996 MCCR business plan also estimated that VA should see between a 15- and 25-percent increase in collections if it uses a diagnosis-related group (DRG) rate schedule for inpatient billing. Although the DRG rates are still being developed, VA no longer intends to implement DRG billings in fiscal year 1997. Rather, its efforts have turned to developing a rate schedule for inpatient physician services. Other proposed changes in billing rates are targeted for succeeding years, leading to implementation of locally developed itemized rates in fiscal year 2000. VA believes it can increase recoveries from currently billable insurance by strengthening follow-up on claims denied or partially paid. The business plan notes that some medical centers do not have utilization review coordinators adequately trained in third-party recoveries to facilitate requests for reconsideration of claims. The plan notes that some utilization review coordinators have successfully negotiated payments from insurers; it estimates that approximately 10 percent of denied claims could be overturned and recovery achieved through strong utilization review coordinators. Our work at the Martinsburg and Washington, D.C., medical centers confirmed that there is some potential to achieve additional recoveries through follow-up action. It is unclear, however, whether such actions would result in 10 percent of denied claims being overturned and recovery achieved through follow-up actions. As discussed earlier, for most denied claims, there is little, if any, recovery potential. 
The utilization review coordinator at Martinsburg was able to negotiate partial payments for many claims denied because of medical necessity, but such recoveries accounted for only a small portion of billed charges. While the Washington, D.C., medical center did not actively pursue partial payment during the time of our review, its ability to achieve the same success in obtaining partial payment from insurers depends on a number of factors, including the willingness of insurers to make partial payments. As discussed earlier, insurers are increasingly denying all payment for services unnecessarily provided in hospitals. VA also expects its efforts to automate the capture of data on patient diagnoses, procedures, and providers to increase collections and reduce recovery costs. Prior to April 1995, VA did not require its facilities to include such data for outpatient visits in any of its computer databases. As a result, the MCCR program had to manually review outpatient medical records in order to prepare insurance billings. In April 1995, the Under Secretary for Health changed Veterans Health Administration policy to require the capture of diagnosis, procedure, and provider data for all ambulatory care encounters and services. When fully implemented, the MCCR program estimates that the automated capture of encounter data will enable it to (1) utilize the automatic billing features of its integrated billing system and (2) eliminate staff positions comparable to 572 full-time-equivalent employees currently used to manually review and code data from patient medical records. Among the benefits from the data capture initiative identified by a VA contractor were improved identification of billable visits and increased reimbursement because of improved capture and reporting of procedures. These benefits would result from shifting the staff positions saved by eliminating manual review to improving identification of insurance coverage and follow-up on denied claims. 
The contractor estimated that using the positions to strengthen identification and follow-up would enable VA to generate about $100 million a year in additional outpatient recoveries. In its fiscal year 1998 budget submission, VA indicates that the automated capture of encounter data will also result in additional recoveries of $23 million in fiscal year 1997, increasing to $116 million in fiscal year 2002. Another of VA’s goals is to increase the number of VA users by 20 percent over the next 5 years. One way to meet its recovery projections would be to focus its marketing efforts on attracting veterans with fee-for-service private health insurance. VA officials told us that they do not know how many veterans in their 2.9 million patient base have insurance or how many insured veterans receive billable care. This lack of information on key elements affecting its projections creates considerable uncertainty about the number of new insured users it would need to attract or identify in order to generate its target revenues. VA’s General Counsel has determined that a portion of any payments received from a veteran’s private health insurance should be applied toward any copayments owed by the veteran, including means test, per diem, and pharmacy copayments. While VA’s interpretation is understandable as it applies to Medicare supplemental insurance policies, it is more questionable to apply recoveries from primary insurance toward veterans’ copayments. In addition, as interpreted by VA’s General Counsel, the application of insurance recoveries to offset veteran copayments creates a significant administrative burden for MCCR staff and reduces overall third-party recoveries. Under Public Law 99-272, certain veterans, in order to become eligible for VA medical care, must agree to pay the lesser of the cost of that care or the so-called “means test” copayment. 
The copayment for inpatient hospital and nursing home care is based on the Medicare deductible, while the copayment for outpatient care is equal to 20 percent of the average cost of an outpatient visit. The means test copayments apply to veterans with no service-connected disabilities who have incomes above the means test threshold—$21,611 for a veteran with no dependents in 1997. Public Law 101-508, effective November 5, 1990, imposed additional cost-sharing requirements. First, it added per diem payments—$5 a day for nursing home care and $10 a day for hospital care—to the means test copayment. In addition, it created a new cost-sharing requirement for prescription drugs. All veterans—other than those receiving treatment for a service-connected condition, those with service-connected disabilities rated at 50 percent or higher, and those with incomes below the maximum VA pension level—are required to pay $2 for each 30-day supply of an outpatient prescription. The VA law is silent about the relationship between insurance recoveries and veteran copayments, so VA’s General Counsel provided guidance in 1990 on how the two recovery programs should interact. Specifically, the General Counsel opinion, as expanded through a 1996 reevaluation, provides that recoveries from Medicare supplemental insurance policies should be used first to satisfy veterans’ means test payments, per diem payments, and prescription copayments; and for non-Medicare supplemental insurance, recoveries are to be divided in equal proportions between VA and the veteran; in other words, if the insurer pays 80 percent of allowable charges, then insurance proceeds will be used to pay 80 percent of the veteran’s copayment after the veteran has satisfied any deductible imposed by the insurer. 
VA’s interpretation of recovery provisions as they apply to supplemental insurance follows from an assessment that Medicare supplemental insurance is specifically intended to pay policyholders’ deductibles and copayments and is purchased or provided expressly for that purpose. It is harder to defend, however, using funds insurers provide to VA to pay veterans’ financial obligations when these insurance policies established their deductibles and copayments to discourage unnecessary use of health services. One of the primary arguments insurers made against the enactment of the law authorizing VA recovery from private health insurance was the lack of VA cost-sharing provisions to discourage inappropriate use of health care services. VA argues that it should be treated by insurance companies the same way any private sector hospital is treated. But private sector hospitals do not give a portion of the payment they receive from a patient’s health insurance to the patient. Although VA may not collect more than the cost of its services, insurers typically pay VA less than VA’s billed charges because the insurers reduce the payment in accordance with their cost-sharing provisions. Only in instances in which the combined insurance recoveries and copayments would exceed the cost of VA services would VA be compelled to apply insurance recoveries toward veterans’ copayments. The administrative burden of applying insurance recoveries toward veteran copayments, particularly for $2 prescription copayments, may be an issue as well. In 1994, VA estimated that it cost $.38 for each $1 it collected under the pharmacy copayment program. With the added burden of offsetting insurance recoveries against prescription copayments, the administrative costs are likely to exceed recoveries for veterans with health insurance. This is because VA would typically be able to bill only $.40 of a $2 copayment after the offset. 
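The proportional offset rule described above reduces to simple arithmetic. The following sketch is illustrative only; the function name is ours, and the figures come from the $2 prescription copayment example cited in this report:

```python
def residual_copayment(copayment: float, insurer_share: float) -> float:
    """Portion of a copayment VA could still bill the veteran after the
    offset: if the insurer pays X percent of allowable charges, insurance
    proceeds are credited against X percent of the copayment (after any
    insurer deductible has been satisfied)."""
    return copayment * (1.0 - insurer_share)

# The $2 prescription copayment when an insurer pays 80 percent of charges:
print(round(residual_copayment(2.00, 0.80), 2))  # prints 0.4
```

At the 1994 estimate of $.38 in administrative cost per $1 collected, pursuing the remaining $.40 leaves a thin margin even before the added cost of computing the offset itself, which helps explain the conclusion that administrative costs are likely to exceed recoveries for insured veterans.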
Although VA currently recovers less than a third of the amounts it bills to private health insurers, opportunities to recover more of its billed charges appear to be limited. The amounts that insurers deduct from their payments to VA generally reflect application of insurance policy provisions restricting payments for medically inappropriate care and setting policyholder cost-sharing requirements. In addition, some Medicare supplemental insurers contend that they have overpaid VA claims for years. They are reducing payments and seeking refunds for past overpayments. VA has set goals for its medical care cost recovery program that would require it to almost double recoveries from private health insurance over the next 5 years when VA’s estimates of past overpayments are considered. Because there is little potential to increase recoveries through current billings, the success of VA’s efforts depends largely on its ability to attract new users with private health insurance or improve its efforts to identify current users’ insurance coverage. VA’s ability to achieve its goals is uncertain considering the many factors likely to decrease future recoveries. Although VA has a number of initiatives planned and under way to address some of these factors and increase recoveries, it is not addressing other problems. For example, VA has not contacted state insurance commissions to obtain their help in removing exclusionary clauses in insurance policies that appear to preclude payment to VA; developed procedures to ensure that the time-consuming tasks associated with identifying, confirming, and returning overpayments are not performed at the expense of current billing activities; established mechanisms to provide its physicians with incentives to make appropriate use of VA hospitals; or developed adequate mechanisms for gathering complete and accurate information on veterans’ health insurance policies. 
Now that the Congress has authorized VA to retain health insurance recoveries, VA needs to develop procedures to ensure that such authority does not detract from services available to low-income veterans and veterans with service-connected conditions who have no health insurance. Allowing VA to retain insurance recoveries creates strong financial incentives for VA facilities to place a higher priority on serving insured rather than uninsured veterans. The statutes governing VA recoveries from private health insurance and veteran copayments do not clearly specify the relationship between the two provisions. In the absence of definitive guidance in the law, VA’s General Counsel has determined that insurance recoveries should be used to offset veterans’ copayment responsibilities. The effect of this interpretation is a reduction in overall cost recoveries, increased administrative expense, and reduced incentive for veterans to manage their use of health care services. The Congress may wish to consider clarifying the cost recovery provisions of title 38 of the U.S. Code to direct VA to collect means test copayments, per diem charges, and pharmacy copayments from patients regardless of any amounts recovered from private health insurance except in instances where the insurer pays the full cost of VA care. The identification of billable care provided to veterans with service-connected conditions is administratively cumbersome. Moreover, HIPAA prevents private health insurance from excluding payment for preexisting conditions for more than 12 months after enrollment. The Congress may wish to take advantage of the provisions of HIPAA to authorize VA to recover the costs of service-connected treatments from private health insurance after the specified exclusionary period. A change in the statutory language in title 38 of the U.S. 
Code to authorize VA to recover from private health insurance its costs for providing treatment for service-connected conditions, consistent with the provisions of HIPAA, could, however, be viewed as shifting to the private sector the government’s obligation to care for veterans disabled during or as a result of their military service. On the other hand, now that VA retains recoveries from third-party insurers, this change could generate significant additional revenues for improving health care services for veterans. Moreover, it could offset the incentives created by the Balanced Budget Act for VA facilities to target their services toward privately insured veterans with no service-connected conditions. Finally, VA’s ability to increase recoveries is often hindered by incomplete and inaccurate information on veterans’ employers, incomes, and insurance coverage. Veterans, however, have little direct or indirect incentive to cooperate with VA recovery efforts. The Congress may wish to consider giving VA the authority to disenroll veterans from the VA health care system who knowingly provide VA incomplete or inaccurate data about their incomes, employers, or insurance coverage. We recommend that the Secretary of Veterans Affairs do the following: Establish procedures to work with state insurance commissions to ensure that exclusionary clauses inconsistent with VA’s recovery authority are removed from private health insurance policies. Work with the Director, OPM, to identify options for including VA facilities as preferred or participating providers under FEHBP plans, including HMOs and preferred provider plans. Design physician incentives to encourage appropriate use of hospital care. Such incentives should not, however, be so strong that they would result in denial of needed hospital care. 
In designing the enrollment process for the veterans’ health care program, develop procedures for gathering and updating detailed information on veterans’ employment, insurance, and service-connected disabilities. Assign adequate resources to MCCR activities to protect the government’s interest in resolving insurers’ requests for refunds of claimed overpayments. Develop procedures to ensure that authority to retain health insurance recoveries would not detract from services to veterans who lack private health insurance. We obtained comments on a draft of this report from the Acting Director of Medical Care Cost Recovery (MCCR) and other VA officials. The officials generally concurred with all but one of our recommendations. However, according to a Senior Management Analyst, Management Review and Administration, VA does not agree with our recommendation that it design physician incentives to encourage appropriate use of hospital care. She said that VA believes adequate incentives have already been established through the new Veterans Equitable Resource Allocation system and performance measures. Although the new allocation procedure and performance measures will give veterans integrated service networks and VA facilities greater incentives to provide appropriate care, we do not think that these initiatives will provide sufficient inducement for individual physicians to modify their practice patterns significantly. Existing efforts to reduce inappropriate inpatient care, such as VA’s recently implemented utilization review program, constitute a solid first step to addressing VA’s traditional reliance on institutional care. However, as indicated by the extent of the nonacute care that continues to be provided at the Martinsburg and Washington, D.C., facilities since the program’s inception, this effort may not be sufficient to address physicians’ lack of accountability for their treatment decisions. 
In our view, VA needs to develop incentives such as physician profiling or financial risk-sharing to encourage appropriate use of hospital care. The Acting Director emphasized that limited opportunities exist for VA to collect more of its billed charges and that the key to increased recoveries is improved identification of insurance coverage. He said that VA is pursuing a match with Medicare records that should help to identify private health insurance coverage of Medicare-eligible veterans. In a draft of the report, we recommended that the Secretary of Veterans Affairs work with the Director, OPM, to (1) determine the extent to which FEHBP plans overpaid VA for care provided to veterans who were covered by Medicare and the extent that overpayments should be refunded and (2) develop mutually beneficial changes in how FEHBP plans will reimburse VA for services provided to veterans covered by Medicare. In commenting on the report, VA officials indicated that they had relied on the Department of Justice to handle negotiations with OPM to discuss mutually beneficial changes. After follow-up discussions with OPM, we revised the report to indicate that FEHBP plans will pay VA facilities as secondary to Medicare for those veterans who are covered by Medicare, that this benefit change will occur prospectively, and that past payments will not be refunded. We have also deleted the associated recommendations. VA also provided several technical comments, which have been incorporated in the report as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretary of Veterans Affairs; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. This report was prepared under the direction of Stephen P. Backhus, Director, Veterans’ Affairs and Military Health Issues. Please call Mr. Backhus at (202) 512-7116 if you or your staff have any questions. 
Other contributors to this report included Jim Linz, Sibyl Tilson, Mary Ann Curran, Lesia Mandzia, and Greg Whitney. Richard L. Hembra Assistant Comptroller General
Pursuant to a congressional request, GAO reviewed the Department of Veterans Affairs' (VA) efforts to recover from private health insurers the costs it incurs to provide health care services to veterans with no service-connected disabilities, focusing on: (1) those factors that limit VA's ability to recover more of its billed charges; (2) VA's ability to achieve its revenue targets by identifying factors that could decrease future recoveries and assessing the potential for VA initiatives to increase medical care cost recoveries; and (3) the way VA applies insurance payments to veterans' copayment liability for veterans in the discretionary care category. GAO noted that: (1) attaining VA's goal to increase recoveries from private health insurance from $495 million in fiscal year (FY) 1996 to $826 million in FY 2002 will be difficult; (2) for GAO's sample, most of the charges VA was unable to recover for bills submitted to private health insurers were appropriately denied or reduced by the insurers; (3) recoveries from private health insurance dropped for the first time in FY 1996 and have continued to drop during FY 1997; (4) several factors help explain the decreases and suggest that further decreases are likely, including: (a) the decline and aging of the veteran population, meaning that VA must serve a greater proportion of veterans to maintain its current workload and that more VA users will have secondary, rather than primary, health insurance coverage in the future; (b) veterans' increased enrollment in health maintenance organizations (HMOs) and other managed care plans, and decreased enrollment in fee-for-service plans, which reduces the number of veterans covered by insurance from which VA can reasonably expect to recover; (c) changes in how insurers process VA claims that could result in refunds to insurers of overpayments that VA estimates exceeded $600 million and could reduce future recoveries by over 20 percent; and (d) shifts in care from inpatient 
to outpatient settings that, while both needed and appropriate, could reduce private insurance recoveries and increase recovery costs; (5) VA has a number of initiatives to address some of these problems and to help it attain its recovery goals; (6) these include legislation to: (a) allow VA to retain recoveries from private health insurance and veteran copayments as an incentive to improve the identification and pursuit of recoveries; and (b) extend lapsing authority to recover the costs of services provided to veterans for conditions unrelated to their service-connected disabilities; (7) VA's initiatives would address some, but not all, of the factors affecting future recoveries; (8) however, considerable uncertainty remains about VA's ability to achieve its revenue goal; (9) VA was unable to provide an analytical basis for its recovery projections; (10) projected increases in VA's future recoveries were not supported by or attributed to improvements related to its planned initiatives; and (11) VA's General Counsel interprets the relationship between recoveries from private health insurance and veterans' copayments as requiring that a portion of insurance recoveries be used to reduce veterans' copayment obligations.
Prior to enactment of the Food and Drug Administration Modernization Act of 1997 (FDAMA), which first established incentives for conducting pediatric drug studies in the form of additional market exclusivity, few drugs were studied for pediatric use. As a result, there was a lack of information on optimal dosage, possible side effects, and the effectiveness of drugs for pediatric use. For example, while physicians typically had determined drug dosing for children based on their weight, pediatric drug studies conducted under FDAMA showed that in many cases this was not the best approach. To continue to encourage pediatric drug studies, BPCA was enacted on January 4, 2002, just after the pediatric exclusivity provisions of FDAMA expired on January 1, 2002. BPCA reauthorized and enhanced the pediatric exclusivity provisions of FDAMA. Like FDAMA, BPCA allows FDA to grant drug sponsors pediatric exclusivity—6 months of additional market exclusivity—in exchange for conducting and submitting reports on pediatric drug studies. The goal of the program is to develop additional health information on the use of such drugs in pediatric populations so they can be administered safely and effectively to children. This incentive is similar to that provided by FDAMA; however, BPCA provides additional mechanisms to provide for pediatric studies of drugs that drug sponsors decline to study. The process for initiating pediatric studies under BPCA formally begins when FDA issues a written request to a drug sponsor to conduct pediatric drug studies for a particular drug. FDA may issue a written request after it has reviewed a proposed pediatric study request from a drug sponsor, in which the drug sponsor describes the pediatric drug study or studies it proposes doing in return for pediatric exclusivity. 
In deciding whether to approve the proposed pediatric study request and issue a written request, FDA must determine if the proposed studies will produce information that may result in health benefits for children. Alternatively, FDA may determine on its own that there is a need for more research on a drug for pediatric use and issue a written request without having received a proposed pediatric study request from the drug sponsor. A written request outlines, among other things, the nature of the pediatric drug studies that the drug sponsor must conduct in order to qualify for pediatric exclusivity and a time frame by which those studies should be completed. When a drug sponsor accepts the written request and completes the pediatric drug studies, it submits reports to FDA describing the studies and the study results. BPCA specifies that FDA generally has 90 days to review the study reports to determine whether the pediatric drug studies met the conditions outlined in the written request. If FDA determines that the pediatric drug studies conducted by the drug sponsor were responsive to the written request, it will grant the drug pediatric exclusivity regardless of the study findings. Figure 1 illustrates the process under BPCA. To further the study of drugs when drug sponsors decline a written request, BPCA includes two provisions that did not exist under FDAMA. First, if a drug sponsor declines to conduct the pediatric drug studies requested by FDA for an on-patent drug, BPCA provides for FDA to refer the study of that drug to FNIH, which might then agree to fund the studies. Second, if a drug sponsor declines a request to study an off-patent drug, BPCA provides for referral of the study to NIH for funding. FDA cannot extend pediatric exclusivity in response to written requests for any drugs for which the drug sponsor declined to conduct the requested pediatric drug studies. 
When drug sponsors decline written requests for studies of on-patent drugs, BPCA provides for FDA to refer the study of those drugs to FNIH for funding, when FDA believes that the pediatric drug studies are still warranted. FNIH, which was authorized by Congress to be established in 1990, is guided by a board of directors and began formal operations in 1996 to support the mission of NIH and advance research by linking private sector donors and partners to NIH programs. Although FNIH is a nonprofit corporation that is independent of NIH, FNIH and NIH collaborate to fund certain projects. FNIH has raised approximately $300 million from the private sector over the past 10 years to support four general types of projects: (1) research partnerships; (2) educational programs and projects for fellows, interns, and postdoctoral students; (3) events, lectures, conferences, and communication initiatives; and (4) special projects. Included in these funds is $4.13 million that FNIH raised as of December 2005 to fund pediatric drug studies under BPCA. The majority of FNIH’s funds are restricted by donors for specific projects and cannot be reallocated. In recent years, appropriations of $500,000 were authorized to FNIH annually. To further the study of off-patent drugs, NIH—in consultation with FDA and other experts—develops a list of drugs, including off-patent drugs, which the agency believes are in need of study in children. NIH lists these drugs annually in the Federal Register. FDA may issue written requests for those drugs on the list that it determines to be most in need of study. If the drug sponsor declines or fails to respond to the written request, NIH can contract for, and fund the conduct of, the pediatric drug studies. 
These pediatric drug studies could then be conducted by qualified universities, hospitals, laboratories, contract research organizations, federally funded programs such as pediatric pharmacology research units, or other public or private institutions or individuals. Drug sponsors generally decline written requests for off-patent drugs because the financial incentives are considerably more limited. (See app. II for a description of federal efforts to encourage research on drugs for children less than 1 month of age and app. III for NIH efforts to support pediatric drug studies.) Pediatric drug studies often reveal new information about the safety or effectiveness of a drug, which could indicate the need for a change to its labeling. Generally, the labeling includes important information for health care providers, including proper uses of the drug, proper dosing, and possible adverse effects that could result from taking the drug. FDA may determine that the drug is not approved for use by children, which would be reflected in any labeling changes. According to FDA officials, in order to be considered for pediatric exclusivity, a drug sponsor typically submits results from pediatric drug studies in the form of a “supplemental new drug application.” BPCA specifies that study results, when submitted as part of a supplemental new drug application, are subject to FDA’s performance goals for a scientific review, which in this case is 180 days. FDA’s processes for reviewing study results submitted under BPCA for consideration of labeling changes are not unique to BPCA. These are the same processes the agency would use to review any drug study results in consideration of labeling changes. FDA’s action on the application can include approving the application, determining that the application is approvable (pending the submission of additional information from the sponsor), or determining that the application is not approvable. 
If studies demonstrate that an approved drug is not safe or effective for pediatric use, this information would be reflected in the drug’s labeling. With a determination that the application is approvable, FDA communicates to the drug sponsor that some issues need to be resolved before the application can be approved and describes what additional work is necessary to resolve the issues. This might require that drug sponsors conduct additional analyses. However, this communication would complete the scientific review cycle. When a drug sponsor resubmits the application with the additional analyses, a new scientific review cycle begins. As a result, multiple scientific review cycles might be necessary, increasing the time between initial submission of the application, which includes the pediatric study reports, and approval of a labeling change. If, during FDA’s review of the study report submitted as part of the application, the agency determines that the application is approvable and the only unresolved issue is labeling, FDA and the drug sponsor must attempt to reach agreement on labeling changes within 180 days after the application is submitted to FDA. If FDA and the drug sponsor cannot reach agreement, FDA must refer the matter to its Pediatric Advisory Committee, which would convene and provide recommendations to the Commissioner on the appropriate changes to the drug’s labeling. The Commissioner would then consider the committee’s recommendations in making the final determination on the proper labeling. Most of the on-patent drugs for which FDA requested pediatric drug studies under BPCA were being studied, but no studies resulted when the requests were declined by drug sponsors. Of the 214 on-patent drugs for which FDA requested pediatric drug studies from January 2002 through December 2005, drug sponsors agreed to study 173 (81 percent). 
Of the 41 on-patent drugs that drug sponsors declined to study, FDA referred 9 to FNIH for funding and the foundation had not funded any of those studies as of December 2005. From January 2002 through December 2005, FDA issued 214 written requests for on-patent drugs to be studied under BPCA, and drug sponsors agreed to conduct pediatric drug studies for 173 (81 percent) of those. The remaining 41 written requests were declined. (See app. IV for details about the study of off-patent drugs under BPCA and app. V for a detailed description of the status of all written requests issued by FDA.) Drug sponsors completed pediatric drug studies for 59 of the 173 accepted written requests—studies for the remaining 114 written requests were ongoing—and FDA made a pediatric exclusivity determination for 55 of those through December 2005. Of those 55 written requests, 52 (95 percent) resulted in FDA granting pediatric exclusivity. Figure 2 shows the status of written requests issued under BPCA for the study of on-patent drugs, from January 2002 through December 2005. (See app. VI for a description of the complexity of pediatric drug studies conducted under BPCA.) Under BPCA, when a written request to study an on-patent drug is declined, the study of the drug may be referred to FNIH. However, FNIH’s ability to fund drug studies is limited by the funds available to it. Through December 2005, drug sponsors declined written requests issued under BPCA for 41 on-patent drugs. FDA referred 9 of these 41 written requests (22 percent) to FNIH for funding. FNIH had not funded the study of any of these drugs. NIH has estimated that the cost of studying the drugs that were referred to FNIH for study would exceed $43 million (see table 1). FNIH has been raising funds for the study of drugs referred under BPCA at a rate of approximately $1 million per year. 
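The figures above imply a substantial funding gap. The straight-line projection below is our own back-of-the-envelope extrapolation, not a GAO or FNIH estimate, and it assumes both the cost estimate and the fundraising pace remain unchanged:

```python
# Figures cited in this report; the extrapolation is illustrative only.
estimated_cost = 43_000_000   # NIH cost estimate for the 9 referred studies
raised_so_far = 4_130_000     # FNIH funds raised for BPCA studies, Dec. 2005
annual_rate = 1_000_000       # approximate fundraising pace for BPCA referrals

years_remaining = (estimated_cost - raised_so_far) / annual_rate
print(round(years_remaining, 1))  # prints 38.9
```

At roughly $1 million a year, fully funding the referred studies would take decades, which is consistent with the observation that FNIH had funded none of them as of December 2005.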
Most drugs—about 87 percent—that have been granted pediatric exclusivity under BPCA have had labeling changes as a result of the pediatric drug studies conducted under BPCA. Pediatric drug studies conducted under BPCA showed that children may have been exposed to ineffective drugs, ineffective dosing, overdosing, or side effects that were previously unknown. However, the process for reviewing study results and completing labeling changes was sometimes lengthy, particularly when FDA required additional information to support the changes. Of the 52 drugs studied and granted pediatric exclusivity under BPCA from January 2002 through December 2005, 45 (about 87 percent) had labeling changes as a result of the pediatric drug studies. FDA officials told us that labeling changes were not made for the remaining 7 (about 13 percent) drugs granted pediatric exclusivity, generally because data provided by the pediatric drug studies did not support labeling changes. In addition, 3 other drugs had labeling changes prior to FDA making a decision on granting pediatric exclusivity. FDA officials said these labeling changes were made prior to determining whether pediatric exclusivity should be granted because the pediatric drug studies provided important safety information that should be reflected in the labeling without waiting until the full study results were submitted or pediatric exclusivity was determined. Pediatric drug studies conducted under BPCA have shown that the way that some drugs were being administered to children potentially exposed them to an ineffective therapy, ineffective dosing, overdosing, or previously unknown side effects—including some that affect growth and development. The labeling for these drugs was changed to reflect these study results. Table 2 shows some of these drugs and illustrates these types of labeling changes. 
FDA officials said that the agency has been working to increase the amount of information included in drug labeling, particularly when pediatric drug studies indicate that an approved drug may not be safe or effective for pediatric use. Other drugs have had labeling changes indicating that the drug may be used safely and effectively by children in certain dosages or forms. Typically, this resulted in the drug labeling being changed to indicate that the drug was approved for use by children younger than those for whom it had previously been approved. In other cases, the changes reflected a new formulation of a drug, such as a syrup that was developed for pediatric use, or new directions for preparing the drug for pediatric use that were identified during the pediatric drug studies conducted under BPCA. (See table 3 for examples of drugs with this new type of information.) Although FDA generally completed its first scientific review of study results submitted as a supplemental new drug application—including consideration of labeling changes—within its 180-day goal, the process for completing the review, including obtaining sufficient information to support and approve labeling changes, sometimes took longer. For the 45 drugs granted pediatric exclusivity that had labeling changes, it took an average of almost 9 months after study results were first submitted to FDA for the sponsor to submit, and the agency to review, all of the information FDA required and for the agency to approve the labeling changes. For 13 drugs (about 29 percent), FDA completed this scientific review process and approved labeling changes within 180 days. It took from 181 to 187 days to complete the scientific review process and to approve labeling changes for 14 drugs (about 31 percent). For the remaining 18 drugs (about 40 percent), it took from 238 to 1,055 days for FDA to complete the scientific review process and approve labeling changes. 
For 7 of those drugs, it took more than a year to complete the scientific review process and approve labeling changes. To determine whether and how drug labeling should be changed, FDA conducts a scientific review of the study results that are submitted to the agency by the drug sponsor. Included with the study results is the drug sponsor’s proposal for how the labeling should be changed. FDA can either accept the proposed wording or propose alternative wording. For some drugs, however, the process does not end with FDA’s first scientific review. While the first scientific reviews were generally completed within 180 days, for the 18 drugs that took 238 days or more, FDA determined that it needed additional information from the drug sponsors in order to be able to approve the applications. This often required that the drug sponsors conduct additional analyses or pediatric drug studies. FDA officials said they could not approve any changes to drug labeling until the drug sponsors provided this information. When FDA completed its review of the information that was originally submitted and requested additional information from the drug sponsors, the initial 180-day scientific review ended. A new 180-day scientific review began when the drug sponsors submitted the additional information to FDA. Drug sponsors sometimes took as long as 1 year to gather the additional necessary data and respond to FDA’s requests. This time did not count against FDA’s 180-day goal to complete its scientific review and approve labeling changes because a new 180-day scientific review begins after the required information is submitted. However, we counted the total number of days between submission of study reports and approval of labeling changes. FDA considers itself in conformance with its review goals even though the entire process may take longer than 180 days. 
BPCA provides a dispute resolution process to be used if FDA and the drug sponsor cannot reach agreement on labeling changes within 180 days of when FDA received the application and the only issue holding up FDA approval is the wording of the drug labeling. However, FDA officials said they have never used this process because labeling has never been the only unresolved issue for those applications whose review period exceeded 180 days. Agency officials told us that the possibility of referral to the Pediatric Advisory Committee facilitates the agency's negotiations with drug sponsors on labeling changes because it is something that drug sponsors want to avoid. Reminding drug sponsors that such a process exists has motivated them to complete labeling change negotiations by reaching agreement with FDA. (See app. VII for a discussion of strengths of BPCA identified by FDA and NIH, as well as suggestions for ways to improve BPCA.) Drugs were studied under BPCA for their safety and effectiveness in treating children for a wide range of diseases, including some that are common, serious, or life threatening. We found that the drugs studied under BPCA represented more than 17 broad categories of disease. The category that had the most drugs studied under BPCA was cancer, with 28 drugs. In addition, there were 26 drugs studied for neurological and psychiatric disorders, 19 for endocrine and metabolic disorders, 18 related to cardiovascular disease—including drugs related to hypertension—and 17 related to viral infections. Written requests for some types of drugs were more frequently declined by the drug sponsor than others. For example, 36 percent of written requests for pulmonary drugs and 41 percent of written requests for drugs that treat nonviral infection were declined. In contrast, 19 percent of written requests were declined overall. 
Some of the drugs studied under BPCA were for the treatment of diseases that are common, including those for the treatment of asthma and allergies. Analysis of two national databases shows that about half of the 10 most frequently prescribed drugs for children were studied under BPCA. Based on a survey of prescriptions written by physicians in 2004, 4 of the 10 drugs most frequently prescribed for children were studied under BPCA. A survey of families and their medical providers in 2003 found that 5 of the 10 drugs most frequently prescribed for children were studied under BPCA. In addition, several of the drugs studied under BPCA were for the treatment of diseases that are serious or life threatening to children, such as hypertension, cancer, HIV, and influenza. Table 4 provides information on some of the drugs studied for pediatric use and what is known about the diseases that are relevant to children. Some of the drugs were studied under BPCA to treat complicating conditions in children who had other diseases, while others treated rare diseases. For example, a drug was studied for the treatment of painful bladder spasms in children who have spina bifida. Other drugs were studied to treat overactive bladder symptoms in children with spina bifida and cerebral palsy, to treat children who require chronic pain management because of severe illnesses such as cancer, and to treat partial seizures and epilepsy in children who require more than one drug to control seizures. About 12 percent of the 52 drugs that were granted pediatric exclusivity under BPCA were studied for the treatment of rare diseases, including certain types of leukemia, juvenile rheumatoid arthritis, and narcolepsy. HHS provided written comments on a draft of this report, which we have reprinted in appendix VIII. HHS stated that the draft report provided a significant amount of data and analysis and generally explains the BPCA process. HHS also made four general comments. 
First, HHS commented that the report does not sufficiently acknowledge the success of BPCA. HHS noted that BPCA provides additional incentives for the study of on- patent drugs, a process for the study of off-patent drugs, a safety review of all drugs granted pediatric exclusivity, and the public dissemination of information from pediatric studies conducted. HHS concluded that BPCA has generated more clinical information for the pediatric population than any other legislative or regulatory effort to date. Second, HHS commented that the report confuses FDA’s process for reviewing reports of drug studies conducted under BPCA with time frames for the labeling dispute resolution process outlined in BPCA. HHS suggested that we did not sufficiently acknowledge that some of the time it takes for FDA to approve labeling changes includes time spent by sponsors collecting and submitting additional information. Third, in commenting on our finding that few written requests included neonates, HHS pointed out that written requests for 9 drugs required the inclusion of “newborns” and written requests for 13 drugs required the inclusion of infants (children under 4 months of age). Fourth, HHS commented that we failed to mention that exclusivity attaches to patents as well as existing market exclusivity. We believe that the draft report sent to HHS for comment accurately and adequately addressed each of the four issues upon which HHS commented. An explicit discussion of the overall success of BPCA was outside the scope of this report, as directed by the BPCA mandate and as discussed with the committees of jurisdiction. Nevertheless, the draft report extensively discussed HHS accomplishments such as the number of studies conducted, the number and importance of labeling changes that FDA approved, and the wide range of diseases, including some that are common, serious, or life threatening to children, for which drugs were studied. 
In drafting our report we believe we clearly distinguished between FDA’s goals for completing its review and approval of drug applications and the time frames mandated for using the labeling dispute resolution process as outlined in BPCA. In finding that the process for approving labeling changes is lengthy, we clearly stated that the process included time spent during FDA’s initial review as well as time drug sponsors took to respond to FDA’s requests for additional information, which was as long as 1 year. We also acknowledged that FDA completed its initial review of applications within its 180-day goal. We stated in the draft that FDA has never used the dispute resolution process because labeling has never been the only issue preventing FDA’s approval of a label for more than 180 days. Nevertheless, we have included additional language in this report to further clarify the distinction between FDA’s review process for pediatric applications and labeling dispute resolution. Our draft clearly stated that while written requests issued under BPCA required the inclusion of neonates, the majority of those on-patent written requests—32 of 36—had been first issued under FDAMA. It is therefore not appropriate to attribute the inclusion of neonates in these written requests to BPCA. Further, we included in our count of written requests requiring the inclusion of neonates the 9 written requests that HHS referred to in its comments as requiring the inclusion of newborns. We did not specifically include in our counts the other 13 written requests mentioned in HHS’s comments. According to data provided by FDA, 1 of these written requests was not issued under BPCA, and 2 others were counted among the 9 mentioned above. The remaining 10 written requests were not specifically included in our counts, because the written requests were first issued prior to BPCA and do not specifically require the inclusion of neonates. 
The written requests to which HHS referred in its comments required the inclusion of very young children, age 0-4 months. Our draft report had indicated that written requests requiring the inclusion of young children might produce data about neonates. Our draft report included language that indicated the conditions under which pediatric exclusivity applies. We added language to the report to further clarify the conditions under which pediatric exclusivity can be granted. HHS provided technical comments which we incorporated as appropriate. HHS also stated that many of the oral comments provided by FDA were not reflected in the draft report sent to HHS for formal comment. Some of FDA’s suggested revisions and comments were outside the scope of the report and in some instances we chose to use alternative wording to that suggested by FDA for readability and consistency. As we did with HHS’s general and technical comments on this report, we previously incorporated FDA’s oral comments as appropriate. We are sending copies of this report to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-7119 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. 
In this report, we (1) assessed the extent to which pediatric drug studies were being conducted for on-patent drugs under the Best Pharmaceuticals for Children Act (BPCA), including when drug sponsors declined to conduct the studies; (2) evaluated the impact of BPCA on labeling of drugs for pediatric use and the process by which the labeling was changed; and (3) illustrated the range of diseases treated by the drugs studied under BPCA. Our review focused primarily on those on-patent drugs for which written requests were issued or reissued by the Department of Health and Human Services’ (HHS) Food and Drug Administration (FDA) from January 2002, when BPCA was enacted, through December 2005. Actions taken on these drugs after December 2005 (such as a determination of pediatric exclusivity or a labeling change) were not included in our review. In addition, we reviewed some summary data available about the number of written requests issued under the Food and Drug Administration Modernization Act of 1997 (FDAMA) from January 1998 through December 2001. We also reviewed pertinent laws, regulations, and legislative histories. To assess the extent to which pediatric drug studies were being conducted for on-patent drugs under BPCA, including when the drug sponsors declined to conduct the studies, we identified written requests issued for on-patent drugs from January 2002 through December 2005, and determined which of those were declined by drug sponsors. We also reviewed data provided by FDA on the nature of the pediatric drug studies that were conducted in response to the written requests issued under BPCA. We also examined notices published in the Federal Register, identifying the drugs designated by HHS’s National Institutes of Health (NIH) as most in need of study in children. 
We reviewed data provided to us by the Foundation for the National Institutes of Health (FNIH)—a nonprofit corporation independent of NIH—about funding for pediatric drug studies of on-patent drugs. We interviewed officials from FDA, NIH, and FNIH to understand the processes by which pediatric drug studies are prioritized by the agencies, written requests are issued, drug sponsors respond to written requests, study results are submitted to FDA, and pediatric exclusivity determinations are made. We also reviewed background material describing the role of FNIH in supporting research on children and the funding available for such research. To evaluate the impact of BPCA on the labeling of drugs for pediatric use and the process by which the labeling was changed, we reviewed data provided to us by FDA summarizing the changes made from January 2002 through December 2005 for drugs studied under BPCA. We also used the dates that the changes were approved in order to calculate how long it took for FDA to approve labeling changes. We interviewed officials from FDA about the process by which FDA approves labeling changes as well as the reasons why some drugs did not have labeling changes. To illustrate the range of diseases treated by the drugs studied under BPCA, we reviewed data provided by FDA about the disease each drug was proposed to treat. We also examined data from the Medical Expenditure Panel Survey—administered by the Agency for Healthcare Research and Quality—and the National Ambulatory Medical Care Survey—administered by the National Center for Health Statistics—to assess the extent to which the drugs studied under BPCA were prescribed to children. To obtain other information that is provided in appendixes to this report, we collected and analyzed a variety of data from FDA, NIH, and FNIH about written requests and pediatric studies for both on- and off-patent drugs. 
To obtain a broad perspective on the many issues addressed in our report, we also interviewed representatives of the pharmaceutical industry and health advocates—such as representatives of the American Academy of Pediatrics, the Pharmaceutical Research and Manufacturers of America, the Generic Pharmaceutical Association, the National Organization of Rare Disorders, Public Citizen, the Elizabeth Glaser Pediatric AIDS Foundation, and the Tufts Center for the Study of Drug Development. We evaluated the data used in this report and determined that they were sufficiently reliable for our purposes. We conducted our work from September 2005 through March 2007 in accordance with generally accepted government auditing standards. FDA and NIH have engaged in efforts to increase the inclusion of neonates—children under the age of 1 month—in pediatric drug studies. As part of its encouragement of pediatric studies in general, BPCA identified neonates as a specific group to be included in studies, as appropriate. An examination of the written requests revealed that only 4 of 36 written requests for on-patent drugs first issued under BPCA required the inclusion of neonates. Further, no written requests for on-patent drugs and only two written requests for off-patent drugs have required the inclusion of neonates since FDA and NIH held a workshop that began their major initiative in this regard in 2004. In 2003, NIH conducted three workshops focused on increasing the inclusion of neonates in pediatric drug studies and discussing diseases that affect neonates. In September 2003, NIH staff met to discuss drug studies in neonatology and pediatrics with special emphasis placed on ways to better apply current knowledge in future pediatric drug studies. Two months later, NIH met with a group of experts to discuss the use of the drug dobutamine—used to treat low blood pressure—in neonates. 
NIH ended 2003 with a 1-day seminar designed to address parental attitudes toward neonatal clinical trials. FDA and NIH have collaborated to develop the Newborn Drug Development Initiative (NDDI), a multiphase program intended to identify gaps in knowledge concerning neonatal pharmacology and pediatric drug study design and to explore novel designs for studies of drugs for use by neonates. The NDDI is intended to consist of a series of meetings that will help frame state-of-the-art approaches and research needs. After forming various discussion groups in February 2003, the agencies held a workshop in March 2004 to help frame issues and challenges associated with designing and conducting drug studies with neonates. The workshop addressed ethical issues and drug prioritization in four specialty areas: pain control, pulmonology (the study of conditions affecting the lungs and breathing), cardiology (the study of conditions affecting the heart), and neurology (the study of disorders of the brain and central nervous system). For example, participants in the pain control group reviewed data demonstrating that neonates who undergo multiple painful procedures and receive medication to treat pain may differ in their development of pain receptors compared to those who do not undergo such procedures and treatment. FDA officials said that FDA would apply the findings from the NDDI workshop to written requests for pediatric drug studies in the four specialty areas. NIH officials said that the Pediatric Formulations Initiative is a related effort. They said that both initiatives are long-standing activities that engage in various efforts to enhance information dissemination to improve all pediatric drug studies. According to NIH officials, these initiatives have resulted in numerous publications. FDA and NIH efforts to increase the inclusion of neonates in pediatric drug studies conducted under BPCA have been limited. 
Through 2005, 9 of 16 (56 percent) written requests for off-patent drugs required the inclusion of neonates in the pediatric drug studies. NIH is currently funding pediatric drug studies for four of these written requests. Similarly, 36 of 214 (17 percent) written requests for the study of on-patent drugs issued from January 2002 through December 2005 included a requirement to study neonates, but only 4 of those 36 (11 percent) were first issued under BPCA. The remaining 32 (89 percent) written requests were originally issued under FDAMA, which did not place an emphasis on the inclusion of neonates in pediatric drug studies. Further, all of the written requests for on-patent drugs requiring the inclusion of neonates were issued in 2003, prior to the NDDI. In addition, only two of the written requests for off-patent drugs requiring the inclusion of neonates were issued after the NDDI, and studies for neither of those have been funded. According to information provided by FDA, no written requests for on-patent drugs issued from January 2004 through December 2005 required the inclusion of neonates. FDA officials indicated, however, that they receive information about neonates in response to written requests that do not specifically target them. According to these officials, many written requests require that children from birth through 2 years of age be studied. These pediatric drug studies therefore may include neonates. In addition, inclusion of neonates in some studies may not be appropriate for medical or ethical reasons. BPCA was designed in part to increase pediatric drug studies through federal efforts. NIH has engaged in several efforts to support pediatric drug studies since the passage of BPCA. While NIH plays an important role in providing funding for research for children, the amount provided by NIH to support such activities has not increased significantly under BPCA. 
Since the enactment of BPCA, NIH funding for children's research has increased from $3.1 billion in fiscal year 2003 to $3.2 billion in fiscal year 2005. These figures represent about 11 percent of NIH's total budget each year from 2003 through 2005. The research funds for children were distributed by most of NIH's 28 institutes, centers, and offices. For example, in 2005, 24 of these institutes, centers, and offices funded research on children. One institute, the National Institute of Child Health and Human Development, was responsible for about 26 percent of funding for pediatric research—the largest proportion of NIH's research funding for children. This institute organizes study design teams with FDA and other relevant NIH institutes, conducts contracting activities, and modifies drug labeling for specific ages and diseases. The number of pediatric pharmacology research units—initiated by NIH—devoted to studies for children has remained the same under BPCA. NIH provides about $500,000 annually to each of these research units to provide the infrastructure for independent investigators to initiate and collaborate on studies and clinical trials with private industry and NIH. The number of such research units grew from 7 in 1994 to 13 in 1999 to support the infrastructure for collaborative efforts of pharmacologists to conduct clinical trials that include children. While the number has not changed since the passage of BPCA in 2002, NIH officials said that staff from these units often move on to hospitals throughout the country and enhance the pediatric research capacity nationwide. In addition, they said that an overall increase in pediatric research capacity nationwide in recent years has made it possible to conduct pediatric clinical trials at a number of other sites. They said that, on average, these pediatric pharmacology research units conduct more than 50 pediatric drug studies annually. 
Of these, as many as 20 pediatric drug studies are funded by drug sponsors. NIH officials told us that of the seven off-patent drugs being studied under BPCA with NIH funding through 2005, studies of two were being conducted by these research units. NIH officials said that since on-patent written requests are not published, the full contribution of the research units under BPCA cannot be ascertained. NIH has sponsored a number of forums designed to increase the number of children included in drug studies. As shown in table 5, these forums generated advice and suggestions for NIH concerning drug testing from health experts, process improvements on drug studies and medication use with the pediatric community, and explanations of models and data related to research for children. NIH has also conducted meetings and entered into numerous intra-agency agreements and agreements with FDA to strengthen its relationship with FDA and establish a firm commitment to study medical issues relevant to children. For example, NIH conducted a series of internal meetings in fiscal year 2004 to identify ongoing pediatric drug studies by the National Institute of Mental Health. As an outcome of these meetings, NIH identified and utilized data sets related to the study of lithium as it is used for the treatment of bipolar disorder in children. NIH will use this information to enhance its current understanding of the drug's therapeutic benefit. In addition to providing a mechanism to study on-patent drugs, BPCA also contains provisions for the study of off-patent drugs. FDA initiates its process by issuing a written request to the drug sponsor to study an off-patent drug. If the sponsor declines to study the drug, FDA can refer the study of the drug to NIH for funding. NIH initiates the BPCA process for off-patent drugs by prioritizing the list of drugs that need to be studied. BPCA includes a provision that provides for the funding of the study of off-patent drugs by NIH. 
BPCA requires that NIH—in consultation with FDA and other experts—publish an annual list of drugs for which additional studies are needed to assess their safety and effectiveness in children. FDA can then issue a written request for pediatric studies of the off-patent drugs on the list. If the written request is declined by the drug sponsor, NIH can fund the studies. Few off-patent drugs identified by NIH as in need of study for pediatric use have been studied. From 2003 through 2006, NIH has listed off-patent drugs that were recommended for study by experts in pediatric research and clinical practice. By 2005, NIH had identified 40 off-patent drugs that it believed should be studied for pediatric use. Through 2005, FDA issued written requests for 16 of these drugs. All but one of these written requests were declined by drug sponsors. NIH funded pediatric drug studies for 7 of the remaining 15 written requests declined by drug sponsors through December 2005. NIH provided several reasons why it has not pursued the study of some off-patent drugs that drug sponsors declined to study. Concerns about the incidence of the diseases that the drugs were developed to treat, the feasibility of study design, drug safety, and changes in the drugs’ patent status have caused the agency to reconsider the merit of studying some of the drugs it identified as important for study in children. For example, in one case NIH issued a request for proposals to study a drug but received no response. In other cases, NIH is awaiting consultation with pediatric experts to determine the potential for study. Further, NIH has not received appropriations specifically for funding pediatric drug studies under BPCA. Rather, according to agency officials, NIH uses lump sum appropriations made to various institutes to fund pediatric drug studies under BPCA. In fiscal year 2005, NIH spent approximately $25 million for these pediatric drug studies. 
NIH anticipates spending an estimated $52.5 million for pediatric drug studies in response to seven written requests that FDA issued to drug sponsors from January 2002 through December 2005. These pediatric drug studies were designed to take from 3 to 4 years and will be completed in 2007 at the earliest. Where possible, NIH identifies another government agency or institute within NIH that might be able to meet the requirements of the written requests and conduct the pediatric drug studies. In cases where a government agency will conduct the pediatric drug studies, NIH institutes enter into intra- or interagency agreements for the studies. If those efforts fail, the agency develops and publishes requests for proposals for others to conduct the pediatric studies. NIH anticipates spending approximately $16.0 million for the funding of pediatric drug studies of four additional off-patent drugs for which FDA did not issue written requests—and which therefore are not covered by the requirements of BPCA—but three of these drugs have since been listed by NIH in the Federal Register as needing study in children. (See table 6.) The drugs whose study NIH is funding without written requests were selected because of special circumstances that raised their priority for funding. NIH funded the study of daunomycin and methotrexate—both cancer drugs—before placing them on its 2006 list of drugs for study in children. NIH officials told us that the Children's Oncology Group of the National Cancer Institute was already working with an appropriate group of patients and was at a critical stage in developing the pediatric drug studies that would produce data for both drugs, so pediatric drug studies were funded before the drugs were placed on the priority list. NIH officials also told us that ketamine is administered to more than 30,000 children for sedation each year. Studies done in animals, however, have suggested that the drug may lead to cell death in the brain. 
As a result, the drug cannot be ethically tested in children. NIH is therefore collaborating with FDA to conduct studies in nonhuman primates. NIH officials report that methylphenidate is used by an estimated 2.5 million school-aged children to treat attention deficit hyperactivity disorder. However, a recent study suggested some potential genetic toxicity of the drug. Because of these findings, the drug was targeted as a priority and NIH was able to fund some of the planned studies related to this drug. From January 2002 through December 2005, FDA issued 214 written requests for the study of on-patent drugs. The agency also issued 16 written requests for the study of off-patent drugs. Fewer written requests were issued and more were declined by drug sponsors under BPCA than under FDAMA. From January 2002, when BPCA was enacted, through December 2005, FDA issued or reissued 214 written requests for on-patent drugs, and drug sponsors declined 41 of those. FDA issued 68 written requests under BPCA for the study of on-patent drugs, 20 (29 percent) of which were declined by the drug sponsors. FDA reissued 146 written requests for on-patent drugs that were originally issued under FDAMA because the pediatric drug studies had not been completed at the time BPCA went into effect. Included in the 146 were 21 (14 percent) written requests that were subsequently declined by the drug sponsors. Therefore, drug sponsors accepted 173 written requests for the study of on-patent drugs under BPCA during this period. Under FDAMA, FDA issued 227 written requests. Drug sponsors did not conduct pediatric drug studies or submit study results for 30 of the 227 (13 percent) written requests issued under FDAMA (see fig. 3). FDA officials offered two primary reasons why fewer written requests were issued under BPCA than under FDAMA. 
First, according to FDA officials, when FDAMA was enacted, FDA and some drug sponsors had already identified a large number of drugs that they believed needed to be studied for pediatric use. By the time BPCA was enacted, written requests for the study of these drugs had already been issued. Second, FDA officials said there was a surge of written requests prior to the sunset of FDAMA. Agency officials expect the same surge to occur prior to the sunset of the pediatric exclusivity provisions of BPCA in 2007. FDA officials also offered a number of reasons that the proportion of written requests issued under BPCA that were declined was greater than that for those issued under FDAMA. While FDA does not track the reasons that drug sponsors decline specific written requests, FDA officials expect that a major reason that the written requests were declined is that the agency sometimes requests more extensive pediatric drug studies, and therefore more costly studies, than the sponsors would like to do. This may be the case even when the drug sponsors initiated the written request process. FDA officials said that upon consideration of FDA’s written requests, drug sponsors may make a business decision not to conduct the requested pediatric drug studies because they may be too costly for the expected return associated with pediatric exclusivity. Agency officials reported that since the drugs studied under FDAMA were more likely to be those with the greatest expected financial return or the easiest to study, they are not surprised at the higher proportion of pediatric drug studies declined under BPCA. Further, under BPCA drug sponsors are required to pay user fees—as high as $767,400 in fiscal year 2006—when study results are submitted for pediatric exclusivity consideration. As a result, the process of gaining pediatric exclusivity has become more expensive than it was under FDAMA when drug sponsors were exempt from such fees for pediatric drug studies. 
FDA officials said they are not discouraged by the increase in the number of written requests that have been declined. In 2001, FDA reported to Congress that the agency expected drug sponsors to conduct pediatric drug studies for 80 percent of written requests. The rate at which written requests for studies of on-patent drugs were accepted under BPCA—71 percent—is close to the target of 80 percent, and it is substantially larger than the 15 to 30 percent of drugs that FDA officials have reported were labeled for pediatric use prior to the authorization of pediatric exclusivity under FDAMA and BPCA. The pediatric drug studies conducted under BPCA were complex and sizable, involving a large number of study sites and children. From July 2002 through December 2005, drug sponsors submitted study reports to FDA in response to 59 written requests. FDA made pediatric exclusivity determinations for 55 of those written requests by December 2005, and most—51, or 93 percent—were made in 90 days or less. For the 59 written requests for which study results were submitted to FDA, a total of 143 pediatric drug studies were conducted at 2,860 different study sites with more than 25,000 children participating (see table 7). In December 2005, FDA projected that for the drugs for which studies had not yet been submitted for review, there would be nearly 20,000 more children participating in the studies. Officials from FDA and NIH discussed a number of important strengths of BPCA. In our interviews with industry group representatives and in a public forum, a number of suggestions have also been made for ways that BPCA could be improved. FDA officials identified a number of important strengths of BPCA. Specifically, they commented on the following: Economic incentives to conduct pediatric drug studies. Because of the economic incentives in BPCA, FDA officials argue that many logistical issues inherent in conducting pediatric drug studies have been overcome.
FDA may also issue a written request for pediatric drug studies for rare conditions, offering an additional incentive to develop medications for rare diseases that occur only in children. Availability of summaries of pediatric drug studies. FDA officials reported that the public dissemination of study summaries has ensured that study information is available to the health care community and has been useful to prescribers to know what has been learned about drugs’ use in children. Broad scope of pediatric drug studies. BPCA allows FDA to issue written requests for pediatric drug studies for the treatment of any disease, regardless of whether the drug in question is currently indicated to treat that disease in adults. For example, FDA issued a written request for the study of a drug currently indicated to treat prostate cancer. The drug is being tested in children to see if it is effective in treating early puberty in boys. Use of dispute resolution as a negotiating tool in ensuring labeling changes. Although FDA has never invoked its authority under BPCA to use the dispute resolution process for making labeling changes, it has been an important negotiating tool. FDA officials indicated that when the agency has expressed its intention to use the process, the issues that had been raised in labeling negotiations were effectively resolved. Improved safety through focused pediatric safety reviews. BPCA’s requirement that FDA conduct additional monitoring of adverse event reports for 1 year after a drug is granted pediatric exclusivity has been useful to FDA in prioritizing safety issues for children. For example, an analysis of a drug 1 year after pediatric exclusivity was granted showed that there were deaths among children as a result of overuse or misuse of the drug. This led the agency to amend the labeling regarding the appropriate population for the drug. 
NIH officials said they have found the process of developing the list of drugs important for study in children to be extremely helpful. NIH officials told us that since the inception of BPCA, they have learned a great deal about existing gaps in the drug development process for children, including a lack of data about which drugs are used by children and how frequently. To gather additional information, NIH has contracted for literature reviews to decrease the possibility that unnecessary pediatric drug studies are conducted. These officials also stated that BPCA and the development of the priority list have helped to solidify an alliance between NIH and FDA, which has led to discussions and resolutions of scientific and ethical issues relating to pediatric drug studies. The Institute of Medicine convened a forum on pediatric research in June 2006 where forum participants made suggestions for how BPCA could be improved. In addition, we discussed suggestions for improving BPCA with interest group representatives. Forum participants suggested that the timing of the determination of pediatric exclusivity should parallel the scientific review of a drug application and that both should be within 180 days of FDA receiving the results from the pediatric drug studies. FDA’s ability to assess the overall quality of the pediatric drug studies in the 90 days currently allotted for the review was questioned. Some forum participants also stated that a longer review period could result in different determinations in some cases. For example, FDA’s scientific review of data related to the study of one drug showed that the children participating in the pediatric drug studies had not received the treatments as the drug sponsors had suggested in their description of the study results. 
While the agency had granted the drug sponsor pediatric exclusivity based on its 90-day review, it might not have done so based on what was learned during the longer, 180-day scientific review. In addition, it was suggested that drug sponsors be required to submit their study results for pediatric exclusivity determination at least 1 year prior to patent expiration. This would allow the generic drug industry time to better plan its release of drugs. We were told that sometimes generic drugs have had to be destroyed because pediatric exclusivity determinations were made after the generic version of the drug had been manufactured and the drug's expiration date would not allow the product to be sold. Representatives from interest groups would like the written requests to be public information and would also like FDA to publicly announce when it receives study results that have been submitted in response to a written request. This would allow the generic drug industry to better schedule the introduction of generic drugs into the market. Other suggestions for how the study of off-patent drugs could be more effectively encouraged were offered at the forum. A forum participant suggested that methods similar to those being adopted by the European Union be implemented. According to forum participants, under new legislation in Europe, companies that study off-patent drugs will be offered a variety of incentives, such as 10 years of data protection (meaning that the data generated to support the marketing of the drug cannot be used to support another drug, in an effort to delay competition), the right to use the existing brand name (to enable the drug sponsor to capitalize on existing brand recognition), and the ability to add a symbol to the drug labeling indicating the drug has been studied in children.
Another suggestion was that current fees paid by drug sponsors for review of their drug applications could be used to fund the study of off-patent drugs (as well as on-patent drugs that drug sponsors decline to study). These fees—$767,400 for a new drug application and $383,700 for a supplemental drug application in fiscal year 2006—are collected from drug sponsors when study results are submitted to FDA for review and consideration of pediatric exclusivity. In addition to the contact named above, Thomas Conahan, Assistant Director; Shaunessye Curry; Cathleen Hamann; Martha Kelly; Julian Klazkin; Carolyn Feis Korman; Gloria Taylor; and Suzanne Worth made key contributions to this report.
About two-thirds of drugs that are prescribed for children have not been studied and labeled for pediatric use, which places children at risk of being exposed to ineffective treatment or incorrect dosing. The Best Pharmaceuticals for Children Act (BPCA), enacted in 2002, encourages the manufacturers, or sponsors, of drugs that still have marketing exclusivity--that is, are on-patent--to conduct pediatric drug studies, as requested by the Food and Drug Administration (FDA). If they do so, FDA may extend for 6 months the period during which no equivalent generic drugs can be marketed. This is referred to as pediatric exclusivity. BPCA required that GAO assess the effect of BPCA on pediatric drug studies and labeling. As discussed with the committees of jurisdiction, GAO (1) assessed the extent to which pediatric drug studies were being conducted under BPCA for on-patent drugs, including when drug sponsors declined to conduct the studies; (2) evaluated the impact of BPCA on labeling drugs for pediatric use and the process by which the labeling was changed; and (3) illustrated the range of diseases treated by the drugs studied under BPCA. GAO examined data about the drugs for which FDA requested studies under BPCA from 2002 through 2005. GAO also interviewed officials from relevant federal agencies, pharmaceutical industry representatives, and health advocates. Drug sponsors have initiated pediatric drug studies for most of the on-patent drugs for which FDA has requested studies, but no drugs were being studied when drug sponsors declined these requests. Sponsors agreed to 173 of the 214 written requests for pediatric studies of on-patent drugs. In cases where drug sponsors decline to study the drugs, BPCA provides for FDA to refer the study of these drugs to the Foundation for the National Institutes of Health (FNIH), a nonprofit corporation. FNIH had not funded studies for any of the nine drugs that FDA referred as of December 2005. 
Most drugs (about 87 percent) granted pediatric exclusivity under BPCA had labeling changes--often because the pediatric drug studies found that children may have been exposed to ineffective drugs, ineffective dosing, overdosing, or previously unknown side effects. However, the process for approving labeling changes was often lengthy. It took from 238 to 1,055 days for information to be reviewed and labeling changes to be approved for 18 drugs (about 40 percent), and 7 of those took more than 1 year. Drugs were studied under BPCA for the treatment of a wide range of diseases, including those that are common, serious, or life threatening to children. These drugs represented more than 17 broad categories of disease, such as cancer. The Department of Health and Human Services stated that the report provides a significant amount of data and analysis and generally explains the BPCA process, but expressed concern that it did not sufficiently acknowledge the success of BPCA or clearly describe some elements of FDA's process. GAO incorporated comments as appropriate.
In 1981, Congress created the research tax credit to encourage business to do more research. The credit has never been a permanent part of the tax code. Since its enactment on a temporary basis in 1981, the credit has been extended six times and modified four times. The research tax credit has always been incremental in nature. Taxpayers receive a credit only for qualified research spending that exceeds a base amount. Beginning in 1981, taxpayers could reduce their tax liability by 25 percent of qualified research that exceeded a base amount that was equal to the average research expenditure of the 3 previous years or a base amount that was equal to 50 percent of the current year's expenditures, whichever was greater. The Tax Reform Act of 1986 modified the credit by reducing the rate to 20 percent of qualified spending above the base amount and more narrowly defining qualified expenditures. The credit was changed again in 1988 to require that taxpayers reduce their deductions for research expenditures by an amount equal to 50 percent of the credit they claim. In 1989, this amount was increased to 100 percent of the credit they claim. The Omnibus Budget Reconciliation Act of 1989 changed the method for calculating the base amount. The base calculated as the average expenditure of the 3 previous years was replaced by a base amount equal to the ratio of total qualified research expenses to total gross receipts for 1984 through 1988, multiplied by the average amount of the taxpayer's gross receipts for the preceding 4 years. This base change removed the link between increases in current spending and future base amounts that had reduced the incentive to undertake additional research spending under the prior method for calculating the base. The evaluation of the effectiveness of the credit requires first estimating the additional research spending stimulated by the credit.
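The 1989 base-amount calculation described above can be sketched numerically. This is a simplified illustration only: it omits statutory details not discussed in the report, and the function names and dollar figures are hypothetical.

```python
def credit_base_1989(qre_1984_88, receipts_1984_88, recent_receipts):
    """Post-1989 base amount: the 1984-88 ratio of qualified research
    expenses (QREs) to gross receipts, multiplied by the average gross
    receipts of the preceding 4 years. Simplified sketch."""
    fixed_ratio = sum(qre_1984_88) / sum(receipts_1984_88)
    return fixed_ratio * (sum(recent_receipts) / len(recent_receipts))

def incremental_credit(qualified_spending, base_amount, rate=0.20):
    """The credit is earned only on qualified spending above the base."""
    return rate * max(0.0, qualified_spending - base_amount)

# Hypothetical firm: QREs were 10 percent of receipts in 1984-88, and
# recent receipts average $200M, so the base is $20M. Spending $30M on
# qualified research earns a credit of 0.20 * (30 - 20) = $2M.
base = credit_base_1989([10] * 5, [100] * 5, [200] * 4)
credit = incremental_credit(30.0, base)
```

Because the ratio is fixed on 1984-88 data, raising current spending no longer raises future base amounts, which is the incentive improvement the 1989 change was intended to produce.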
Ideally, this additional spending should then be evaluated according to the net benefit it produces for society. However, this net social benefit is difficult to determine because it depends on how the research of some companies affects the costs and products of other companies. Some researchers who have studied the credit have instead calculated a “bang-per-buck” ratio, the amount of spending stimulated per dollar of revenue cost. Once a decision has been made to provide some form of credit, this ratio is a relevant criterion for assessing alternative designs. Most early studies of the research tax credit found that, although the credit may have stimulated some additional research spending, the effect on spending was relatively small. For example, Edwin Mansfield in his 1986 study asked a random sample of corporate officials to assess the effect of the credit on research spending and estimated from their responses that the additional spending induced by the credit equaled about one-third of the revenue cost. Robert Eisner, et al., in 1984 compared the growth of research spending that qualified for the credit and spending that did not qualify in 1981 and 1982, and found no positive impact of the credit on the growth of research spending. Other early studies relied on estimates of the responsiveness of research spending to reductions in its price to arrive at similar conclusions. Because the credit is effectively a reduction in the price of research, the greater the responsiveness of research spending to price reductions, the more additional spending the credit is likely to stimulate. Economists measure the responsiveness in terms of the “price elasticity” of spending, which shows the percentage increase in spending that would result from a 1-percent reduction in the after-tax price of research and development (R&D). In 1989, we reported that the best available evidence indicated that research spending is not very responsive to price reductions. 
Most estimates of the price elasticity of spending fell in the range of –0.2 to –0.5, implying that a 1-percent reduction in the price of research would lead to between a 0.2 percent and 0.5 percent increase in spending. In our 1989 report, we used Internal Revenue Service (IRS) data to estimate that between 1981 and 1985, the credit provided companies with a benefit of 3 to 5 cents per dollar of additional spending. This benefit to companies is equivalent to a reduction in the price of research. Combining these price reductions with the range of elasticity estimates, we estimated that the credit stimulated between $1 billion and $2.5 billion of additional research spending between 1981 and 1985 at a cost of $7 billion in tax revenue. Thus, we estimated that each dollar of taxes forgone stimulated between 15 and 36 cents of research spending. Reports on the research tax credit by KPMG Peat Marwick and by the Office of Technology Assessment (OTA) include reviews of studies of the credit’s effectiveness that were issued since our 1989 report. The KPMG Peat Marwick report concludes that the studies provide evidence that the spending stimulated by the credit equals or exceeds its revenue cost. Specifically, the report concludes that the recent studies show that one dollar of credit stimulates about one dollar of R&D spending in the short run, and as much as two dollars in the long run. According to the KPMG Peat Marwick report, the recent studies KPMG Peat Marwick reviewed provide better estimates of the effectiveness of the credit than earlier studies because they analyze longer data series and because they use what it considered to be better methodologies for analyzing the effect of the credit. The OTA report reviewed the same recent studies as KPMG Peat Marwick and observed that the available literature generally reports that the credit stimulates about one dollar of additional spending per dollar of revenue cost. 
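The elasticity and bang-per-buck arithmetic in the passage above can be made explicit. This is a sketch using the report's own 1989 figures; the function names are ours, not the report's.

```python
def additional_spending(baseline, price_cut, elasticity):
    """Spending response implied by a price elasticity: a 1-percent
    price cut (price_cut=0.01) with elasticity -0.3 raises spending
    by about 0.3 percent of the baseline."""
    return baseline * (-elasticity) * price_cut

def bang_per_buck(stimulated, revenue_cost):
    """Research spending stimulated per dollar of forgone tax revenue."""
    return stimulated / revenue_cost

# With the report's figures, $1.0-$2.5 billion of stimulated spending
# against $7 billion of forgone revenue gives a ratio of roughly
# 0.14-0.36, comparable to the 15 to 36 cents the report cites.
low = bang_per_buck(1.0, 7.0)
high = bang_per_buck(2.5, 7.0)
```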
However, OTA pointed out that the studies contain data and methodological uncertainties. For our review, we evaluated the studies cited in these two reports as well as other studies not included in either report. We also addressed some methodological issues that were not addressed in these reports and provided a more detailed evaluation of each study. Our first objective was to evaluate recent studies of the research tax credit for the adequacy of the data and methods used to determine the amount of research spending stimulated per dollar of revenue cost. In particular, we were to determine if recent studies provided adequate evidence to conclude that each dollar of tax credit stimulates at least one dollar of research spending in the short run and, over the long run, stimulates about two dollars of research spending. Our second objective was to identify the factors other than spending per dollar of revenue cost that determine the credit’s value to society. To meet our first objective, we reviewed the six studies cited by the KPMG Peat Marwick report and two studies that the report did not cite that we identified from our review of the literature on the credit and from our interviews with authors of research tax credit studies. In general, these recent studies were published since our 1989 report, although one study cited by KPMG Peat Marwick was published in 1987. The studies are listed in appendix I. We used standard statistical and economic principles in our review and evaluation of the studies of the research tax credit. We relied upon internal economists to carry out this evaluation. In our evaluation, we considered such factors as the adequacy of the data used to estimate the effect of the credit, the adequacy of the variables used to measure the incentive provided by the credit, and the sensitivity of the estimates to assumptions about taxpayer behavior. 
We also interviewed the authors of the studies of the research tax credit and requested comments on a draft of our evaluation of their studies. We received comments from the authors of six of the eight studies that we reviewed. All agreed that our report accurately summarized their studies. However, not all agreed with the importance of the data and methodological limitations that we identified in their work. A summary of their comments appears on pages 12 and 13. We also requested comments on a draft of our report from the authors of the KPMG Peat Marwick report. They stated that they appreciated the opportunity to comment on our report but that after reviewing our report, they had no comments to submit. To meet the second objective, we reviewed academic articles and government studies about the determinants of the social benefits of research spending. We also reviewed studies that describe the difficulties encountered when attempting to measure the full social costs and benefits of research. We did our work in Washington, D.C., from December 1995 through January 1996 in accordance with generally accepted government auditing standards. The recent studies that we reviewed provided mixed evidence on the amount of spending stimulated per dollar of revenue cost. Of the eight studies we reviewed, three supported the claim that one dollar of credit stimulated about two dollars of additional research spending. Another study, which did not directly evaluate the research tax credit, reported estimates of the responsiveness of research spending to other tax incentives. These estimates appear to be consistent with the claim that the credit stimulates spending that exceeds its revenue cost. However, two studies reported that the credit stimulated spending that was less than its revenue cost, and another two of the studies reported estimates of additional spending that do not appear to support the claim that spending exceeded revenue cost. 
One of these latter studies does not compare additional research spending to revenue cost but does report an estimate of additional spending that is likely to be less than the revenue cost. The other study reported that additional spending exceeded revenue cost through 1985 but reported estimates of additional spending that were likely to be less than the revenue cost after 1985. Most of the recent studies used more sophisticated methods than prior studies when analyzing the effectiveness of the credit. For example, the studies improved on prior studies by using methods that attempt to distinguish the credit’s effect from other influences on research spending like the potential size of the market for the product of the research. However, the studies have the following data and methodological limitations. The most appropriate data for assessing the effect of the credit are tax return data. These confidential tax return data were not available to the authors of the studies. Instead, they used publicly available data sources, chiefly the COMPUSTAT data service, which do not accurately reflect the incentive provided by the credit. This incentive depends on a company’s ability to earn credits by having qualified research spending that exceeds the base amount and on a company’s ability to claim its credits by having sufficient taxable income. We concluded from our own comparison of tax return data with COMPUSTAT data and from studies by other researchers that differences in the measurement of research spending and taxable income make COMPUSTAT an unreliable proxy for tax return data when analyzing the credit. Because studies that use the public data cannot accurately determine the credit’s incentive, they may not accurately measure the amount of spending stimulated by the credit. Three studies that analyzed the credit at the industry level may not accurately measure the credit’s incentive. 
Analysis at the industry level of aggregation does not reflect the different incentives the companies face and their different responses to these incentives. Industries include firms that earn no credit because their spending is less than the base amount or claim no credit because they have no tax liability. An analysis at the industry level that assigns the same incentive to all these firms would not capture these differences and is not likely to produce very precise measures of the credit’s effect on research spending. The eight studies all used measures of the tax incentive that did not incorporate important interactions with other features of the tax code. For example, studies that measured the tax incentives by reductions in the cost of research and development due to tax policy changes did not include all the research and development provisions of the tax code. In addition to the credit, the cost of R&D depends on other tax code provisions like those governing the allocation of research expenses between foreign sources and the United States. The studies included some of these provisions but not others. Including all relevant provisions of the code may change the estimates of the research credit’s effectiveness. The estimates in several of the studies were highly sensitive to assumptions made about the data and taxpayer behavior. For example, one study’s estimate of the responsiveness of spending to tax incentives was reduced by half when more firms were included in the sample studied or the assumptions were changed on how taxpayers allocate research and development expenses between domestic and foreign sources. Other studies that differed in terms of how they measured the tax incentive produced significantly different estimates of the spending stimulated by the credit. This sensitivity to the assumptions made by the authors leads us to conclude that much uncertainty remains about the effect of the credit on research spending. 
The estimates presented in the most recent studies do not provide all the information needed to evaluate the effectiveness of the latest version of the credit. The amount of spending stimulated per dollar of revenue cost depends on how the design of the credit affects the incentive to increase research spending and on how the design affects the revenue cost. Only one of the recent studies estimated the effectiveness of the credit for years after its redesign in 1989, and the author of that study is not confident of her results for the post-1989 period. Some reviewers have implied that the recent studies’ estimates of the responsiveness of research spending to price reductions—the price elasticity of spending—are equivalent to the amount of research spending stimulated by the credit per dollar of revenue cost. They said that using an empirical estimate that a 1-percent reduction in the price of R&D will lead to a 1-percent increase in research spending implies that one dollar of credit will lead to one dollar of additional spending. However, these may not be equivalent estimates because the amount of research spending stimulated by the credit per dollar of revenue depends on the design of the credit as well as the responsiveness of spending to price reductions. For example, the credit’s effect on spending and revenue cost will depend on whether it is designed as a flat credit, which applies to total research spending, or as an incremental credit, which applies only to spending that exceeds a base amount. For the same responsiveness of spending to price reductions, a flat credit with a 10 percent rate should stimulate roughly the same amount of spending as an incremental credit with the same rate because both credits provide the same 10 percent effective reduction in the price of research. 
However, the flat credit would allow a company to earn a credit equal to 10 percent of its total qualified research spending, while the incremental credit would give the company a credit equal only to 10 percent of the difference between its current qualified spending and some base spending amount. Consequently, the 10 percent flat credit would have a higher revenue cost and, therefore, a lower bang-per-buck than the 10 percent incremental credit. Incremental credits that differ from one another in terms of how base spending is defined can also differ substantially in terms of how much spending they stimulate per dollar of revenue cost. The bang-per-buck of the current incremental credit may be significantly different from that of the credit that existed prior to 1990. As we reported in our May 1995 testimony, the redesign of the credit in 1989 should have increased the size of the incentive provided per dollar of revenue cost. However, as we also reported in our testimony, there is evidence that the incentive provided by the redesigned credit had eroded over time and that the revenue cost of the additional spending stimulated by the credit had increased. The value of the research tax credit to society cannot be determined simply by comparing the amount of research spending stimulated by the credit versus the credit’s revenue cost. To fully evaluate the credit’s effect, one would have to (1) estimate the total benefits gained by society from the research stimulated by the credit; (2) estimate the resource costs of doing the research; (3) estimate the administration, compliance, and efficiency costs to society resulting from the collection of taxes (or the borrowing of money) required to fund the credit; and (4) compare the benefits to the costs. Simply knowing how much additional research spending the credit stimulates does not tell you the value of that research to society. 
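The flat-versus-incremental comparison above can be illustrated with hypothetical figures (a $100 million research spender with an $80 million base amount; the 10 percent rate follows the text's example):

```python
RATE = 0.10

def flat_credit(total_spending):
    """A flat credit applies to all qualified research spending."""
    return RATE * total_spending

def incremental_credit(total_spending, base_amount):
    """An incremental credit applies only to spending above the base."""
    return RATE * max(0.0, total_spending - base_amount)

# Both designs pay 10 cents on the marginal research dollar, so they
# create roughly the same incentive, but their revenue costs differ:
flat_cost = flat_credit(100.0)               # credit on the full $100M
incr_cost = incremental_credit(100.0, 80.0)  # credit only on the $20M above base
```

If each design stimulates the same additional spending S, the incremental credit's bang-per-buck (S / incr_cost) is five times the flat credit's (S / flat_cost) in this example, which is the point the passage makes about revenue cost driving the ratio.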
Similarly, the amount of revenue needed to fund the credit does not tell you the total cost to society of the credit. There is a general consensus among economists that research is one of the areas where some government intervention in the marketplace may improve economic efficiency. From society’s point of view, individual companies may invest too little in research if the return on their investment is less than the full benefit that society derives from the research. If the research leads to new products, reduces costs or increases productivity for other companies and consumers throughout the economy, the benefits to society may exceed the return on investment of the companies that conduct the research. Therefore, companies may not do as much research as society finds desirable, and government policy to encourage research may be viewed as appropriate. However, as the Joint Committee on Taxation and OTA have noted, it is also possible to decrease economic efficiency by encouraging too much spending on research. Because not all research generates social benefits that exceed the returns to companies conducting the research, encouraging more research may not be economically efficient. It would be very difficult to determine, given the difficulty of measuring the social benefit, whether the research tax credit increases or decreases economic efficiency. No one that we are aware of, including the authors cited by KPMG Peat Marwick, has undertaken a study that could answer that question conclusively. As previously discussed, we requested comments from the authors of the KPMG Peat Marwick report. After reviewing a draft of our report, they stated that they had no comments to submit. We also requested that the authors of the eight studies of the research tax credit that we reviewed provide comments on our evaluation of their studies. The following summarizes the comments of the six authors who responded to our request. 
All of the authors we interviewed agreed that the publicly available data contain measurement errors that may affect their estimates of the credit’s effectiveness. However, two of the authors said that they believed that their estimates would not change significantly if tax return data were used. They said either that the data problem was minor or that statistical methods used to correct the measurement error reasonably addressed the problem. Two authors also said that they believed that their elasticity estimates would not change significantly but noted that predicting what would happen to the estimates when better data are used is difficult. Two authors agreed with our assessment of the importance of the potential inaccuracies from using COMPUSTAT data. As explained more fully in appendix I, we have concluded that COMPUSTAT data are not a suitable proxy for tax return data when analyzing the credit. Although the authors agree that COMPUSTAT data are not the best data, they disagree among themselves about the importance of this issue. We acknowledge that statistical methods can be used to help address this issue of measurement error, but the success of these methods is difficult to assess. We conclude that, because the most appropriate data were not used in these studies, uncertainty remains about the responsiveness of spending to the credit. The methodological limitations that we identified were not addressed in the comments of all the authors because they were not relevant to every study. The authors who did comment disagreed about the importance of the methodological limitations. One author who addressed the importance of correctly incorporating the features of the tax code said she believed that some of the studies’ estimates of the effect of the credit were overstated because the method of estimation excluded tax preferences available for investments other than research.
Another author commented that the sensitivity of the estimates to assumptions about taxpayer expectations accounted for the difference in estimates across the studies. However, two authors who agreed with our identification of the methodological limitations in their work did not believe that the limitations had a significant effect on their estimates. Finally, the authors who commented agreed that analyzing the credit at the firm level rather than at the industry level produces more accurate estimates. However, one author said that he did not believe that his industry level estimates would change significantly if they were based on analysis at the firm level. As explained in appendix I, we found that estimates reported in the studies varied significantly when authors employed different assumptions about the data and taxpayer behavior. This sensitivity of the estimates to authors’ assumptions leads us to conclude that much uncertainty remains about the effect of the credit on research spending. We are sending copies of this report to pertinent congressional committees, the Secretary of the Treasury, KPMG Peat Marwick, the individual authors, and other interested parties. Copies will be made available to others upon request. The major contributors to this report are listed in appendix II. If you have any questions, please call me on (202) 512-9044. We classified the studies of the effectiveness of the research tax credit according to the level of aggregation at which the data are analyzed and the method used to measure the incentive provided by the credit. The studies analyze the credit using firm level data or using data aggregated to the industry level. The incentive provided by the credit is measured by a categorical or “dummy” variable, or by a variable measuring the “tax price” of research and development (R&D). 
The categorical variable measures the change in R&D spending due to the presence or absence of the tax credit or to the ability of firms to use the credit, while the tax price variable measures the change in spending due to the effect of tax policy on the cost of R&D. We reviewed the six studies cited by KPMG Peat Marwick in their report. We also reviewed two recent studies of the credit’s effectiveness that were not cited by KPMG Peat Marwick. The following summarizes the studies and presents our evaluation of them. Martin Neil Baily and Robert Z. Lawrence use National Science Foundation (NSF) data to examine the effect of the credit for 1981 through 1985 in their 1987 study, and for 1981 through 1989 in their 1992 study. The 1987 study analyzes the credit using a dummy variable that indicates the years in which the credit was in effect, while the 1992 study uses a variable that reflects changes in the credit’s incentive due to changes in the tax law. Both studies produce essentially the same finding: the percentage increase in R&D spending in response to each percentage decrease in the price of R&D—the price elasticity of R&D—is approximately equal to one. Using this elasticity, Baily and Lawrence estimate that the credit generated about two dollars of R&D for each dollar of tax revenue forgone. Theofanis P. Mamuneas and M. Ishaq Nadiri use industry level data for 1956 through 1988, chiefly drawn from the Bureau of Labor Statistics and NSF. Their method is to construct a rental price variable for R&D capital that reflects the research tax credit and the provisions for the immediate expensing of research expenditures. To construct this variable, the authors acknowledge that they assume that the firms in their industries have sufficient tax liability to claim the credit, that their spending exceeds the base amount, and that spending is less than twice the base amount.
Their estimates of price elasticities range from –1.0 for the three aggregate industries of textiles and apparel; lumber, wood products, and furniture; and other manufacturing to –0.94 for scientific instruments. On the basis of these elasticities, they calculate that the average additional research spending stimulated per dollar of revenue cost was about 95 cents for the period 1981 to 1988. James R. Hines’ study uses firm level data from COMPUSTAT for 1984 through 1989. His method is to construct a tax price variable that measures how the costs of R&D are affected by the rules for allocating R&D expenses between U.S. and foreign sources under section 1.861-8 of U.S. Treasury regulations. His tax price does not include the research tax credit or other R&D related features of the tax code. Hines’ preferred estimates of the R&D price elasticity range from –1.2 to –1.6. However, when he increases his sample size to include firms previously excluded due to merger activity, these elasticity estimates drop to a range of –0.5 to –0.6. Also, the elasticities decrease to –0.5 to –0.9 when Hines changes his assumptions about how firms allocate their research expenses. Hines does not apply these elasticities to the credit or calculate how much spending is induced by the credit. Bronwyn H. Hall uses firm level data from COMPUSTAT for 1977 through 1991. Her tax price variable measures how the research tax credit and expensing provisions affect the cost of R&D. Hall estimates a short-run price elasticity of R&D of –1.5 and a long-run price elasticity of –2.7. However, she advises that the long run elasticity be viewed with caution, as it is likely to be “quite imprecise.” Hall estimates that the additional spending induced by the credit in the short run was $2 billion per year, while the tax revenue cost was about $1 billion per year. Philip Berger’s study uses firm level data from COMPUSTAT for 1975 through 1989. 
He measures the effect of the credit using a dummy variable that indicates the years in which a firm is able to use the credit, i.e., the firm has a positive tax liability in the current or preceding 3 years. Berger uses the results of this analysis to estimate that the credit induced $2.70 billion of additional spending per year from 1982 through 1985. He compares this yearly increase to a yearly revenue cost of $1.55 billion to conclude that additional spending per dollar of forgone revenue was $1.74 during 1982 through 1985. Although Berger does not calculate the amount of spending per dollar of forgone revenue for years after 1985, his study shows that the credit was less effective in later years. C. W. Swenson’s study uses firm level data from COMPUSTAT for 1975 through 1988. He also uses a dummy variable that indicates the years in which a firm is able to use the credit. However, the ability to use the credit in his study depends not only on current tax status but also on future tax status and the firms’ planned R&D spending. Swenson estimates that total additional spending induced by the credit was $2.08 billion during 1981 through 1985. Swenson does not compare this estimate to the revenue cost. Janet W. Tillinger’s study uses firm level data drawn chiefly from COMPUSTAT for 1980 through 1985. She measures the effect of the credit using a dummy variable that indicates the years in which firms have research spending that exceeds the base amount. Tillinger uses the results of this analysis to estimate that the credit induced about 19 cents of increased spending per dollar of forgone revenue for 1981 through 1985, which she notes is at the lower end of the estimates from our 1989 study. Tillinger also finds that the effectiveness of the credit varies by the type of firm. 
When the firms are classified according to the opportunity costs of alternatives to R&D investment like the payment of dividends, she finds that the additional spending ranges from 8 cents to 42 cents per dollar of forgone revenue. The studies reviewed above provide mixed evidence for claims about the amount of spending induced by the credit per dollar of forgone revenue. Of the six studies cited by KPMG Peat Marwick, three studies (the two by Baily and Lawrence, and Hall’s study) support the claim that each dollar of tax revenue stimulated about two dollars of additional research spending. Hines’ study reports a price elasticity of research spending that, if applied to the research tax credit, is likely to be consistent with the finding that additional spending exceeds the revenue cost. Two studies cited by KPMG Peat Marwick, however, may not support the claim that induced research spending exceeds the revenue cost of the credit. Swenson’s study estimates that the credit induced additional spending of $2.08 billion from 1981 through 1985. He notes that his estimate is “comparable to . . . GAO estimates of $1 billion to $2.9 billion for the same period.” Swenson states that he does not calculate a bang-per-buck measure because he does not have access to the taxpayer data necessary to make this calculation. However, Swenson states that his estimate of additional spending is not likely to support the claim that the spending stimulated by the credit exceeded its revenue cost. Berger’s study estimates that additional spending exceeded revenue cost in the period 1982 through 1985, but the study may not support this claim in the years after 1985. Berger does not calculate a bang-per-buck measure for years after 1985. However, his study does show that the credit was less effective in these years and that the credit was not a statistically significant determinant of R&D spending in the years after 1986. 
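The two summary measures that recur throughout these studies, the price elasticity of R&D and the bang-per-buck ratio, can be sketched as follows; the figures used reproduce numbers already quoted above (the Baily-Lawrence elasticity of about one and Berger's $2.70 billion of induced spending against $1.55 billion of revenue cost), not new estimates:

```python
# Two summary measures used in the studies discussed above.
def spending_response(price_elasticity, pct_price_change):
    # Approximate percentage change in R&D spending implied by a price
    # elasticity for a given percentage change in the after-tax price of R&D.
    return price_elasticity * pct_price_change

def bang_per_buck(induced_spending, revenue_cost):
    # Additional research spending stimulated per dollar of forgone revenue.
    return induced_spending / revenue_cost

# An elasticity of about -1.0 (roughly the Baily-Lawrence finding) implies
# that a 10 percent cut in the price of R&D raises spending by about 10 percent.
print(spending_response(-1.0, -10.0))

# Berger's 1982-1985 figures: $2.70 billion of induced spending per year
# against $1.55 billion of forgone revenue per year.
print(round(bang_per_buck(2.70, 1.55), 2))
```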
The two studies that were not cited by KPMG Peat Marwick do not support the claim that induced spending exceeded the revenue cost of the credit. The Mamuneas and Nadiri study estimates that the credit stimulated additional spending that was slightly less than the revenue cost during 1981 through 1988, while the Tillinger study estimates that additional spending was significantly less than revenue cost during 1981 through 1985. Most of the studies we reviewed use more sophisticated statistical methods and more years of data than prior studies. For example, most of the recent studies use methods that attempt to distinguish the credit from other factors that influence research spending, such as market size and the availability of investment funds. Some studies also include the influence of taxpayers’ expectations about factors, such as the future tax status of firms, when determining the effect of the credit on current spending. Nevertheless, despite these advantages over prior studies, these studies have data and methodological limitations that are significant enough to lead us to conclude that much uncertainty remains about the true responsiveness of research spending to tax incentives. None of the studies use the best data for assessing the effect of the credit. They all use publicly available COMPUSTAT or NSF data, which are not the most appropriate data for this purpose. The incentive provided by the credit depends on companies’ ability to earn credits by having qualified research spending that exceeds the base amount, and to claim credits by having tax liabilities. Information on qualified research spending and tax liabilities can be most accurately determined from confidential IRS data. The publicly available data will not be as accurate because they use definitions of research spending and tax liabilities that are different from those used by IRS. These tax return data were unavailable to these researchers.
In her study, Hall recognizes the limitations of publicly available data and attempts to correct the errors in her measurements. However, it is difficult to determine how successful her efforts are without repeating her analysis using the tax return data. In any case, the estimates of all the studies that we reviewed would be more reliable if they were based on IRS data. The tax price variables and the dummy variables used in the studies to capture the incentive provided by the credit depend on companies’ ability to earn credits and claim them against their tax liabilities. COMPUSTAT and NSF data do not accurately reflect credits earned and claimed. The ability to earn credits depends on the relationship of qualified research spending to the base amount. COMPUSTAT and NSF data do not accurately reflect this relationship because both data sources include spending that does not qualify for the credit. Most notably, spending reported by COMPUSTAT includes spending overseas that would not be qualified research spending. In our 1989 report, we compared COMPUSTAT data with tax return data and concluded that COMPUSTAT data are not a suitable proxy for tax return data when analyzing the credit. For example, when we compared the growth rate of COMPUSTAT research spending with qualified research spending for a sample of firms contained in both the COMPUSTAT database and IRS files, we found that the rates varied considerably over the period 1981 through 1985. Qualified spending grew 1.46 times as fast as COMPUSTAT spending in the 1980 to 1981 period, but only 0.72 times as fast in the 1983 to 1984 period. The relationship between spending and the base using COMPUSTAT may not accurately reflect the relationship using tax data, and, therefore, both tax price variables and dummy variables are likely to be inaccurate. The ability to claim credits depends on the tax status of the firms. 
COMPUSTAT contains information on taxable income and loss carryforwards, but studies have shown that COMPUSTAT does not always accurately or consistently reflect IRS data. Furthermore, COMPUSTAT data contain no information on the general business credit, which limits the ability of companies to claim the credit. Again, because both the tax price variables and the dummy variables depend on the ability of firms to claim the credit, we conclude that they will be measured inaccurately when based on COMPUSTAT data. The reliability of the Baily and Lawrence studies and the Mamuneas and Nadiri study is also limited by the level of aggregation at which the data are analyzed. Their analyses of the credit at the industry level are unlikely to produce very precise measures of the credit’s effect. Their analyses do not reflect the different incentives that companies face and the different responses to these incentives. Industries will include firms that earn no credit because their spending is less than the base, firms that cannot claim the credit because they have no tax liability, and firms subject to the 50-percent base limitation. A measure that assigns the same incentive to all these firms will not capture these differences and is not likely to yield precise or reliable estimates of the credit’s effect. The reliability of the studies that we reviewed is also limited by the methods used to measure the incentive provided by the credit. The studies use measures of the tax incentive that do not incorporate important interactions of the research tax credit with other features of the tax code. For example, Hines studies the effect of the section 1.861-8 allocation rules on research spending but does not analyze the effect of other features of the tax code such as the research tax credit. Hall, on the other hand, analyzes the research tax credit but does not incorporate the section 1.861-8 allocation rules in her study. 
Hall believes that including the rules in her analysis would not make “an enormous difference” because the firms subject to the rules probably represent only a small part of her sample. However, she does say that including the rules would make her estimates more precise. Hines states that it is “difficult to know for sure” the effect on his estimates of including interactions with other features of the code. The estimates in some of these studies are also uncertain because they are sensitive to assumptions made about the data and taxpayer behavior. Hines’ estimate of the effect of tax policy on spending is reduced by half when he includes more firms in his sample or changes his assumptions about how companies allocate R&D expenses. Hall notes that estimation at the firm level involving investments like R&D is difficult and sensitive to assumptions made when specifying the models. This sensitivity of the results is also illustrated by the three studies using dummy variables where differences in the approach to modeling taxpayer behavior and measuring the effect of the credit yield very different estimates. Although some of the studies attempt to measure the degree of this sensitivity and correct for it, the success of these efforts is difficult to assess. The authors of these studies themselves, in many cases, advise that their results be used with caution and recognize that their estimates would benefit from further research. 
For example, when describing her estimates of the spending induced by the credit, Hall states that “it needs to be kept firmly in mind that my tax estimates are not likely to be as good as those constructed using IRS data.” She also mentions, in the 1992 version of her paper, that her analysis “needs more investigation for robustness over time and industry.” When discussing the limitations imposed by not including interactions with other aspects of the tax code, Hines notes that his results should be used with caution because of these “restrictive assumptions built into the estimated R&D responses to tax changes.” The current version of the credit has not been studied extensively, and little is known about the actual incentives it provides. The research tax credit was fundamentally restructured in 1989. Hall’s study, which spans the years 1977 through 1991, is the only study we reviewed that covers any tax years after the credit was changed. However, her data contain only 2 years—1990 and 1991—under the revised credit structure. Hall notes that her estimate of additional spending for these years amounts to about 10 percent of the total R&D and that this “amount is almost too large to be credible . . . and deserves further investigation as more data become available.” She indicated that her estimates of additional spending may be less reliable because she did not have data on the tax status of firms after 1991 that were needed to measure the incentive provided by the revised credit.
James Wozny, Assistant Director, Tax Policy and Administration Issues
Kevin Daly, Senior Economist
Anthony Assia, Senior Evaluator
Pursuant to a congressional request, GAO reviewed eight studies of the research tax credit, focusing on the: (1) adequacy of the studies' data and methods to determine the amount of research spending stimulated per dollar of foregone tax revenue; and (2) other factors that determine the credit's value to society. GAO found that: (1) four studies supported the claim that, during the 1980s, the research credit stimulated research spending that exceeded its revenue cost, but the other four studies did not support the claim or were inconclusive; (2) all of the studies had significant data and methodological limitations that made it difficult to evaluate industry's true responsiveness to the research tax credit; (3) the studies did not use tax return data to determine the credit's incentive because the authors did not qualify for access to such data; (4) publicly available data were not a suitable substitute for the tax return data because public sources used different definitions of taxable income and research spending; (5) the studies' analytical methods, such as use of industry aggregates and failure to incorporate important tax code interactions, made their findings imprecise and uncertain; (6) there was little research on the latest design of the credit to determine its effect on incentives and costs; (7) the studies' evidence was not adequate to conclude that a dollar of research tax credit would stimulate a dollar of additional short-term research spending or about two dollars of additional long-term research spending; and (8) to measure the credit's true impact, the studies would need to assess the research's net benefit to society, resource costs of research, and administrative, compliance, and efficiency costs of funding the credit.
Two-thirds of all crude oil consumed in the United States is used by the transportation sector, with gasoline accounting for two-thirds of that total. The second largest consumer of crude oil is the industrial sector, including refineries and petrochemical industries, which account for another 25 percent of that total. In the residential and commercial sectors, crude oil consumption was as high as 15 percent of that total in 1970 but had fallen to 6.5 percent by 2004. Similarly, the burning of crude oil to generate electricity peaked in 1975 at 8.6 percent, declining to 2.5 percent in 2004. Crude oil is supplied through onshore and offshore domestic production and international imports. In 2005, the United States produced 6.8 million barrels per day (bpd), a 5.5 percent decrease from 2004. California is currently the fourth largest oil producer (including onshore and offshore production) in the United States, behind Louisiana, Texas, and Alaska, but its production has declined at a rate of 2.4 percent per year for the past 10 years. California produced 731,150 bpd in 2004 (the most recent year for which numbers are available). Figure 2 shows the decline in California crude oil production and the quantity of various grades of crude oil produced in California. In 2005, the United States imported 13.5 million bpd, or 27.1 percent of total global oil imports. The EIA estimates that California imported 40.7 percent of all crude processed by the state’s refineries, with the bulk of imports coming from Saudi Arabia, Ecuador, Iraq, and Mexico. The remainder of California’s crude oil was either produced in state or transported by tanker from Alaska. Figure 3 shows the sources of California’s crude oil and the state’s major refining centers as of 2005, the last full year of data available, and figure 4 shows the trend of California’s crude oil supply over the past two decades.
WTI crude oil is a widely traded oil that is commonly used as a benchmark for measuring crude oil prices in the United States. Prices for WTI are collected at Cushing, Oklahoma. Crude oils delivered by pipeline generally use WTI first month delivery (WTI crude oil delivered 1 month from a specific date) as a price benchmark, and crude oils delivered by tankers use WTI second month delivery (WTI crude oil delivered 2 months from a specific date) as a price benchmark. Crude oils are commonly classified by their density and sulfur content. The gravity of a crude oil is specified using the American Petroleum Institute (API) gravity standard, which measures the weight of crude oil in relation to water (water has an API gravity of 10 degrees). As shown in table 1, crude oil is generally classified as heavy (API gravity of 18 degrees or less), intermediate (API gravity greater than 18 and less than 36 degrees), and light (API gravity of 36 degrees or greater). In addition, crude oils vary by their sulfur content—crude oil is classified as sweet when its sulfur content is 0.5 percent or less by weight, and sour when its sulfur content is greater than 1 percent. Other natural characteristics, such as the presence of heavy metals and level of acidity, are also taken into account when classifying crude oils. In general, heavier and more sour crude oils require more complex and expensive refineries to process the oil into usable products but are less expensive to purchase than light sweet crude oils. Based on the API’s classification, California crude oils are almost all in the heavy and intermediate range. WTI, on the other hand, is a very light oil with an API gravity of just under 40. Table 1 shows the API classification and the API gravity of California’s three primary crude oils. The sale of crude oil primarily occurs through one of three types of transactions: a spot transaction, a contract arrangement, or a futures contract.
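The classification rules just described can be expressed as a small function; the thresholds are those given in the text, and the text leaves sulfur contents between 0.5 and 1 percent unlabeled, which the sketch below preserves:

```python
def classify_crude(api_gravity, sulfur_pct):
    # API gravity bands from the text: heavy (18 degrees or less),
    # intermediate (greater than 18 and less than 36), light (36 or greater).
    if api_gravity <= 18:
        density = "heavy"
    elif api_gravity < 36:
        density = "intermediate"
    else:
        density = "light"
    # Sulfur bands from the text: sweet (0.5 percent or less by weight),
    # sour (greater than 1 percent); oils in between are not labeled here.
    if sulfur_pct <= 0.5:
        sulfur = "sweet"
    elif sulfur_pct > 1.0:
        sulfur = "sour"
    else:
        sulfur = "unclassified"
    return density, sulfur

# WTI, with an API gravity of just under 40 degrees and low sulfur,
# is a light sweet crude.
print(classify_crude(39.6, 0.3))
```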
Spot transactions are agreements to sell or buy one shipment of oil at a price agreed upon at the time of the arrangement. Spot transaction prices in various regional markets are available through private publishers that monitor and record market transactions and prices. Oil is often traded in long-term contracts at prices that are tied to a market indicator, such as the spot market or the futures market. While most contract prices are set in reference to a market index or a benchmark crude oil, some domestically produced crude oils are also sold using posted prices, which are usually set by buyers, refiners, and gatherers, and apply to a particular crude stream (a crude oil or blend of oils of standardized quality). International crude oils sold through contract arrangements are generally priced using a formula that includes a base price, which is referenced to a market indicator, plus or minus a quality adjustment. A futures contract is a standardized agreement that obligates the holder of the contract to make or accept delivery of a specified quantity and quality of a crude oil during a specific month at an agreed upon price. Futures contracts are bought and sold on a commodities exchange, such as the New York Mercantile Exchange (NYMEX). However, unlike spot transactions and contract arrangements, futures contracts very rarely result in the delivery of physical barrels of oil. Instead, the contract may be satisfied by a cash settlement prior to contract expiration by selling or purchasing other contracts with terms that offset the original contract or by exchanging a futures contract for the commodity. From December 1987 to August 2006, price differentials between WTI and California crude oils fluctuated significantly, generally increasing since mid-2004 and reaching a high in January 2005. 
This recent increase in crude oil price differentials coincided with a general increase in world crude oil prices and reflected a more rapid increase in WTI prices relative to prices of the three California crude oils we evaluated (Kern River, Thums, and Line 63). Large price differentials also occurred in 2004, 2005, and 2006 for heavier crude oils imported into California, such as Maya and Arab Heavy. Since January 2005, the price differentials between WTI and these heavier California and imported crude oils have fallen somewhat from their peak in 2005 but remain large by historical standards. During the period from December 1987 through August 2006, all crude oil prices we evaluated tended to follow similar patterns, rising and falling in concert. However, the rate of increase or decrease in prices often varied by crude oil type and, consequently, the price differentials between these crude oils fluctuated. For example, California crude oil prices rose and fell in relation to WTI during the same period, with the higher quality Line 63 mirroring the price of WTI more closely than the lower grade Kern River and Thums. Specifically, the price differential between WTI and Kern River ranged from a low of $3.20 in July 1995 to a high of $14.99 in January 2005. Similar variations also occurred for the WTI-Thums price differential, which fluctuated between a low of $2.47 in June 1995 and a high of $13.92 in February 2005. For Line 63, the price differential was lowest in September 2000 at $0.84 and highest in January 2005 at $9.57. Fluctuations in prices for WTI, Kern River, Thums, and Line 63, as well as price differentials between WTI and the three California crude oils, can be seen in figure 5.
While numerous fluctuations in crude oil prices and crude oil price differentials have occurred over the 20-year period, global crude oil prices rose precipitously in mid-2004, with the price of WTI rising from $40.79 in July 2004 to $75.83 in August 2006, an increase of about 86 percent. This general rise in oil prices also occurred in California crudes, where prices for Line 63 rose from $41.44 in August 2004 to $70.72 in August 2006, an increase of about 71 percent, followed by Kern River and Thums, which rose from $40.45 and $41.41, respectively, in October 2004, to $63.32 and $65.02, respectively, in August 2006, both increases of about 57 percent. Because WTI rose faster than California crude oils, price differentials between California crude oils and WTI also increased during this period. The price differential for Line 63 rose from $6.54 in September 2004 to a peak of $9.61 in December 2004, an increase of about 47 percent. The price differential between Kern River and WTI rose from $5.95 in June 2004 to a peak of $14.99 in January 2005, an increase of about 152 percent. The price differential for Thums and WTI followed a similar pattern, rising from $7.13 in August 2004 to a peak of $13.92 in February 2005, an increase of about 95 percent. Crude oils imported into California, including Arab Heavy and Maya, followed a similar pattern of fluctuating prices and increasing price differentials during the same recent period. These intermediate crude oils compete with Kern and Thums in the California marketplace because of their similar quality and characteristics. Price differentials between WTI and Arab Heavy increased from $7.84 in June 2004 to a high of $16.24 in January 2005, an increase of about 107 percent. Price differentials for Maya and WTI were $8.39 in June 2004 and rose to a peak of $18.68 in March 2005, an increase of about 123 percent.
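The percentage changes cited in this section follow directly from the dollar figures given in the text; a few can be checked with a simple calculation:

```python
def pct_change(start, end):
    # Percentage change from a starting price (or price differential)
    # to an ending one, as used in the figures cited in the text.
    return 100.0 * (end - start) / start

# WTI price, July 2004 to August 2006: about an 86 percent increase.
print(round(pct_change(40.79, 75.83)))

# WTI-Kern River differential, June 2004 to January 2005: about 152 percent.
print(round(pct_change(5.95, 14.99)))

# The same measure expresses declines, e.g., the Kern River differential
# from its January 2005 peak to August 2006: about a 19 percent decrease.
print(round(pct_change(14.99, 12.17)))
```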
Figure 6 provides an overview of the rise in prices for WTI, Arab Heavy, and Maya and price differentials between WTI and these imported crude oils from July 1988 to August 2006. Since mid-2005, price differentials for the three California crude oils and the two imported crude oils have moderated somewhat but remain high by historical standards. For example, the price differential for Kern River fell to $12.17 in August 2006 (the last month for which data were available), a decrease of about 19 percent from its high of $14.99 in January 2005. For the lighter California crude oil, Line 63, the price differential fell to $5.11 in August 2006, a decrease of about 47 percent from a peak of $9.61 in December 2004. The price differentials for Arab Heavy and Maya followed similar patterns. For example, the WTI-Arab Heavy price differential fell to $12.56 in August 2006, a decrease of about 23 percent from its high of $16.24 in January 2005. Nonetheless, all the crude oil price differentials between WTI and the heavier crude oils we evaluated remain high by historical standards. According to EIA officials and other crude oil market experts we interviewed, a range of market-based factors have affected recent crude oil price differentials. First, changing conditions and events in the global crude oil market influenced the relative prices of light and heavy crude oils, causing crude oil differentials between WTI and heavier crude oils to increase. Second, local and regional events affected prices in specific regional crude oil markets and, in turn, those crude oils' price differentials with WTI. This was particularly evident in the Rocky Mountain region in early 2006, when an increase in crude oil supplies and a lack of crude oil transportation capacity caused a decrease in local prices and an increase in the price differential. 
In addition, the state of California has alleged in the past that crude oil producers in California manipulated prices lower to avoid making royalty payments. While most of the officials and experts we interviewed did not believe that California crude oil producers have recently engaged in this type of price manipulation, we cannot rule out this possibility or other possible factors that we could not observe that could explain some of the changes in price differentials. EIA and other officials we interviewed told us that price differentials between light and heavier crude oils are driven primarily by supply and demand economics in the global crude oil and petroleum products markets and stated that these factors have influenced recent trends in price differentials between heavy California crude oils and the light crude oil benchmark WTI. For example, increases in the supply of light crude oil result in lower prices for those crude oils, which decreases the price differential relative to heavy crude oils, such as those typically produced in California. Conversely, an increase in the supply of heavy crude oil can result in lower prices for those crude oils, thus increasing the price differential between heavy crude oils and WTI. For example, according to EIA officials, between January 2003 and January 2005, world demand for crude oil increased substantially, particularly in China and the United States, and in response, crude oil producers in the Middle East increased their production of heavy crude oil to meet the rising overall demand for crude oils. EIA officials and others stated that this caused prices of WTI to rise at a faster rate than heavy crude oils and contributed to rising price differentials between WTI and heavier crude oils such as those produced in California. 
EIA officials also told us that when crude oil prices increase, as they have in recent years, prices of lighter petroleum products, such as gasoline and diesel, rise faster than prices of residual fuel oil and other heavier petroleum products because the latter products face greater competition from coal and natural gas, which are not initially affected by increases in crude oil prices. Because heavier crude oils typically generate a greater proportion of heavier petroleum products than do lighter crude oils, the value of the heavier crude oils falls relative to lighter crude oils. This causes the price differentials between WTI and heavier oils to rise further. Both of these factors helped push the price of heavy crude oils lower in relation to light crude oils. Specifically, between January 2003 and January 2005, the price of WTI increased by about 42 percent, while the price of Kern River increased by about 16 percent. Consequently, the price differential between these two crude oils expanded from about $6 to about $15. Local and regional events, such as hurricanes off the U.S. Gulf Coast and refinery outages, can cause fluctuations in the price of crude oils produced in the region and benchmark crude oils. Consequently, these events can increase or decrease price differentials. These events are tracked by analysts in the private sector crude oil markets, financial markets, and the federal government. EIA examined 72 different events that occurred from 1970 through the end of 2005 and their effects on crude oil prices, such as the Organization of Petroleum Exporting Countries oil embargo in 1973, the terrorist attacks of September 11, 2001, and the multiple hurricanes that struck the U.S. Gulf Coast in 2004 and 2005. For example, when Hurricane Ivan hit the Gulf of Mexico region in September 2004, oil tankers importing crude oil into the Gulf were delayed, and oil producers were forced to evacuate 3,000 employees from the region. 
MMS estimated that Hurricane Ivan caused crude oil production to decrease by 61 percent and resulted in spikes in the price of WTI. This would have increased the price differential between WTI and other crude oils, including those California crude oils we evaluated. In addition, in early 2006, the price differential of local crude oils in the Rocky Mountain region rose to an unusual extent. The increase was most pronounced in the price differential between WTI and Wyoming Sweet, a regionally produced crude oil with a gravity and sulfur content very similar to WTI. From 1988 through mid-2005, the price of Wyoming Sweet was roughly equal to WTI, with price differentials ranging between zero and $3. However, beginning in January 2006, the price of Wyoming Sweet dropped suddenly. Consequently, the price differential between Wyoming Sweet and WTI increased from about $2 in the beginning of 2004 to over $24 in February 2006. In contrast to California, where crude oil prices and price differentials to WTI have experienced regular fluctuations, there was no historical precedent for crude oil price differentials of this magnitude occurring in the Rocky Mountain region. Although the Wyoming Sweet price differential has since fallen to less than $10, this is still unusually high for this region. Figure 7 shows prices for WTI and Wyoming Sweet and their price differential between December 1987 and August 2006. State officials and officials representing crude oil producers in the region told us that the principal cause of the expanding Wyoming Sweet price differential was inadequate crude oil transportation infrastructure. In 2005, crude oil production in this region increased, and Canadian producers also increased shipments into the region. 
However, the existing pipeline, railroad, and trucking infrastructure for transporting crude oil was insufficient to move this large influx of crude oil out of the Rocky Mountain region to other markets where it could have received a higher price. The resulting oversupply of crude oil in a region with comparatively low demand prevented the price of the regional crude oils from rising in line with WTI prices, causing a large price differential. State officials we interviewed predicted that, until transportation infrastructure can be expanded, price differentials for oils produced in the Rocky Mountain region will continue to be above historical levels. Market manipulation is a final factor that could cause crude oil price differentials to increase. In the past, the state of California alleged that crude oil companies in California manipulated crude oil prices to lower their royalty payments to the federal government. While we did not find any evidence that any market players had manipulated crude oil prices in California or elsewhere during the recent period of increasing crude oil price differentials, we cannot rule out this or other possible factors or events that we could not observe that could explain some of the changes in price differentials. The sales price of crude oil is an important variable in the equation that determines the amount of royalties paid by oil companies that produce crude oil on federal lands. Royalty revenues are calculated using the following formula:

Royalty Revenues = Volume of Crude Oil Sold x Sales Price x Royalty Rate

Consequently, changes in either the sales price or the volume sold can greatly affect the total amount of royalties oil companies pay and the states receive. Historically, posted prices were widely accepted as the true market value and the measure that should be used in determining royalty payments by crude oil producers, refiners, state governments, and the federal government. 
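The royalty revenue formula can be expressed directly as code. The volume, price, and rate below are illustrative assumptions for the sake of the example, not figures from MMS:

```python
def royalty_revenue(volume_sold_bbl, sales_price, royalty_rate):
    """Royalty Revenues = Volume of Crude Oil Sold x Sales Price x Royalty Rate."""
    return volume_sold_bbl * sales_price * royalty_rate

# Illustrative values only: 100,000 barrels sold at $40.00 per barrel
# with a hypothetical 12.5 percent royalty rate
print(royalty_revenue(100_000, 40.00, 0.125))  # 500000.0
```

Because royalties scale linearly with the sales price, any understatement of that price reduces royalty revenues by the same proportion, which is why the valuation of crude oil in the formula matters so much.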
In litigation starting in 1975 and continuing through 1995, the state of California and the city of Long Beach alleged that seven major oil-producing companies had conspired to keep posted prices low and that their posted prices did not reflect the true market value of their crude oil, thus illegally reducing the amount of royalties the oil companies paid. Six of the companies eventually settled their cases, while the seventh went to trial and was exonerated. Although MMS was not a party to this litigation, it continued to independently evaluate whether posted prices reflected market value. In June 1994, MMS formed an interagency task force with some of the agencies that had previously reviewed the issue, including the Departments of Energy, Justice, and Commerce, to evaluate documents from the litigation and other data and determine whether the companies had wrongfully undervalued crude oil to avoid paying royalties. In May 1996, the task force concluded, among other things, that (1) oil companies in California typically received proceeds higher than posted prices and, therefore, royalties were underpaid and (2) much of the crude oil produced in California was not sold as contemplated in the royalty revenue formula, but rather moved through various transfers or exchanges either within a company that owned both the production and refinery operations, or between two companies for purposes of reducing transportation costs. Consequently, the reported sale price was frequently lower than actual market prices. In March 2000, MMS changed its regulations for valuing crude oil from federal lands to address the conclusions of the task force. Among other things, the regulations changed the method for determining the value of crude oil sold in a “non-arm’s-length” transaction––crude oil transferred within an oil company between its production and refining affiliates. 
Currently, royalties for these non-arm’s-length transactions are calculated using a sales price that is imputed based on the price of Alaska North Slope (in California) or NYMEX (for the rest of the country) and adjusted for differences in quality. In arm’s-length transactions––sales between two separate and unaffiliated companies––the actual sale price, and not the posted price, is used to calculate royalties. In the course of our work, most of the officials and experts we interviewed thought the new MMS regulations were effective in addressing this problem; they did not believe that crude oil producers were engaging in this sort of price manipulation during the recent period of increasing crude oil price differentials, nor did they provide any evidence of such manipulation. However, we cannot rule out this or other possible factors or events that we could not observe that could explain some of the changes in price differentials. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees and Members of Congress, the Secretary of Energy, and the California State Controller’s Office. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or wellsj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
The objectives of this review were to determine (1) the extent to which crude oil price differentials in California have fluctuated over the past 20 years and (2) the factors that may explain the recent changes in the price differential between California’s crude oil and others. As part of the second objective, in order to provide additional context to the issue of price differentials in California, we also evaluated the unusually high crude oil price differentials that occurred in the Rocky Mountain region in late 2005. To determine the extent to which California crude oil price differentials have fluctuated over time, we obtained data on the spot prices of the North American benchmark crude oil, West Texas Intermediate (WTI), and three California crude oils: two heavy crude oils (Kern River and Thums) and an intermediate crude oil (Line 63). We also obtained price data for two heavy crude oils that are imported into California in large volumes: Arab Heavy, a Saudi Arabian crude oil, and Maya, a crude oil imported from Mexico. These data included prices from December 1987 through August 2006. While most of the data we obtained listed a monthly average price, some crude streams used daily or weekly averages. In these instances, we calculated the monthly average price in order to make appropriate comparisons. We used these data to calculate price differentials by subtracting the price for the subject crude oil from the price of the benchmark crude oil and analyzing these differentials for trends over time. We interviewed officials from the Energy Information Administration (EIA), Minerals Management Service (MMS), and the California Energy Commission (CEC) to get background information on the major crude oils produced in California and imported into the region. 
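The monthly averaging and differential calculations described in this methodology can be sketched as follows. The function names and the sample quotes are hypothetical, for illustration only; they are not the actual data series we used:

```python
from collections import defaultdict
from statistics import mean

def monthly_average(dated_prices):
    """Average daily (or weekly) spot quotes into monthly figures.

    dated_prices: iterable of (date, price) pairs where date is 'YYYY-MM-DD'.
    Returns a {'YYYY-MM': average price} mapping.
    """
    by_month = defaultdict(list)
    for date, price in dated_prices:
        by_month[date[:7]].append(price)
    return {month: mean(prices) for month, prices in by_month.items()}

def differential(benchmark_by_month, subject_by_month):
    """Price differential: benchmark price minus subject crude price, per month."""
    return {m: benchmark_by_month[m] - subject_by_month[m]
            for m in benchmark_by_month if m in subject_by_month}

# Hypothetical daily quotes for a benchmark and a subject crude stream
wti = monthly_average([("2004-06-01", 38.00), ("2004-06-15", 40.00)])
kern = monthly_average([("2004-06-01", 32.50), ("2004-06-15", 33.50)])
print(differential(wti, kern))  # {'2004-06': 6.0}
```

Averaging all series to a common monthly basis before subtracting keeps the comparison consistent across streams that are quoted at different frequencies.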
To identify factors that may explain the recent changes in the California price differentials, we (1) interviewed key officials and experts, (2) reviewed studies on crude oil prices and price differentials, and (3) reviewed historical studies and interviewed agency officials about the history of crude oil price manipulation in California. To better understand the key factors that affect crude oil price differentials in general and specifically in California, we interviewed federal agency officials from EIA and MMS; state agency officials from CEC and the California State Controller’s Office; and experts from organizations representing crude oil producers and refiners, including the California Independent Petroleum Association (CIPA), the Western States Petroleum Association (WSPA), and the Independent Petroleum Association of America. We reviewed studies, reports, and presentations on crude oil pricing and differentials written by or produced for EIA, MMS, CEC, CIPA, and WSPA. We also reviewed a study prepared for the California State Controller’s Office on crude oil price differentials in California, written by IIC Inc., and interviewed its author. To evaluate the issue of crude oil price manipulation in California, we reviewed documents, regulations, and studies from the 1980s and 1990s regarding the history of allegations of oil producers manipulating prices to avoid making royalty payments. We also interviewed officials with the California State Controller’s Office, MMS, CIPA, and WSPA, regarding the history of manipulation in California, and whether they believed or had evidence that such price manipulation might have occurred in the recent period of unusually high price differentials. We did not seek to acquire proprietary records on the prices received for sales of crude oil from crude oil producers or their buyers for this engagement. 
To evaluate the unusually high price differentials in the Rocky Mountain region, we obtained data on the spot price of Wyoming Sweet, a light sweet crude oil similar in quality to WTI. We used these data to calculate price differentials by subtracting the monthly average price for Wyoming Sweet from the monthly average price of WTI and analyzed these differentials for trends over time. To understand the causes of the high price differential in the Rocky Mountain region and to learn what stakeholders in the region are doing to address the issue, we interviewed officials with the Wyoming Pipeline Authority, the North Dakota Petroleum Council, the Interstate Oil and Gas Commission, the Colorado Oil and Gas Commission, and oil producers and refiners in the region. We conducted our work between May and December 2006 in accordance with generally accepted government auditing standards. In addition to the individual listed above, Frank Rusco, Assistant Director; Jeffrey Barron; Casey Brown; Alison O’Neill; Kim Raheb; Barbara Timmerman; and Wilda Wong made key contributions to this report.
California is the nation's fourth largest producer of crude oil and has the third largest oil refining industry (behind Texas and Louisiana). Because crude oil is a globally traded commodity, natural and geopolitical events can affect its price. These fluctuations affect state revenues because a share of the royalty payments from companies that lease state or federal lands to produce crude oil is distributed to the states. Because there are many varieties and grades of crude oil, buyers and sellers often price their oil relative to another abundant, highly traded, and high-quality crude oil called a benchmark. West Texas Intermediate (WTI), a light crude oil, is the most commonly used benchmark in the United States. The price difference between a crude oil and its benchmark is commonly expressed as a price differential. In fall 2004, crude oil price differentials between WTI and California's heavier, and generally lower valued, crude oil rose sharply. GAO was asked to examine (1) the extent to which crude oil price differentials in California have fluctuated over the past 20 years and (2) the factors that may explain the recent changes in the price differential between California's crude oil and others. GAO analyzed historical data on California and benchmark crude oil prices and discussed market trends with state and federal government officials and crude oil experts. California crude oil price differentials have experienced numerous and large fluctuations over the past 20 years. The largest spike in the price differential began in mid-2004 and continued into 2005, during which the price differential between WTI and a California crude oil called Kern River rose from about $6 to about $15 per barrel. This increase in the price differential between WTI and California crude oils occurred in a period of generally increasing world oil prices during which prices for both WTI and California crude oils rose. 
Differentials between WTI and other oils also expanded in the same time period. The differentials have since fallen somewhat but remain relatively high by historical standards. Recent trends in California crude oil price differentials are consistent with a number of changing market conditions. First, beginning in mid-2004, Middle East producers began to increase the supply of heavy crude oils in the world marketplace, which helped depress prices for heavy crude oils, including those produced in California, and contributed to the expanding price differential between California crude oils and WTI. Second, the price differential of California crude oils to WTI increased when the rise in global crude oil prices caused prices of light crude oils to increase faster than the prices of heavier crude oils. This occurred because the petroleum products from heavy crude oils compete against other fuels, such as coal. Third, events that affect only regional crude oil markets or individual crude oils can also change price differentials. For example, in September 2004, Hurricane Ivan disrupted crude oil production in the U.S. Gulf Coast region, resulting in decreases in the region's crude oil supply. The resulting scarcity of crude oil in the Gulf Coast region caused the prices of WTI and other regional oils to increase relative to crude oils produced outside the region. This also would have increased the price differentials between WTI and California crude oils. Finally, manipulation of crude oil prices could also affect price differentials, but experts and officials GAO interviewed generally believed that this was not a factor during this recent period.
For the past several years, DOD has planned and budgeted for about 1.4 million active duty military personnel: about 482,400 in the Army; 359,300 in the Air Force; 373,800 in the Navy; and 175,000 in the Marine Corps. These active duty personnel levels have been generally stable since the mid-1990s, when forces were reduced from their Cold War levels of almost 2 million active military personnel. Active duty personnel are considered to be on duty all the time. Congress authorizes annually the number of personnel that each service may have at the end of a given fiscal year. This number is known as authorized end strength. Certain events, such as changes between planned and actual retention rates, may cause differences between the congressionally authorized levels and the actual numbers of people on board at the end of a fiscal year. Table 1 shows the congressionally authorized levels for fiscal years 2000 through 2005 as compared with the services’ military personnel actually on board at the end of fiscal years 2000 through 2004. As table 1 shows, the Army and the Air Force exceeded their authorized end strengths by more than 3 percent in fiscal years 2003 and 2004. The Secretary of Defense has statutory authority to increase the services’ end strengths by up to 3 percent above authorized levels for a given fiscal year if such action is deemed to serve the national interest. In addition, if at the end of any fiscal year there is in effect a war or national emergency, the President may waive end strength authorization levels for that fiscal year. On September 14, 2001, the President declared a state of national emergency and delegated end strength waiver authority to the Secretary of Defense. Since then, the President has annually renewed the national state of emergency as well as end strength waiver authorities specified in Executive Order 13223. 
In January 2004, the Secretary of Defense exercised the President’s authority and temporarily increased the Army’s end strength by 30,000 for fiscal years 2004 through 2009 to facilitate the Army’s reorganization while continuing ongoing operations. While the Secretary of Defense’s goal is ultimately to have the Army return to an active military personnel level of 482,400 by 2009, in October 2004, Congress increased the fiscal year 2005 end strength of the Army by 20,000 personnel, the Marine Corps by 3,000, and the Air Force by 400. Congress also provided authority for additional increases of 10,000 active Army personnel and 6,000 Marines through fiscal year 2009. In contrast, Congress reduced authorized end strength for the Navy by 7,900 personnel from the fiscal year 2004 level. Moreover, Congress directed that for fiscal year 2005, the Army will fund active military personnel increases in excess of 482,400 and the Marine Corps will fund active military personnel increases in excess of 175,000 through either a contingent emergency reserve fund or an emergency supplemental appropriation. Our prior work has shown that valid and reliable data about the number of employees an agency requires are critical in preventing shortfalls that threaten its ability to economically, efficiently, and effectively perform its missions. Although OSD has processes through which it issues policy and budget guidance that set the overall priorities for defense activities, including personnel levels, it does not have a process for comprehensively analyzing the active personnel levels needed to execute the defense strategy within acceptable levels of risk. OSD has not placed an emphasis on reviewing active military personnel requirements, focusing instead on limiting personnel costs in order to fund competing priorities such as transformation. 
The services have processes to establish their active duty requirements; however, service processes vary in their methodologies, and several major reviews are still ongoing and have not fully identified long-term personnel needs. Although OSD has performed some reviews related to active duty levels, it does not review the services’ results in a systematic way to ensure that decisions on the numbers of active military personnel are driven by data that make clear the links between personnel levels and strategic goals, such as the defense strategy. Conducting such a review could enable OSD to more effectively demonstrate how the services’ requirements for active military personnel provide the capabilities to execute the defense strategy within an acceptable level of risk. Further, OSD could provide more complete information to Congress about how active military personnel requirements for each service are changing and about the implications of changes for future funding and budget priorities. The quadrennial review of the defense program planned for 2005 represents an opportunity for a systematic analysis and reevaluation of personnel levels to ensure that they are consistent with the defense strategy. Our prior work has shown that valid and reliable data about the number of employees an agency requires are critical if the agency is to spotlight areas for attention before crises develop, such as human capital shortfalls that threaten an agency’s ability to economically, efficiently, and effectively perform its missions. We have designated human capital management as a governmentwide high-risk area in which acquiring and developing a staff whose size, skills, and deployment meet agency needs is a particular challenge. 
To meet this challenge, federal managers need to direct considerable time, energy, and targeted investments toward managing human capital strategically, focusing on developing long-term strategies for acquiring, developing, and retaining a workforce that is clearly linked to achieving the agency’s mission and goals. The processes that an agency uses to manage its workforce can vary, but our prior work has shown that data-driven decision making is one of the critical factors in successful strategic workforce management. High-performing organizations routinely use current, valid, and reliable data to inform decisions about current and future workforce needs, including data on the appropriate number of employees, key competencies, and skills mix needed for mission accomplishment, and appropriate deployment of staff across the organizations. In addition, high-performing organizations also stay alert to emerging mission demands and remain open to reevaluating their human capital practices. Changes in the security environment and defense strategy represent junctures at which DOD could systematically reevaluate service personnel levels to determine whether they are consistent with strategic objectives. In 1999, Congress created a permanent requirement for DOD to conduct a review of the defense program every 4 years and to report on its findings. During these reviews, DOD is required, among other things, to define sufficient force structure and “other elements” of the defense program that would be required to execute successfully the full range of missions called for in the national defense strategy and to identify a budget plan that would be required to provide sufficient resources to execute successfully these missions at a low-to-moderate level of risk. 
The quadrennial review of the defense program thus represents an opportunity for DOD to review elements related to force structure, such as the total numbers of military personnel, both active duty and reserve, that are needed to execute the defense strategy most efficiently and effectively. Based on the legislative requirements, DOD plans to conduct a quadrennial review in 2005 and publish its next report in 2006. The terrorist attacks of September 11 changed the nation’s security environment. Shortly thereafter, DOD issued a new national defense strategy in its 2001 Quadrennial Defense Review Report. The 2001 report outlined a new risk management framework consisting of four dimensions of risk—force management, operational, future challenges, and institutional—to inform the consideration of trade-off decisions among key performance objectives within resource constraints. According to DOD’s Fiscal Year 2003 Performance and Accountability Report, these risk areas are to form the basis for DOD’s annual performance goals. In November 2004, DOD reported its performance results for fiscal year 2004, noting that it met some of its performance goals associated with these risk management areas. Our prior work suggests that agency leaders can use valid and reliable data to manage risk by highlighting areas for attention before crises develop and to identify opportunities for improving agency results. As agency officials seek to ensure that risk remains balanced across their agencies’ activities and investments, they can adjust existing performance goals. Likewise, they can create new performance goals to gather data about critical activities. 
OSD has recognized that the active and reserve forces have been challenged to provide ready forces for current operations, including high demand for some support skills, such as civil affairs and military police, and is taking steps to achieve a number of objectives, such as improving the responsiveness of the force and helping ease stress on units and individuals with skills in high demand. For example, the Secretary of Defense, in a July 9, 2003, memorandum, directed the services to examine how to rebalance the capabilities in the active and reserve forces. The services had already undertaken a number of steps to address requirements for high-demand skill sets as a part of their ongoing manpower management analyses. For example, in 2002 the Army began planning for fiscal years 2004 through 2009 to address high-demand areas, such as military police. However, the Secretary’s memorandum accelerated the services’ rebalancing efforts, which are critical to establishing requirements for active personnel. OSD provides policy and budget guidance to the services on balancing the costs of active personnel with other funding priorities, such as transformation. Its approach to active personnel levels has been to limit growth and initiate efforts to use current military personnel levels more efficiently. OSD also conducts a number of ongoing and periodic reviews and assessments related to active military personnel levels, although these do not represent a systematic analysis of requirements for active military personnel needed to perform missions in the nation’s defense strategy. For example: The 2001 Quadrennial Defense Review Report identified the defense strategy that guided DOD’s analysis of the force structure. Under the new strategy, DOD reported it needed sufficient forces to defend the homeland, provide forward deterrence, execute warfighting missions, and conduct smaller-scale contingency operations. 
Thus, DOD shifted its force planning approach from optimizing the force for conflicts in two particular regions to providing a range of capabilities for a variety of operations, wherever they may occur. The report concluded that the department’s existing force structure was sufficient to execute the redesigned defense strategy at moderate risk, but it did not refer to a specific OSD-led analysis to reassess the impact of the new capabilities-based planning on the numbers of military personnel needed. DOD officials said that the 2005 review may include a top-down review of end strength but did not provide further details about the specific guidance to implement such a review. To ensure that the services comply with congressionally authorized active personnel levels on duty at the end of a fiscal year, OSD monitors service reports for personnel on board. According to some service officials, managing personnel levels is challenging for the services because they cannot always control personnel management factors, such as retention and retirement rates, which are affected by servicemembers’ personal decisions. Compliance with authorized personnel levels is one of the performance metrics that DOD presents in its Performance and Accountability Report. According to the department’s fiscal year 2003 report, for fiscal years 1999 through 2002, DOD met its goal of not exceeding authorized levels by more than 2 percent. However, in fiscal year 2003, the Army and the Air Force did not meet this goal, having exceeded authorized limits by 4 percent and 4.5 percent, respectively, in order to maintain sufficient troops to fight a global war on terrorism. Air Force officials told us that better-than-expected recruitment and retention of personnel also caused the Air Force to exceed the authorized limit. 
Also, the Army’s exercise of stop loss authority, which prevents servicemembers from leaving the service even if they are eligible to retire or their service contracts expire, may have contributed to its overages in fiscal year 2003. DOD’s fiscal year 2004 Performance and Accountability Report shows that the Army and the Air Force again did not meet the performance goal, exceeding authorized levels by 3.7 percent and 5.7 percent, respectively; the Navy and the Marine Corps did meet the goal. While this measure is important for compliance with congressional direction, it cannot be used to determine whether the active force has enough personnel to accomplish its missions successfully because it does not assess the extent to which end strength levels meet the nation’s defense strategy at an acceptable level of risk. DOD’s annual report to Congress on manpower requirements for fiscal year 2005 broadly states a justification for DOD’s requested active military personnel, but it does not provide specific analyses to support the justification. Instead, the report provides summaries on personnel, such as the allocation of active military personnel between operating forces and infrastructure. Although the types of operating forces are specified, for example, “expeditionary forces,” the specific capabilities associated with such forces are not identified. The report also provides the active military personnel data for the near term—fiscal years 2003 through 2005; it does not, however, contain data for the department’s long-term planned allocations. 
Although the report stated that DOD will continue to review the adequacy of military capabilities and associated end strength requirements, the Office of the Secretary of Defense (including the Offices of Policy; Personnel and Readiness; Comptroller; and Program Analysis and Evaluation), Joint Staff, and some service officials we interviewed could not identify a specific process or an OSD-led analysis in which the department reexamines the basis for active military personnel levels. OSD coordinates with the services on active personnel levels throughout the department’s planning and budgeting cycle, but it has not established a process that would enable it to periodically examine the requirements for the active military personnel and ensure that decisions are driven by data that establish links between active military personnel levels and key missions required by the national defense strategy. For example, OSD does not systematically examine or validate the services’ methodologies or results to assess personnel levels across the active force. Further, OSD does not systematically collect data that would enable it to monitor how the services are allocating personnel to key warfighting and support functions. Although each of the services has processes for assessing military personnel requirements, the extent to which OSD has analyzed the results of these processes is not clear. The services use different methodologies for assessing their active personnel requirements, and their processes and initiatives have different time frames. In addition, several key efforts to assess requirements are still ongoing and have not yet identified long-term requirements. These processes are described below. The Army generates personnel requirements through a biennial process known as Total Army Analysis. During the initial stages of this process, Army force planners assess the numbers and types of units needed to carry out missions specified in DOD and Army guidance. 
The most recent Total Army Analysis—called Total Army Analysis 2011 because it provides the force structure foundation for the Army’s fiscal year 2006 through 2011 planning cycle—included analyses of operating force requirements for homeland defense, major combat operations, forward deterrence, and ongoing military operations, among others, as well as for personnel who operate installations and provide support services. During the subsequent resourcing phase of the process, Army officials determine how best to allocate the limited number of positions authorized by OSD among active and reserve component units across the Army’s force structure to minimize risk to priority missions. The Army completed an initial version of Total Army Analysis 2011 in spring 2004, but the Army continues to assess requirements as it carries out changes to its basic force structure. These changes, discussed in the Army Campaign Plan, alter the way in which the service organizes and staffs its combat forces, and therefore will have significant impacts on the numbers of active personnel the Army will need. In place of the existing 10 active divisions, each comprising about 3 combat brigades and associated support units, the Army’s new force structure will be based on modular combat brigades, each with its own support units. Under current plans, when the new structure is fully implemented in 2006, the Army will have 43 combat brigades, an increase of 10 brigades from the 33 it had under the traditional divisional structure. Although the Army has begun implementing plans to create a modular force structure, several aspects of these plans are uncertain or have yet to be determined. For example, the Army is considering increasing the number of brigades it will have from 43 to 48. This increase could require approximately 17,000 to 18,000 more personnel depending on the types of brigades established. 
The Army plans to make the decision by fiscal year 2006 based on resource considerations as well as the status of ongoing military operations. Further, while the Army has developed personnel requirements for its planned modular combat brigades, it has not yet determined the overall number or composition of all the support units, such as reconnaissance, which it will need to support those brigades. The Navy uses two separate processes to assess and validate requirements for its operating forces and its infrastructure forces. While the processes vary in scope and methodology, both are based on activities’ workloads. For operating forces, such as ships and aviation squadrons, workloads are based on each ship’s or squadron’s mission, the capabilities needed to execute the mission, and the conditions under which the mission is to be carried out. For shore-based infrastructure forces, personnel requirements are based on the numbers and types of personnel required to accomplish each activity’s workload. As we have reported, the Navy has had difficulty in past years justifying its shore requirements because it has not evaluated alternative combinations of people, materiel, facilities, and organizational structures to ensure that the most cost-effective combination of resources is used. The Navy is also conducting discrete analyses to identify opportunities to reduce military personnel requirements in support of projected end strength reductions. By reducing end strength, the Navy aims to free up funds for fleet modernization. For example, it initiated studies on providing services, such as meteorological support, financial management, religious ministries, and recruiting, at lower cost. To achieve lower costs, the Navy is examining alternatives to military manpower—for example, using technology or hiring private-sector contractors instead of using military personnel. 
Furthermore, these studies aim to identify ways to eliminate redundant activities by consolidating activities that provide similar services. Under a separate set of reviews, the Navy is scrutinizing positions in selected career fields such as supply support to determine whether these positions are military essential or, rather, could be staffed with civilian employees. The Air Force’s current process for determining the manpower it needs is focused on the requirements for training objectives and for operating and maintaining bases and weapons systems in peacetime. However, the Air Force is in the process of overhauling its manpower requirements process to determine the capabilities it needs for its role as an expeditionary force able to respond rapidly to worldwide contingencies. The Air Force plans to continue using the same methodological techniques (e.g., time studies, work sampling, computer simulation, and aircrew ratios) to quantify its personnel requirements, but the new process will include both the infrastructure personnel and the warfighting force needed to support expeditionary missions. The Air Force expects to implement its new capabilities-based approach by fiscal year 2008. The Marine Corps uses modeling, simulations, spreadsheet analysis, and other analytic tools to identify gaps in its capabilities to perform its missions, and then identifies the personnel and skills needed to provide the capabilities based largely on the professional judgment of manpower experts and subject matter experts. The results are summarized in a manpower document, updated as needed, which is used to allocate positions and personnel to meet mission priorities. The Marine Corps’ analyses for fiscal year 2004 indicated that to execute its assigned missions the Corps needs about 9,600 more personnel than it has on hand. 
The 2005 National Defense Authorization Act authorized the Marine Corps to increase its end strength by 3,000 personnel to 178,000 in 2005 and to increase by an additional 6,000 personnel between fiscal years 2005 and 2009. Although the security environment has changed since 2001, OSD has not conducted a systematic review of the services’ analyses and allocation of personnel. Consequently, OSD cannot ensure that the end strength levels established in its strategic and fiscal guidance reflect the numbers of personnel needed to execute the defense strategy. Further, it cannot ensure that it has a sufficient basis for understanding the risks associated with different levels of active military personnel. While too many active personnel could be inefficient and costly, having too few could result in other negative consequences, such as inability to provide the capabilities that the military forces need to deter and defeat adversaries. If OSD had a data-driven rationale to support its funding requests for specific end strength levels, it could provide congressional decision makers more complete information on how active duty personnel requirements are linked to the budget, the force structure, and the defense strategy. OSD hopes to avert the need to increase active personnel levels by making more efficient use of the current active military personnel within each service, although it does not have a comprehensive plan for how it will accomplish this, and progress in implementing initiatives designed to make more efficient use of active military personnel is lagging behind goals. Specifically, OSD has not developed a comprehensive plan for managing the initiatives because the officials assigned this responsibility have competing demands on their time and resources and have not made this a priority. Sustained leadership and a plan for implementing initiatives and measuring progress can help decision makers determine if initiatives are achieving their desired results. 
In addition, although OSD has identified two near-term initiatives—military-to-civilian conversion and retraining for high-demand skills—current data indicate that the initiatives are not meeting their quantitative goals or prescribed time frames because funding has not been identified and time frames have not taken into account hiring and training factors. OSD has approved quantitative goals and time frames for the near-term initiatives, and the services are taking steps to implement them, but OSD does not have an implementation plan that assigns responsibility for ensuring that the initiatives are implemented, identifies resources needed, monitors progress, and evaluates their results. In addition to the near-term goals, OSD has identified some long-term initiatives to reduce the need for active personnel, such as using technology more extensively, although these initiatives are not fully developed, and it hopes to identify additional ways to use active personnel more efficiently. However, without a comprehensive plan to manage implementation of its initiatives and assess their results, OSD may be unable to determine whether the initiatives have the desired effect of providing more military personnel for warfighting duties, thus averting the need for more active personnel. Consequently, OSD may not be able to track the progress of these initiatives and keep Congress informed on the results of its initiatives to use active military personnel more efficiently. According to officials in the Offices of the Under Secretaries of Defense for Policy and for Personnel and Readiness, in summer 2003, the Secretary of Defense directed the Deputy Under Secretary of Defense for Policy to develop a plan for managing initiatives that might either directly or indirectly use the active force more efficiently. 
However, no personnel were assigned responsibility for carrying out the Secretary’s tasking, in part because of competing demands on staff time, such as developing DOD’s Strategic Planning Guidance. According to a senior official in the Policy office, OSD and service officials met soon after the Secretary directed more efficient use of the force to discuss how to provide oversight of the initiatives. However, other work demands took priority, and they did not continue the meetings. In spring 2004, oversight responsibilities for the initiatives were transferred to the Office of the Under Secretary of Defense for Personnel and Readiness, according to an official in the Office of Policy. Even after the transfer, OSD officials could not identify an individual with responsibility for creating and implementing a plan to organize, monitor, and evaluate the initiatives. In response to an inquiry by the Secretary of Defense, in fall 2004 an official in the Office of the Under Secretary of Defense for Personnel and Readiness told us that he had begun developing ways to determine the results of the services’ efforts to achieve workforce efficiencies. However, at the time of our review, the methodology for the analysis was still being developed. Management principles embraced in the Government Performance and Results Act of 1993 provide agencies a framework for implementing and managing programs. One principle is the need to describe detailed implementation actions as well as measurements and indicators of performance (i.e., a performance plan). Critical elements of such a plan include identifying resources and developing mechanisms to measure outcomes and means to compare program results to performance goals. In addition, sustained leadership in managing this plan is needed to ensure that program objectives are implemented in accordance with the plan, compare program results to performance goals, and take corrective actions as needed. 
Without oversight of the services’ implementation of initiatives to make more efficient use of military personnel, DOD cannot be sure that changes are having the desired effect. For example, the department recently reported that it had completed its initiative to reduce and maintain the numbers of active personnel assigned to headquarters units by 15 percent from their 1999 levels. Although the objective of the initiative was to increase the numbers of military personnel assigned to warfighting duties, DOD did not collect data on how the military positions that had been assigned to headquarters units were assigned after the reductions. Therefore, DOD could not demonstrate whether headquarters staff reductions directly resulted in more active military personnel being available for the services’ warfighting duties and could not assess the magnitude of any changes. While the services are in the process of implementing OSD’s initiatives to use active personnel more efficiently in the near term, such as converting military positions to civilian or contractor performance and rebalancing the active and reserve component skills, the initiatives are not meeting planned goals and time frames and are not having the desired results. The services have made some progress in converting positions to civilians or contractors and reassigning positions to high-demand skills, but most of the services did not meet their quantitative goals or time frames for 2004 because of funding issues and delays in recruiting, hiring, contracting, and training personnel. Without a plan for overseeing the services’ implementation of the initiatives, including assigning responsibility and collecting data, OSD cannot assess progress toward meeting the goals of moving military positions to the operating forces and take corrective action, if needed. 
Although DOD has begun to implement its two major initiatives, it has not developed clear implementation plans for initiatives that have longer-term or less direct effects on active military personnel who may be needed for warfighting duties. While some of the long-term initiatives focus on specific problems and issues, such as relying more on volunteers, others are concepts the department would like to explore more fully over time, such as greater use of varied technologies. Senior OSD officials acknowledged that these initiatives are not yet completely developed and do not yet have detailed implementation plans that identify time frames, resources, and measures of success. Moreover, some long-term initiatives may take years to implement, according to these officials. Consequently, it may take years before OSD is in a position to determine whether these long-term initiatives will yield the expected increases of active duty military personnel for warfighting duties. For example, senior OSD officials identified global reachback, other new technologies, and volunteerism as initiatives the department is exploring, as described in table 3. The services targeted some technologies that they can use to reduce the need for support personnel in the near term, although these may not be implemented as planned because technologies are supplied first to meet operational needs. For example, to reduce its reliance on about 3,000 Army National Guard and Reserve personnel who provide security-related functions at Air Force bases, the Air Force planned to use ground surveillance radars and thermal imagers to detect intruders and improve surveillance in fiscal year 2004. However, the Air Force has not yet outfitted its domestic military bases with this equipment because it was needed for use on Army bases in Iraq and Afghanistan. As a result, the Air Force must continue to rely on servicemembers for security until at least fiscal year 2006, when it plans to buy additional equipment. 
For the long term, Navy officials anticipate that acquisition of new technology related to transformation could reduce the manpower it needs. For example, the Navy expects the number of military personnel it will need to operate and support its new DD(X) destroyer will be smaller than the number needed to operate destroyers currently in use. However, Navy officials have not yet determined the exact crew size for the new destroyers. The changing security environment and high pace of operations resulting from the global war on terrorism have led to significant debate about whether there is a sufficient number of active military servicemembers to carry out ongoing and potential future military operations. Although DOD defined a new defense strategy that broadens the range of capabilities that may be needed at home and around the world, OSD has not undertaken a systematic, data-based analysis of how the changed strategy is linked to personnel levels, so it cannot demonstrate how the current number of active personnel supports the department’s ability to execute the strategy within acceptable levels of risk. While personnel costs compete with costs of other priorities, without data-driven analyses of the forces—including active and reserve personnel—and how they are linked to the strategy, DOD is limited in its ability to determine how best to invest its limited resources over the near and long term between the competing priorities of personnel costs and other needs. The department will also continue to face challenges in achieving consensus with Congress on active duty end strength levels, in light of the demands imposed by ongoing operations and the significant changes in the security environment, until it can better demonstrate the basis for its budget requests. Although DOD has taken several positive steps to achieve its goal of using active military personnel more efficiently, its initiatives may lose momentum without additional management attention. 
Further, DOD cannot assess whether the initiatives are having the desired effect of providing more active personnel to warfighting positions because it has not developed an implementation plan to coordinate and oversee the initiatives put forward as contributing to more efficient use of active forces. Until DOD develops a plan that assigns responsibilities, clarifies time frames, identifies resources, and sets out performance metrics, DOD will not be able to assess the progress of the initiatives and take corrective action, if necessary. Moreover, in the absence of specific monitoring and reporting, DOD will not be able to inform Congress on the extent to which its efficiency initiatives are enabling it to meet emerging needs and better manage the high pace of operations that U.S. active forces are currently experiencing. To facilitate DOD’s and congressional decision making on active military personnel levels and to help DOD achieve its goals for assigning a greater proportion of active military personnel to positions in the operating force, we recommend that the Secretary of Defense take the following two actions: Conduct a review of active military personnel requirements as part of the 2005 Quadrennial Defense Review. This review should include a detailed analysis of how the services currently allocate active military forces to key missions required by the defense strategy and should examine the need for changes in overall personnel levels and in the allocation of personnel in response to new missions and changes in the defense strategy. The Secretary of Defense should summarize and include in its Quadrennial Defense Review report to Congress DOD’s conclusions about appropriate personnel levels for each of the services and describe the key assumptions guiding DOD’s analysis, the methodology used to evaluate requirements, and how the risks associated with various alternative personnel force levels were evaluated. 
Develop a detailed plan to manage implementation of DOD’s initiatives to use the force more efficiently. Such a plan should establish implementation objectives and time frames, assign responsibility, identify resources, and develop performance measures to enable DOD to evaluate the results of the initiatives in allocating a greater proportion of the active force to warfighting positions. The Deputy Under Secretary of Defense (Program Integration) provided written comments on a draft of this report. The department generally agreed with our recommendations and cited actions it is taking to implement them. The department’s comments are reprinted in their entirety in appendix II. In addition, the department provided technical comments, which we incorporated as appropriate. The department agreed with our recommendation that the Secretary of Defense conduct an OSD-led, comprehensive assessment of the manning and balancing of the force needed to execute the defense strategy as part of the next quadrennial review. In its comments, the department said that it intends to meet the requirements that Congress set forth in mandating the quadrennial review, including assessing the total force structure needed to implement the defense strategy. Further, the department agreed with our recommendation to report to Congress its conclusions about appropriate personnel levels for each of the services and describe the key assumptions guiding its analysis, the methodology used to evaluate the requirements, and how the risks associated with various alternative personnel force levels were evaluated. We believe that the department’s approach will satisfy the intent of our recommendation if, in the course of the quadrennial review, the manning and balancing of active and reserve forces are specifically analyzed using appropriate methodologies, assumptions, and evaluative techniques. 
Providing this information to Congress will assist in its oversight of defense programs by providing a sound basis for decision making on the size of the active and reserve forces and the costs and risks associated with various alternatives. The department also agreed with our recommendation to develop a detailed plan to manage implementation of its initiatives to use the force more efficiently and that such a plan should establish implementation objectives and time frames, assign responsibility, identify resources, and develop performance measures. In its comments, the department noted that it has put in place some mechanisms to capture and track activities that enable it to use the force more efficiently. For example, it is now tracking the services’ military-to-civilian conversions and documenting the results in its budget exhibits. The department also noted that it has established a forum for OSD and service officials to discuss and track the services’ initiatives to reduce stress on the force. We believe that the department’s approach represents positive steps. However, we believe that OSD should consider taking additional actions to clearly assign responsibility and develop comprehensive plans for implementing initiatives and measuring their progress. Critical elements of such plans would include identifying resources and developing mechanisms to measure outcomes and means to compare program results to performance goals. By embracing these management principles, OSD and service officials can help decision makers determine if initiatives are achieving their desired results and take corrective actions as needed. We are sending copies of this report to other appropriate congressional committees and the Secretary of Defense. We will also make copies available to other interested parties upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you have any questions about this report, please contact me at (202) 512-4402. Major contributors to this report are listed in appendix III. To assess the extent to which the Office of the Secretary of Defense (OSD) has conducted a data-based analysis of active military personnel needed to implement the national defense strategy, we identified and examined relevant laws, presidential documents, and Department of Defense (DOD) guidance, reports, and analyses related to active military personnel and the defense strategy. These documents included the 2001 Quadrennial Defense Review Report, the defense strategy issued as part of the 2001 Quadrennial Defense Review Report, the National Military Strategy of the United States of America 2004, the Defense Manpower Requirements Report Fiscal Year 2005, the Secretary of Defense’s Annual Report to the President and the Congress for 2003, and the Department of Defense Performance and Accountability Report Fiscal Year 2004. Although the total force includes active military personnel and National Guard and reserve forces, our review focused on active military personnel because Congress considered and passed legislation in 2004 to increase their numbers. We examined the services’ guidance on processes for determining personnel requirements for the total force to identify the methodologies, time frames, and organizations involved in these processes. We also obtained and analyzed the results of the services’ requirements processes and studies, and we discussed their status with officials responsible for managing these efforts at DOD organizations. These organizations included, but are not limited to, the Army’s Offices of the Deputy Chiefs of Staff for Personnel and for Operations and Plans, and the Army’s Programs Analysis and Evaluation Directorate; the U.S. 
Army Forces Command; the Air Force’s Deputy Chiefs of Staff for Personnel and for Plans and Programs; the Air Force Manpower Agency; the Air Force Personnel Center; the Navy’s Deputy Chief of Naval Operations for Manpower and Personnel; the Marine Corps Manpower and Reserve Affairs; and the Marine Corps Combat Development Command. Because it did not fall within the scope of our review, we did not assess the services’ methodologies or validate the results of their requirements analyses. We also examined guidance on the services’ processes for allocating manpower resources. We identified criteria for examining workforce levels through our products on strategic human capital management and by consulting with our staff with expertise in this area. Testimonial evidence was obtained from officials assigned to (1) the Offices of the Under Secretary of Defense for Policy, the Under Secretary of Defense for Personnel and Readiness, and the Under Secretary of Defense for Comptroller; (2) the Office of Program Analysis and Evaluation; and (3) the Joint Chiefs of Staff. In addition, we analyzed transcripts of public briefings and congressional testimony presented by OSD officials. We reviewed DOD’s fiscal year 2005 Defense Manpower Requirements Report and DOD’s fiscal years 2003 and 2004 Performance and Accountability Reports to determine the manner and extent to which OSD evaluates end strength in fulfilling statutory reporting requirements pertaining to force management. We reviewed our prior work on the 2001 Quadrennial Defense Review to ascertain whether this process included an explicit assessment of active duty personnel. To gain an independent perspective on the OSD role in analyzing the number of active military personnel needed to implement the national defense strategy, we interviewed a former Assistant Secretary of Defense. 
To assess the extent to which OSD has a plan to implement initiatives to make more efficient use of active military personnel and evaluate their results, we analyzed available internal DOD documentation such as briefings, memoranda, and reports that identified DOD’s plans and time frames. We obtained, analyzed, and compared OSD’s and the services’ fiscal year 2004 and 2005 targets to ascertain the differences. Also, we compared the services’ fiscal year 2004 results with the services’ fiscal year 2004 plans. To understand the reasons for any differences, we discussed their status and implications with officials responsible for managing these efforts at DOD organizations including, but not limited to, the Offices of the Under Secretaries of Defense for Policy, for Personnel and Readiness, and for Comptroller; the Army’s Offices of the Deputy Chiefs of Staff for Personnel and for Operations and Plans; the U.S. Army Forces Command; the Air Force’s Deputy Chiefs of Staff for Personnel; the Navy’s Deputy Chief of Naval Operations for Manpower and Personnel; and the Marine Corps’ Combat Development Command. We limited our review to the major initiatives that were identified by these officials and emphasized by the Secretary of Defense in his testimony because DOD officials did not have a comprehensive document that listed or described all the initiatives. We held further discussions with service officials and obtained and analyzed written responses to our questions to (1) identify the actions that the services took or will take in fiscal years 2004 and 2005 to implement the initiatives and (2) fully understand the challenges that the services may face with implementation. Also, we reviewed DOD’s fiscal year 2005 budget request to assess DOD’s expected costs for military-to-civilian conversions. 
To identify the extent to which DOD had implemented mandated reductions in major headquarters staff, we reviewed the governing directives and analyzed DOD’s draft budget exhibits for fiscal year 2004, which contained data on actual reductions for the services and other defense organizations for fiscal year 2003 and estimated reductions for fiscal years 2004 and 2005. We also interviewed an OSD official and obtained data from the services to identify whether they collected data on the extent to which military personnel affected by headquarters reductions were reassigned to warfighting forces. Moreover, we conducted interviews with officials from the Under Secretary of Defense for Personnel and Readiness, the Army, the Air Force, the Navy, and the Marine Corps and reviewed DOD guidance to understand DOD’s process for managing conversions of military positions to civilian or contractor positions. Further, we reviewed our prior audit work related to the conversion of military positions to civilian or contractor positions, the competitive sourcing process, and headquarters reductions to enhance our understanding of DOD’s ongoing efforts to achieve efficiencies. Our work was conducted in the Washington, D.C., metropolitan area; Atlanta, Georgia; and San Antonio, Texas. We performed our work from August 2003 through January 2005 in accordance with generally accepted government auditing standards and determined that data, other than those used for the efficiency initiatives, were sufficiently reliable to answer our objectives. We interviewed data sources about how they ensure their own data accuracy, and we reviewed their data collection methods, standard operating procedures, and other internal control measures. Concerning the fiscal year 2004 data for the military-to-civilian conversions and rebalancing initiatives, we determined that the Navy’s and the Marine Corps’ data were sufficiently reliable for the purposes of our objectives. 
We considered the Army’s and the Air Force’s military-to-civilian conversion and rebalancing skills data for fiscal year 2004 to be of undetermined reliability because the two services did not respond to our requests to provide documentation for the controls they use to ensure the accuracy of their data. Even though those services’ officials presented the fiscal year 2004 data as their official numbers, we cannot attest to their reliability. However, in the context in which the data were presented, we determined the usage to be appropriate because the fiscal year 2004 data did not constitute the sole support for our findings, conclusions, and recommendations. Further, if the fiscal year 2004 results were revised, our conclusions would remain unchanged. The following is GAO’s comment on the department’s letter dated January 12, 2005. 1. The department’s response correlates to the second part of our first recommendation. In addition to the persons named above, Deborah Colantonio, Susan K. Woodward, Paul Rades, Leland Cogliani, Michelle Munn, Margaret Best, Cheryl Weissman, Kenneth E. Patton, John G. Smale Jr., and Jennifer R. Popovic also made major contributions to this report.
Congress recently increased active military personnel levels for the Army and the Marine Corps. The Secretary of Defense has undertaken initiatives to use military personnel more efficiently, such as rebalancing high-demand skills between active and reserve components. In view of concerns about active personnel, GAO reviewed the ways in which the Department of Defense (DOD) determines personnel requirements and is managing initiatives to assign a greater proportion of active personnel to warfighting duties. GAO assessed the extent to which the Office of the Secretary of Defense (OSD) (1) has conducted a data-based analysis of active military personnel needed to implement the national defense strategy and (2) has a plan for making more efficient use of active military personnel and evaluating the plan's results. Our prior work has shown that valid and reliable data about the number of employees required to meet an agency's needs are critical because human capital shortfalls can threaten the agency's ability to perform missions efficiently and effectively. OSD provides policy and budget guidance on active personnel levels and has taken some steps toward rebalancing skills between active and reserve components, but it has not conducted a comprehensive, data-driven analysis to assess the number of active personnel needed to implement the defense strategy. A key reason why it has not conducted such a comprehensive analysis is that OSD has focused on limiting personnel costs in order to fund competing priorities, such as transformation. OSD conducts some analyses of active personnel, such as monitoring actual personnel levels, and the services have processes for allocating the active personnel they are authorized to key missions. However, OSD does not systematically review the services' processes to ensure that decisions about active personnel levels are linked to the defense strategy and provide required capabilities within acceptable risk. 
If OSD conducted a data-driven analysis that linked active personnel levels to strategy, it could more effectively demonstrate to Congress a sound basis for the active personnel levels it requests. The quadrennial review of the defense program planned for 2005 represents an opportunity for a systematic reevaluation of personnel levels to ensure that they are consistent with the defense strategy. Although OSD has identified some near- and long-term initiatives for assigning a greater proportion of active personnel to warfighting positions, it has not developed a comprehensive plan to implement them that assigns responsibility for implementation, identifies resources, and provides for evaluation of progress toward objectives. OSD officials told us that a key reason why OSD does not have a plan to oversee its initiatives is that they have had to respond to other, higher priorities. Sustained leadership and a plan for implementing initiatives and measuring progress can help decision makers determine whether initiatives are achieving their desired results. Without such a plan, OSD cannot be sure that initiatives are being implemented in a timely manner and having the intended results. For example, the initiative to convert military positions to civilian or contractor performance is behind schedule. Specifically, OSD's goal was to convert 10,000 positions by the end of fiscal year 2004; however, the services estimated that they had converted only about 34 percent of these positions. By establishing performance metrics and collecting data to evaluate the results of its initiatives, OSD could better determine its progress in providing more active personnel for warfighting duties and inform Congress of its results.
In conducting our work, we interviewed headquarters officials at DHS, DOJ, the Department of the Interior (DOI), and USDA; analyzed DHS documentation; and conducted site visits to four northern border locations. We selected Border Patrol’s Blaine, Spokane, Detroit, and Swanton sectors for site visits because they reflect a mix of the northern border’s differences in geography (western, central, and eastern border areas), threats (terrorism, drug smuggling, and illegal migration), and threat environments (air, marine, land), as shown in figure 1. We conducted interviews with federal, state, local, tribal, and Canadian officials relevant to these Border Patrol sectors. Although other northern border partners do not divide their geographic areas of responsibility by sectors, for the purposes of this report, we refer to the northern border partners—such as ICE and DEA—whose areas of responsibility overlap with these sectors as officials operating within these Border Patrol sectors. While we cannot generalize our work from these visits to all locations along the northern border, the information we obtained provides examples of the way in which DHS and other federal agencies coordinate their efforts with these northern border partners. To address the first objective, we reviewed documents that included relevant legislation affecting the northern border, a past report to Congress in response to legislated requirements, and agency strategies, including the DHS Quadrennial Homeland Security Review and CBP’s Northern Border Strategy. We interviewed DHS headquarters officials with knowledge of DHS coordination efforts and also interviewed federal, state, local, tribal, and Canadian officials in the four sectors we visited to obtain their perspectives on DHS coordination efforts, focusing on their participation in interagency forums and joint operations. For a complete list of northern border partners we interviewed in each sector, see appendix I. 
Based on these documents and discussions, we focused on two interagency forums—the Integrated Border Enforcement Team (IBET) and the Border Enforcement Security Task Force (BEST)—and joint operations such as the Shiprider Program. Our assessment of these interagency forums and joint operations is not generalizable because they do not constitute an exhaustive list of U.S. and Canadian initiatives to coordinate the security of the border. However, they were highlighted by the officials we interviewed as interagency forums that helped to coordinate information sharing, interdiction, and investigations across nations and levels of government, along with joint operations that coordinated a federal law enforcement response between the partners in the air, land, and marine border environments. In addition to these discussions within each sector, we reviewed documents at the sector level relevant to northern border coordination, including meeting minutes from interagency meetings and after-action reports for joint operations. We compared DHS coordination efforts to best practices and federal guidelines for interagency coordination to determine whether DHS’s efforts are consistent with such practices. To address the second objective, we reviewed agreements established between DHS components, between DHS and DOJ, and between DHS and USDA to coordinate interdiction and investigation activities, and interviewed officials from these agencies at headquarters and in the field. Specifically, we reviewed agreements assigning responsibilities for interdiction and investigation between Border Patrol and ICE, Border Patrol and the Forest Service, and ICE and DEA. We reviewed documents and reports documenting coordination challenges between these agencies, including those prepared by DHS and us, and subsequent corrective action cited by the departments. 
As part of our interviews with officials in the four sectors we visited, we examined the extent to which DHS and its partners stated that agreements were working to overcome coordination challenges between agencies and were enhancing the sharing of information and resources to secure the border. See appendix I for a list of offices interviewed in the four sectors. We also used work from our companion review of border coordination on federal lands to assess Border Patrol coordination with DOI and USDA in the Spokane sector. To address the third objective, we analyzed Border Patrol’s 2007 through 2010 Operational Requirements Based Budget Process (ORBBP) documents, which include each sector’s assessment of the border security threat, operational assessment of border security, and resource requirements needed to further secure border miles within each sector. We reviewed these documents to determine the number of border miles that Border Patrol reported were under effective control, the number of miles reported as needing outside law enforcement support, and the extent to which partner resources were being used to address gaps in Border Patrol resources. We reviewed guidance headquarters provided to sectors for development of the ORBBP, as well as direction and performance indicators provided in CBP’s Northern Border Strategy. We also interviewed Border Patrol officials in the field who are responsible for preparing the ORBBP and headquarters officials responsible for reviewing these documents. We conducted this performance audit from December 2009 through December 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
CBP has reported many threats to, and vulnerabilities of, the northern border related to illegal cross-border activity. Overall, according to CBP, a transportation infrastructure exists across much of the northern border that facilitates ease of access to, and egress from, the border area. CBP also reports that the maritime border on the Great Lakes and rivers is vulnerable to the use of small vessels as a conduit for terrorists, alien smuggling, trafficking of illicit drugs and other contraband, and other criminal activity. Also, the northern border’s waterways can freeze during the winter and can easily be crossed on foot or by vehicle or snowmobile. The northern air border is also vulnerable to low-flying aircraft that, for example, smuggle drugs by entering U.S. airspace from Canada. Additionally, CBP reports that the northern border is exploited by well-organized smuggling operations, which can potentially support the movement of terrorists and their weapons. Northern border security is the primary responsibility of three DHS components—CBP, ICE, and the U.S. Coast Guard (USCG)—which reported spending more than $2.9 billion in efforts to secure the northern border in 2010. Table 1 shows the roles and responsibilities of DHS components regarding northern border security. CBP and ICE have several partners that are also involved in northern border security efforts. These partners include other U.S. federal agencies such as DOJ’s DEA, which has responsibility for drug enforcement, and the Federal Bureau of Investigation (FBI), which has responsibility for combating terrorism. The Department of Defense (DOD), while not a partner, also provides support as requested, such as personnel and technology for temporary joint operations. 
Partners also include Canadian law enforcement agencies such as the Royal Canadian Mounted Police (RCMP)—which is responsible for national law enforcement, including border security—and the Canada Border Services Agency (CBSA), which is responsible for border security and public safety at the ports of entry. CBP and ICE also partner with federal, state, local, and tribal entities that have law enforcement jurisdiction for federal, public, private, or tribal lands that are adjacent to the border. As shown in figure 2, federal lands comprise about 1,016 miles, or approximately 25 percent, of the nearly 4,000 northern border miles (excluding the Alaska–Canada border), and are primarily administered by the National Park Service and the Forest Service. Law enforcement personnel from sovereign Indian nations located on about 4 percent of the northern border also conduct law enforcement operations related to border security. In addition, DOI’s Bureau of Indian Affairs may enforce federal laws on Indian lands, with the consent of tribes and in accordance with tribal laws. Moreover, numerous state and local law enforcement entities interdict and investigate criminal activity on public and private lands adjacent to about 75 percent of the northern border. Although these agencies are not responsible for preventing the illegal entry of aliens into the United States, they do employ law enforcement officers and investigators to protect the public and natural resources on their lands. Agencies at the border overlap in mission and operational boundaries, and this overlap requires coordination and collaboration for efficient and effective law enforcement. One reason for the overlap is Border Patrol’s multilayered strategy for securing the border, which provides for several layers of agents who operate not only at the border, but also on public and private lands up to 100 miles from the border. 
As a result, officials from other federal, state, local, and tribal law enforcement agencies may patrol in the same geographic area and pursue the same persons or criminal organizations that violate laws underpinning each agency’s respective mission. Another reason for overlap is that agencies have separate responsibility for investigating crimes that are conducted by the same criminals or organizations. Federal legislation and DHS policy have stressed the need for coordination between DHS components and across other federal agencies and partners to most efficiently and effectively secure the homeland and its borders. The 9/11 Commission had determined that limited coordination had contributed to border security vulnerabilities. In addition, coordination challenges were addressed in several GAO and DHS reports. For example, in both 2004 and 2010, we reported that Border Patrol, USDA, and DOI were challenged to coordinate border security efforts on northern federal lands. We also reported in early 2009 that there were significant challenges to coordination of drug law enforcement efforts between ICE and DEA. In addition, the DHS Inspector General issued reports on coordination challenges between Border Patrol and ICE in 2005 and 2007, citing shortfalls in information sharing and operational coordination that had led to competition, interference, and operational inflexibility. The Implementing Recommendations of the 9/11 Commission Act required the Secretary of Homeland Security to report to Congress on ongoing initiatives to improve security along the northern border as well as recommendations to address vulnerabilities along the northern border. 
DHS reiterated its commitment to share information across agencies in its 2008 Information Sharing Strategy, which provides full recognition and integration of federal agencies, tribal nations, and others in the DHS information-sharing environment and in development of relevant technologies. Also, in its 2008 Report to Congress on the status of northern border security, DHS listed interagency forums and joint operations that it established or supports for coordinating efforts among federal, state, local, tribal, and Canadian partners. DHS, along with its federal partners, also issued updates and addendums to long-standing memorandums of agreement (MOA) or understanding (MOU) between its components and across federal agencies on respective roles and responsibilities to enhance coordination. Most recently, DHS outlined its vision for coordination among agencies and partners for a united homeland security enterprise in its Quadrennial Homeland Security Review Report (QHSR), submitted to Congress in February 2010. Cited as a strategic framework for homeland security, the QHSR is to guide the activities of participants in homeland security toward a common end. In this regard, it emphasizes a need for joint actions and efforts across previously discrete elements of government and society, including federal, state, local, tribal, and international entities, among others, to achieve core homeland security mission areas, including securing and managing the borders by effectively controlling U.S. air, land, and sea domains, safeguarding lawful trade and travel, and disrupting and dismantling transnational criminal organizations. The efforts supporting the QHSR include a review to identify mission overlap among components. In accordance with the QHSR vision, DHS is also developing a northern border strategic plan to clarify roles and responsibilities among all law enforcement partners. 
According to DHS officials, the strategic plan is in its final stages of review, but time frames for completion have not been solidified. DHS has established performance goals and measures for border control. The CBP performance measure for effective border control is defined as the number of border miles where Border Patrol has reasonable assurance that illegal entries are detected, identified, and classified, and where Border Patrol has the ability to respond and bring these incidents to a satisfactory law enforcement resolution. DHS reports this performance goal and measure for border security to the public and to Congress in the DHS Annual Performance Report. DHS used interagency forums and joint operations to improve federal coordination of northern border security efforts with law enforcement partners from state, local, and tribal governments and Canada, according to officials we interviewed across four northern border sectors. However, numerous partners cited challenges related to the inability to resource the increasing number of interagency forums in their area and raised concerns that some efforts were overlapping. DHS oversight of interagency forums established by its components across locations may help address these challenges and ensure the continued benefit of DHS efforts to increase the national capacity of its partners to secure the northern border. Interagency forums improved coordination of border intelligence information, resources, and operations between U.S. federal agencies and their law enforcement partners in Canada, according to the majority of the representatives of these entities we interviewed across the four northern border sectors. The 9/11 Commission had determined that limited coordination had contributed to border security vulnerabilities, and emphasized the importance of establishing or supporting interagency forums to strengthen information sharing and coordinate efforts to secure the border. 
Two DHS components, CBP and ICE, responsible for border security interdiction and investigations, respectively, played key roles in the establishment of the two interagency forums within our review—the IBET and BEST—and, along with USCG and Canada’s RCMP and CBSA, are key participants in both forums. Information about these interagency forums is presented in table 2. DHS is working to establish a means to quantify and report on the benefits achieved through its investment in interagency forums, but in the meantime officials from 17 offices that participate in interagency forums across the four sectors we visited commented that interagency forums had improved coordination among the participants. These officials provided examples that highlighted benefits in three key areas: (1) facilitating the sharing of border security intelligence information; (2) facilitating the sharing of resources such as equipment and personnel; and, in some cases, (3) serving as a tool for deconfliction—that is, a means to inform partners of special border security operations planned in geographic areas of responsibility common to multiple law enforcement agencies. Information Sharing. The IBET or the BEST facilitated exchange of timely and actionable threat information between U.S. and Canadian partners, leading to improved interdiction and investigation capabilities, according to officials from 17 of the 18 offices we interviewed. For example, IBET participation helped to build trust between the core partners, which resulted in collaborative efforts to secure the border, according to Canadian CBSA officials from Windsor and Montreal—north of the Detroit and Swanton sectors, respectively. In addition, IBET membership further strengthened U.S. 
and Canadian relationships as participants interacted more frequently through meetings and the colocation of personnel, which in turn facilitated the exchange of information, according to ICE and Border Patrol officials operating within the Swanton and Detroit sectors, respectively. As a result, we were told that IBET partners can more easily and quickly obtain information, such as border entry and exit data and surveillance images, that would normally take several weeks to obtain. For example, according to ICE officials operating within the Blaine sector, Canada’s CBSA forwarded to the BEST intelligence on a Canadian national who was smuggling drugs from Canada to the United States; BEST partners—Border Patrol and ICE—were then able to conduct surveillance and apprehend the individual, seizing over 500 pounds of marijuana that was backpacked across the border, and to gain further intelligence about other criminal activity. Sharing of Resources. The IBET or BEST helped partners leverage personnel, technology, and other resources for operations to interdict or investigate cross-border illegal activity, according to officials in 17 of the 18 offices we interviewed. For example, colocation of BEST members provides U.S. and Canadian officials ready access to the knowledge and skills of participating agencies, according to ICE officials operating within the Detroit sector. Another example of a benefit is the pooling of resources. The IBET operating within the Spokane sector maintains a centralized resource list through which participants can view and request the use of partners’ available technology, equipment, and vehicles, according to ICE and Border Patrol officials. Radio communications are also facilitated among participants of the IBET and BEST attended by officials from the Blaine sector, in that all participants have access to a bank of 10 hand-held radios on the same frequency, according to an ICE official operating within the Blaine sector. 
Officials cited examples of how sharing personnel and resources helped secure the border. In one example, U.S. and Canadian IBET partners were conducting joint operations to monitor over 133 kilometers of unguarded roads in the Swanton sector that were exploited by criminal organizations smuggling humans, drugs, and other contraband, according to RCMP officials north of the Swanton sector. The operation employed personnel from Canada’s RCMP and the U.S. Border Patrol to patrol the roads using resources such as motion sensor and video equipment to expand surveillance coverage. U.S. and Canadian IBET partners also shared sensor hits and video footage from both sides of the border. As a result of the shared information and resources, partners were able to determine whether illegal activities were going north or south of the border and had increased awareness to detect and interdict cross-border crime. Deconfliction. The IBET or BEST were also used in conjunction with other interagency forums to deconflict operations planned by various agencies that operate in geographic areas of responsibility common to multiple law enforcement agencies, according to officials in all offices we interviewed. For example, the colocation of BEST members raised awareness of operations and activities at the border due to the daily and ongoing information being shared between members, according to ICE participants in the Blaine and Detroit sectors. IBET participants from the Spokane sector also had daily telephone conversations to discuss their operations, and subgroups within the IBET met once a week to share information and intelligence and discuss operations to prevent unknowingly interrupting each other, according to a Border Patrol participant. DHS components also used joint operations as a means to integrate federal border security efforts with northern border partners from state and local governments, tribal nations, and Canada. 
The 9/11 Commission stressed the importance of extensive collaboration with international partners as well as increasing interaction between federal, state, and local law enforcement through joint efforts that would combine intelligence, manpower, and operations to address national security vulnerabilities. Individually, partners had insufficient authority, staff, or assets to conduct certain types of operations, according to Border Patrol officials in the Detroit sector, and joint operations allowed partners to leverage these resources to address existing border security vulnerabilities. For example, to address vulnerabilities related to different law enforcement authority across the border, the United States and Canada established binational agreements that allowed USCG and RCMP law enforcement personnel under the Shiprider Program to conduct joint vessel patrols in the Blaine and Detroit sectors that leveraged both U.S. and Canadian authority across the maritime border. To address vulnerabilities related to insufficient staff and resources, DHS issued 3-year grants to tribal nations and state and local governments under Operation Stonegarden to augment Border Patrol personnel and resources for patrolling the land border, which benefited all four sectors we visited. DHS components also developed joint operations for conducting time-limited surge operations for interdiction or investigations in the air, maritime, or land border environments, including Operations Channel Watch, Outlook, and Frozen Timber. DHS tracked the resulting benefits of these joint operations in after-action reports, as reflected in table 3, and all officials from 20 offices who participated in one or more of these operations across the four sectors we visited agreed that joint operations made important contributions to border security. 
These contributions included an enhanced ability under Operation Outlook to detect cross-border illegal activity and to inform future asset deployments in the Spokane sector, a show of force under Operation Channel Watch to deter illegal cross-border activity in the Detroit sector, and, across all operations, the arrest of smugglers and other criminals crossing the border and seizures of narcotics, cigarettes, currency, and other contraband. Officials in 5 of the 20 offices raised concerns that, while surge operations provided short-term benefits, they may not provide an ongoing deterrent effect or address long-standing border security vulnerabilities. For example, Border Patrol officials in the Spokane sector said that while Operation Frozen Timber was a successful joint operation that resulted in significant arrests and drug seizures, it was not an ongoing effort and, in their opinion, should be expanded to a more comprehensive concept of operations to combat and deter cross-border smuggling by air. Likewise, ICE officials operating within the Detroit sector stated that Operation Channel Watch demonstrated a show of force on the Great Lakes, but it was not clear whether conducting this joint operation six weekends a year would deter sophisticated criminal organizations. Despite these concerns, after-action reports showed that these time-limited joint operations had provided some lasting benefits. Operation Outlook, for example, resulted in information about the continuous and significant threat of cross-border smuggling in the air environment in the Spokane sector and pointed out weaknesses that could be corrected in the placement and use of air and ground assets. Most northern border partners we interviewed across the four sectors cited challenges to resourcing the increasing number of interagency forums being established in their geographic areas of responsibility. 
An interagency working group convened in 2009 to study the interaction between the IBET and BEST also raised concerns that the increasing demand to participate in interagency forums created difficulties in gathering the resources necessary to participate in the IBET or BEST. Overall, officials in 21 of the 30 Canadian, U.S. federal, state, and local offices across the four sectors we visited said that it was difficult to resource the IBET and BEST, in addition to other interagency forums in their geographic area. A CBSA official north of the Swanton sector stated that the office must balance resources among the three IBET offices within its area of responsibility and that it could not afford to staff a BEST office with current resources if one were to open in the area. ICE officials operating within the Swanton sector stated that there are two IBETs in their area of responsibility, and while they only have resources to staff the one closest to their office, they would like to staff the IBET farther away, as it is close to a port of entry and has more law enforcement partners that can further the ICE mission. Local law enforcement in the Swanton sector, the Rouses Point Police Department, reported that the high level of commitment required by forums such as the IBET makes it difficult for resource-strapped smaller law enforcement agencies such as their own to participate. Officials from seven of the nine remaining offices did not share these concerns. These included Border Patrol officials in the Blaine, Detroit, and Swanton sectors and ICE officials operating within the Blaine sector, who said they had sufficient resources, and local law enforcement officials in the Detroit sector, who said they would not assign staff to a forum unless it was the most efficient use of the officer's time.
In addition, an FBI official operating within the Spokane sector and an official from the Michigan State Police said that while the number of forums has increased since 9/11, only those that provide the most value through focused meetings and attract the most participants will continue to exist. Of the officials within the 13 offices operating within the Blaine and Detroit sectors who were named as key members of the IBET or BEST, more than half cited concerns about mission overlap between the IBET and BEST that could result in duplication of effort, a concern also expressed by the DHS Inspector General in a 2007 report and by members of the IBET/BEST Working Group. ICE headquarters officials stated that although there are not distinct geographic boundaries of operation for the IBET and BEST, ICE is addressing concerns of overlapping operations by developing a strategic plan to lay out the concept of operations, administrative policies and procedures, and the goals of the BEST. At the time of our review, ICE had not yet established a time frame for completion of these efforts, as it was in the early stages of drafting the plan. In the meantime, however, officials in 7 of the 13 offices in these forums located in the Blaine and Detroit sectors were concerned that some BEST activities to investigate and interdict cross-border illegal activity at the ports of entry duplicated IBET efforts to conduct these same activities between the ports of entry. Border Patrol officials in the Blaine sector said that despite good working relationships between the IBET and BEST, concerns remain about overlapping cases because cases that originate at the ports of entry can expand into areas between the ports of entry. Likewise, ICE officials operating within the Blaine sector agreed that BEST investigative activity between the ports of entry would be duplicative of the IBET mission, but disagreed that such overlap had occurred.
RCMP officials north of the Detroit sector reported that there is a perception of duplication because the BEST in Detroit is expanding its scope to include investigations between the ports of entry, which is the domain of the IBET. ICE officials operating within the Detroit sector said they disagreed with the assumption that a geographic dividing line could be drawn in conducting investigations. Border Patrol and DEA officials operating within the Detroit sector said that the reason for establishing the BEST in their area was unclear. In addition, Border Patrol officials stated that the IBET serves as their primary forum for targeting cross-border crime. However, ICE officials said that while the BEST in Detroit was a new effort, started in 2009, it provided them with better support to meet the needs of their mission. This support was provided through partnerships and colocation with federal, state, and local law enforcement agencies that are not core members of the IBET, including the Bureau of Alcohol, Tobacco, Firearms and Explosives and the local police department. While DHS headquarters officials report that policies governing DHS's coordination efforts are under development, DHS does not currently provide guidance or oversight to its components to establish or assess the results of interagency forums across northern border locations, according to officials from the DHS Office of Strategic Plans. We previously reported that federal agencies can enhance and sustain their collaborative efforts by, in part, developing mechanisms to monitor their results. DHS and DOJ have developed guidance and provided oversight to help prevent overlap among interagency forums established under state and local fusion center programs, to leverage fusion centers that already exist, and to reduce the downstream burden on state and local partners that have limited resources.
However, DHS officials from the Office of Strategic Plans said that coordination policies are still in development and that many organizations within DHS share responsibility for ensuring that component operations strategically align with the Secretary's goals and commitment for efficient operation and integration of partner efforts for the homeland security mission. These officials stated that headquarters organizations, including the Management Directorate, the Office of Policy, and the Office of Operations Coordination and Planning, are developing processes to provide department-level coordination and oversight of those forums; however, DHS has not provided documentation to support its plans, so the scope and time frames for finalizing this effort are unclear. Ongoing DHS oversight of the mission and location of interagency forums established by its components could help prevent duplication of efforts, and help ensure that DHS is a mindful steward in conserving the scarce resources of northern border partners. Moreover, this oversight role could provide opportunities for DHS to determine whether additional forums are necessary or whether existing forums can be modified to address emerging needs. Some Border Patrol, ICE, Forest Service, and DEA officials operating within the four sectors we visited reported that federal agency coordination to secure the northern border had improved; however, in all sectors, officials cited problems with others in sharing information and resources for daily operations. DHS attention to resolving these long-standing coordination challenges could enhance its ability to implement its strategic vision for a coordinated homeland security enterprise and improve the federal capacity to secure the northern border.
Border Patrol officials in three of the four sectors we visited cited strong or improved coordination with ICE in sharing information and coordinating their border security missions, but ICE officials in all but one sector reported that coordination with Border Patrol remained challenging. CBP and ICE had developed an MOU between Border Patrol and ICE in 2004, updated in 2007, to establish and coordinate roles and responsibilities for interdiction and investigation missions on the border, and as a mechanism to resolve conflict or disagreements. The 2007 MOU requires the two agencies to establish a seamless, real-time operational partnership, with Border Patrol taking the lead on all border-related interdiction activities, and ICE taking the lead on investigations. Coordination between Border Patrol and ICE was cited as strong or greatly improved by Border Patrol officials in two sectors, and ICE officials in one sector, who cited different reasons for the improvements in coordination. For example, Border Patrol officials in the Spokane sector said that there was considerable improvement in their relationship with ICE since the MOU was established in 2004, and attributed improved coordination to sector leadership, open lines of communication, and personal friendships between agents. ICE officials operating within the Detroit sector said that their relationship with Border Patrol had matured, and they generally worked well to support each other's mission. They attributed the improved coordination to the colocation of Border Patrol agents in the BEST and the close relationships of sector leaders who supported coordination between the components. However, coordination to exchange information and integrate missions remained challenging according to ICE officials in all four sectors, and Border Patrol officials in two sectors, with all citing problems with the MOU, among other issues.
These officials said that the MOU had not been effective in clarifying roles and responsibilities or resolving disagreements about the dividing line between interdiction and investigation. These disagreements surrounded the interpretation and separation of “intelligence-gathering” activities to support Border Patrol’s interdiction mission and “investigative” activities that fall under the purview of ICE, as well as the timing and circumstances surrounding when Border Patrol should call ICE for investigative support, as shown by the following examples. Border Patrol and ICE officials said that the agencies continue to disagree on whether it is appropriate for Border Patrol agents to interview persons they apprehend. ICE officials stated that Border Patrol should call ICE first. However, Border Patrol officials stated that postarrest interviews are within the intelligence-gathering provisions of the interagency MOU. Border Patrol and ICE officials continue to disagree on whether border surveillance falls under ICE’s investigative role. Border Patrol officials in the Spokane sector provided an example of ICE officials conducting surveillance of the border, which is the responsibility of Border Patrol under the MOU; however, ICE officials in all four sectors maintained that these intelligence-gathering activities were an inherent part of the ICE investigative role. Border Patrol and ICE officials said that there is disagreement on when Border Patrol is required to call ICE to inspect seized contraband. For example, ICE officials operating within the Detroit sector interpreted the MOU as requiring Border Patrol to notify ICE of the contraband at the arrest site to inform investigations. However, Border Patrol officials in the Detroit sector interpreted the MOU as allowing agents to transport the contraband to the station for identification and to call ICE once it was established that the seizure could develop into an investigation.
While Border Patrol officials in the Spokane sector stated that evidence gathering is an inherent function of their role under the MOU, ICE officials in the Spokane sector viewed this practice as inappropriate handling and processing of evidence that hindered ICE’s investigations. Border Patrol officials in three sectors and ICE officials operating within two sectors stated that competition for performance statistics was another barrier to overcoming coordination challenges as these statistics are the basis for DHS resource allocation decisions. As a result, both Border Patrol and ICE officials said that agents sometimes worked outside of their established roles and responsibilities to boost performance statistics, and disagreed on which component should receive credit for apprehensions, seizures, and prosecutions. DHS has plans to revise its performance measures and processes for resource allocation across components; however, our discussions with DHS officials have shown that it will be difficult to ensure these revisions do not exacerbate current challenges to collaboration in support of the QHSR. For example, officials from the DHS Office of Strategic Plans said that the department is developing new performance measures for border security that may require each component to show how their efforts linked with the efforts of others to secure the border, and that resources would be distributed across the components according to their relative success. The coordination challenges between Border Patrol and ICE resulted in a lack of information sharing and potential inefficiencies, according to Border Patrol and ICE officials operating within three of the sectors we visited. Specifically, ICE officials operating within the Detroit, Spokane, and Swanton sectors said they are reluctant to share intelligence information with Border Patrol because they are concerned Border Patrol may adversely affect an ICE investigation. 
Border Patrol officials in the Detroit sector said that because they do not believe ICE shares information with them, coordination with ICE is hindered. Additionally, these Border Patrol officials stated that, from their perspective, the lack of information sharing between the agencies resulted in inefficient border security efforts. Similarly, the Border Patrol officials in the Blaine sector reported that the lack of information sharing resulted in inefficiencies as Border Patrol has used its resources to respond to potential cross-border criminals who were ICE agents engaged in undercover investigations. These coordination problems between Border Patrol and ICE have been long-standing and the subject of several studies and reports. We reported in 2005 that the effectiveness of ICE’s antismuggling strategy would depend partly on the clarification of ICE and CBP roles in antismuggling activities. In 2006, the Congressional Research Service reported, after interviewing agents in Los Angeles and San Diego, that ICE and CBP had problems with communications that compromised some smuggling investigations. In both 2005 and 2007, the DHS Office of Inspector General (OIG) reported on the coordination challenges between CBP and ICE, including those challenges between Border Patrol and ICE’s Homeland Security Investigations. The 2005 report concluded that shortfalls in operational coordination and information sharing had fostered an environment of uncertainty and mistrust between CBP and ICE personnel in the field, and instead of collegial interaction, field officials reported competition, and at times, interference. In its 2007 update, the OIG reported improvement, but additional work was necessary to address remaining challenges related to improving intelligence and information sharing, strengthening performance measures, and addressing ongoing relational issues. 
DHS took several actions in response to past findings, but our work for this review showed that ongoing coordination challenges continue to exist between DHS components. For example, CBP and ICE issued an addendum to strengthen the MOU between CBP and ICE, and established an ICE-CBP Coordination Council to ensure, among other things, that component policies and procedures supported the roles and responsibilities outlined in the MOU and were communicated and implemented in the field. DHS concurred with its OIG’s recommendation to establish joint CBP-ICE bodies to oversee the implementation of the MOU’s provisions but did not establish such an oversight body, stating that the establishment of the Coordination Council and other working groups would coordinate interagency efforts. The Coordination Council has since been disbanded, and DHS officials from the Office of Intelligence and Analysis and the Office of Operations Coordination and Planning were unfamiliar with the council and could not provide an explanation for why it was discontinued. DHS continues to lack an entity to oversee the implementation of the MOU because the agency relies on CBP and ICE leaders to hold the field accountable for implementation of established agreements. Additionally, according to DHS’s Office of Intelligence and Analysis officials, components often leave coordination challenges for field leadership to resolve without adequate guidance from headquarters. DHS component field officials, DHS headquarters officials, and the DHS OIG acknowledged that there remains a disconnect between headquarters policy and field implementation that may require DHS-level oversight to correct. For example, Border Patrol and ICE officials in two of the sectors we visited said that DHS action, as a higher authority, could help mitigate different priorities between its components, provide a unifying direction, and quickly address problems.
DHS headquarters officials from several offices agreed, stating that many DHS components do not consistently enforce information-sharing practices contained in interagency agreements, and that field agents are left to resolve coordination challenges without adequate headquarters guidance. According to the OIG official we interviewed, DHS oversight of its interagency MOUs could help promote the “One-DHS” culture. Although DHS has relied on component-level management to ensure that components coordinate information and operations in the field, the long-standing and continuing coordination challenges between ICE and Border Patrol highlight the importance of developing a permanent solution to oversee and address these challenges. We previously reported that federal agencies engaged in collaborative efforts need to create the means to monitor and evaluate their efforts to enable them to identify areas for improvement. DHS oversight of MOU implementation, including evaluating the outstanding challenges and developing planned corrective actions, could better ensure that the MOUs are facilitating coordination as intended, and that components are held accountable for adherence to provisions within established agreements. Border Patrol, ICE, Forest Service, and DEA officials reported ongoing coordination challenges in the four sectors we visited, despite DHS action to improve coordination between these federal agencies that have overlapping missions or operational boundaries. Additional DHS action to provide oversight and enforce compliance with established agreements across federal agencies could help further QHSR priorities of unity of effort and integrated operations in conducting interdiction and investigation on northern borderlands.
Border Patrol and Forest Service officials we interviewed in the Blaine and Spokane sectors reported efforts to improve coordination among these agencies but said that sharing information on border security intelligence and operations remained problematic. An interagency agreement coordinating the missions of these agencies was established in a 2006 MOU among DHS, DOI, and USDA. The MOU outlines the respective roles and responsibilities of each agency when operating on federal lands, assigning Border Patrol the role of detecting and apprehending illegal cross-border activity, and the Forest Service the role of apprehending and investigating persons conducting illegal activities on federal lands. The agreement also requires the agencies and their component offices—including Border Patrol and Forest Service—to coordinate efforts in a number of areas, including sharing information about threats and operations. In the Blaine sector, Forest Service officials reported that coordination was lacking due to limited interaction and inattention by leadership. Although the interagency agreement establishes that the agencies are to prioritize coordination, little coordination was taking place and there was not an established relationship between the agencies in the Blaine sector, according to the officials we interviewed. Border Patrol disagreed and stated that it had assigned a Public Lands Liaison to coordinate operations on federal lands, but Forest Service officials said that contact had been minimal, due in part to turnover. While Forest Service officials were hopeful that coordination could occur through the Border Lands Management Task Force, they were not receiving information about the location of Border Patrol assets or operations on Forest Service lands. In the Spokane sector, officials reported that coordination was strained by disagreements on roles and responsibilities when operating on Forest Service land.
For example, Forest Service law enforcement officials stated that surveillance, patrol, and investigation of potential cross-border criminal activity on federal borderlands are an inherent part of Forest Service’s mission to safeguard natural resources and public safety. However, Border Patrol officials stated that Forest Service actions to use sensors and other resources to monitor cross-border activity have led to duplication and overlap with Border Patrol’s mission and operations at the border. While the Spokane sector Border Patrol and the Northern Region Forest Service Office issued a local MOU in 2008 that more specifically defined roles and responsibilities between the two agencies, agency officials in the Spokane sector continued to disagree on the division of roles and responsibilities when cross-border illegal activity moves past the border and onto Forest Service land. Another local-level MOU was issued in 2009 to more specifically address roles and responsibilities between the agencies on Forest Service lands patrolled by three Border Patrol stations, but challenges continue in coordinating border security intelligence and operations between these agencies. The coordination challenges between Forest Service and Border Patrol resulted in a lack of information sharing, inefficiencies, and, within the Spokane sector, an overall breakdown of coordination efforts, according to Forest Service officials operating within the Blaine and Spokane sectors and Border Patrol officials operating within the Spokane sector. According to Forest Service law enforcement officials operating within the Blaine sector, Border Patrol does not share information in a timely manner due to concerns that Forest Service cannot be trusted with certain types of information. Border Patrol officials in the Spokane sector cited similar concerns, and said that Forest Service leadership is reluctant to share information with Border Patrol.
However, Forest Service officials operating in the Spokane sector disagreed, stating that they are willing to share information with Border Patrol. Officials from both agencies agreed that these challenges may result in inefficiencies and a breakdown of coordination, ultimately leading to the risk of a border that is less secure. DHS action was needed to resolve these coordination challenges between the agencies, according to Border Patrol officials in the Spokane sector. Within the Spokane sector, Forest Service officials stated that DHS headquarters action has not resulted in cooperation or substantive change in field locations, and we recently reported that action was needed by DHS and USDA to ensure that established agreements were proactively implemented to prevent coordination challenges. Specifically, we recommended that, in part, DHS and USDA take the necessary action to ensure that personnel at all levels of each agency conduct early and continued consultations to implement provisions of the 2006 MOU, including determining agencies’ information needs for intelligence. Both DHS and USDA agreed with our recommendation and, while CBP stated that it would issue a memorandum to all Border Patrol sectors emphasizing the importance of its partnerships, as of October 2010, additional steps to fully address this recommendation had not yet been taken. ICE and DEA faced ongoing challenges coordinating northern border security investigations, according to ICE and DEA officials in all four sectors. Agreements coordinating the investigative missions of these agencies include a 1994 MOU between the U.S. Customs Service—a DHS legacy agency—and DEA. ICE and DEA updated this MOU in the June 2009 interagency cooperation agreement to reflect the current organization under DHS, and also to harness both agencies’ expertise and avoid operational conflicts in order to most effectively dismantle and disrupt trafficking organizations.
Although the interagency agreement establishes that the agencies are to improve information sharing and deconfliction efforts, the MOU had not yet resulted in improved coordination between the agencies 1 year after the updated agreement was in place, according to ICE officials operating in three sectors we visited, and DEA officials operating in all four sectors. The coordination challenges between ICE and DEA resulted in a lack of information sharing and potential inefficiencies, putting investigations at risk of being delayed or hindered, according to ICE and DEA officials operating within the four sectors we visited. DEA officials we interviewed in all four sectors attributed the coordination challenges with ICE to different interpretations of the MOU provisions related to jurisdiction for drug investigations. Although DEA has full jurisdiction for domestic and foreign drug investigations, as a result of separate interagency agreements DEA takes the lead on drug investigations originating between the ports of entry while ICE takes the lead on drug investigations originating at the ports of entry. These geographic distinctions can be confusing, according to a DEA official operating in the Blaine sector. By contrast, ICE officials operating in the four sectors we visited did not have concerns about differing interpretations of the roles and responsibilities laid out in the agreement. Specifically, ICE officials in the Spokane sector stated that both agencies are investigative, so they interpret the roles and responsibilities similarly. ICE officials we interviewed in all four sectors attributed the coordination challenges with DEA to separate DEA agreements with Border Patrol and Canada’s RCMP that, from ICE’s perspective, exclude ICE from exchanges of intelligence information and operations that could benefit ICE investigations.
According to ICE officials, under the DEA agreement with RCMP, ICE is excluded from efforts to coordinate international drug smuggling investigations. Similarly, ICE officials said that per the DEA agreement with Border Patrol, Border Patrol provides DEA instead of ICE the right of first refusal in referrals of drug seizures. ICE officials stated that this MOU creates a strain on ICE’s relationships with Border Patrol and DEA, and also causes confusion that can hinder investigations and create inefficiencies. DEA headquarters officials disagreed that ICE is excluded, as ICE has access to mechanisms DEA uses to share information with law enforcement partners, such as the Special Operations Division and the Organized Crime Drug Enforcement Task Force. ICE and DEA officials operating within three sectors also attributed the ongoing coordination challenges between the agencies to overlapping missions and competition for leading investigations, as both agencies have a mission to disrupt and dismantle criminal organizations that smuggle drugs as well as other contraband across the border. A DEA official operating within the Swanton sector stated that mission overlap creates too much competition for the same work, as well as for the credit for that work. DEA officials operating within the Spokane sector agreed, stating that competition is an inherent problem when multiple investigative agencies exist because their budgets are tied to the seizure and investigation statistics they generate. Additional DHS action is needed to resolve coordination challenges between ICE and DEA, according to ICE officials we interviewed in all four sectors and DEA officials in two sectors we visited, and as recommended in our previous report. DEA officials operating within the Spokane sector said that oversight of established agreements was necessary to ensure that they are implemented and work to facilitate coordination.
According to DEA officials in the Spokane sector, this oversight should consist of an overarching authoritative body—with no ties, affiliations, or bias toward a particular agency or political party—tasked with reviewing established MOUs between law enforcement entities to determine when coordination is being facilitated or hindered. We previously reported that federal agencies can enhance and sustain their collaborative efforts by, in part, developing mechanisms to monitor their results. In addition, we recommended in March 2009 that DOJ and DHS take action to provide oversight of established interagency agreements. We also recommended that the agencies develop processes to periodically monitor implementation of the agreements and make any needed adjustments. DOJ concurred with the recommendations, but DHS did not concur with the recommendation to monitor implementation of the agreements, and to date this recommendation remains unaddressed. DEA and ICE signed a revised MOU in June 2009, but according to our work conducted in August 2010, the MOU had not yet resulted in resolution of coordination challenges in the four sectors we visited. DEA officials at headquarters commented that, while the 2009 agreement is entering its evaluation period, not enough time has elapsed since the signing of the agreement to assess its effectiveness. The challenges we have identified with northern border coordination between DHS and its federal partners underscore the importance of implementing past recommendations to ensure oversight that reinforces accountability when establishing a partnership through a written agreement. DHS reported limited progress in securing the northern border, but processes Border Patrol used to assess border security and resource requirements did not include the extent that northern border partnerships and resources were available or used to address border security vulnerabilities.
DHS action to develop guidance and policy for including partner contributions in these processes could provide the agency and Congress with more complete information in making funding and resource allocation decisions. Few northern border miles had reached an acceptable level of security as of the end of fiscal year 2010, according to Border Patrol security assessments. CBP measures border security between the ports of entry by the number of miles under effective control of Border Patrol. DHS reports these results in its annual performance report to Congress and the public, based on border security assessments conducted by each Border Patrol sector that are included in each sector’s ORBBP. Our review of these reports for 2010 showed that for the northern border overall, 32 of the nearly 4,000 border miles had reached an acceptable level of control, with 9 of these miles included in the four sectors we visited. The remaining miles were assessed at levels that Border Patrol reported are not acceptable end states. These border miles are defined as vulnerable to exploitation due to issues related to accessibility and resource availability and, as a result, there is a high degree of reliance on law enforcement support from outside the border zone. CBP also does not have the ability to detect illegal activity across most of the northern border. Because most areas of the northern border are remote and inaccessible by traditional patrol methods, CBP’s Northern Border Strategy states that one of the goals of Border Patrol is to reach full situational awareness along the northern border. This strategy defines full situational awareness as an area where the probability of detection is high; however, the ability to respond is defined by accessibility to the area or availability of resources, or both. At this level, CBP states that partnerships with other law enforcement agencies play an important role in resolving the illegal entries.
Our review of sector ORBBP documents for fiscal year 2010 showed that for the northern border overall, about 1,007 of the nearly 4,000 northern border miles had reached this definition of full situational awareness, with 398 of these miles included in the four sectors we visited. CBP reported that the number of miles under control is expected to increase as Border Patrol continues to put in place additional resources based on risk, threat potential, and operational need. CBP had planned to implement its northern border strategy and reinforce overall security of the northern border over the next 4 years with a range of initiatives involving increased staffing, cutting-edge technology, increased infrastructure, and enhanced interagency partnerships. At the time of our review, however, CBP had not yet issued an implementation plan because it was unclear how CBP’s strategy for the northern border may change in response to the recently issued QHSR and a departmentwide strategy for the northern border, scheduled for issuance later this year. Border Patrol’s National Strategy states that, in part, reliance on border fencing and personnel help secure control over the southern border, while on the northern border, partnerships and the sharing of intelligence are critical to success. While CBP’s Northern Border Strategy states that these partnerships are crucial to securing the northern border, our review of the 2010 ORBBPs for the Blaine, Spokane, Detroit, and Swanton sectors showed that these sectors had identified various levels of additional personnel, technology, and infrastructure necessary to increase border control, but did not identify the extent that partnerships and their resources were available to address border vulnerabilities. Under Operation Stonegarden, DHS provided approximately $11.2 million in 3-year grants to northern border state, local, or tribal governments to augment Border Patrol staff and resources on the border in fiscal year 2010. 
However, the extent that these additional staff and resources addressed border security vulnerabilities in the four sectors we visited was not reflected in the ORBBPs. The IBET for the Spokane sector maintained a centralized listing of resources available among its partners, including cameras, satellite phones, and ground sensors, that Border Patrol also requested in its ORBBP. However, Border Patrol did not reflect the availability of these partner resources to address border security vulnerabilities in the sector. One reason partner contributions are not identified and assessed is that Border Patrol guidance does not require partner resources to be incorporated into Border Patrol security assessments or into documents that inform the resource planning process. The ORBBPs state the importance of partnerships to border security, and list federal, state, local, and international partners in the sector. However, partner resources that were available to address border security gaps in each sector were not identified despite DHS investment in these efforts. We previously reported that federal agencies must identify ways to deliver results more efficiently and in a way that is consistent with multiple demands and limited resources. To do this, we reported that agencies should, in part, identify the personnel, technology, and infrastructure resources available among the collaborating agencies to help identify opportunities to address different levels of resources by leveraging across partners, thus obtaining benefits that would not be available if they were working separately. CBP officials acknowledged the need to link partnership results to border security goals, but said that the methodologies for border security assessments and resource requirements documented in the ORBBP were designed to be Border Patrol–centric. 
As such, the processes in place reflect the extent that Border Patrol, exclusive of its partners, had sufficient resources to detect, apprehend, and achieve an effective law enforcement resolution. One reason these officials cited for excluding partner contributions is that the ORBBP is used as a basis for sector budget requests. Therefore, including partner resources could disadvantage individual sectors, the Office of Border Patrol, and CBP in the DHS resource allocation process. However, Border Patrol may still benefit from identifying partner resources separately from its budget requests so that it has a better understanding of the resources available to it to help secure the border. Another reason cited by officials for excluding partner resources is that these partners are not under the control of Border Patrol, and therefore cannot be relied upon to sustain the border security mission. As such, Border Patrol requires a set of resources that are not at risk of being deployed away from the border if partners have a higher priority or competing mission. Although these partners’ resources may have competing missions, they are intended to supplement, not sustain, the border security mission. However, identifying how these partner resources and contributions could supplement Border Patrol’s efforts on the border could better position CBP to target coordination efforts and make more efficient resource allocation decisions. Moreover, including partner resources in its assessments could better demonstrate the extent to which its coordination efforts can address border security gaps. The Standards for Internal Control in the Federal Government state that periodic comparison and accountability for resources should be made so that agencies can provide reasonable assurance that their objectives are being achieved through the effective and efficient stewardship of public resources. 
Additionally, we previously reported that DHS has not fully responded to a legislative reporting requirement to link its initiatives— including partnerships—to existing vulnerabilities to inform decisions on federal resource allocations. The Implementing Recommendations of the 9/11 Commission Act of 2007 required the Secretary of Homeland Security to submit a report to Congress that addressed the vulnerabilities along the northern border, and provide recommendations and resources that would be required to address them. Our review of the resulting DHS report submitted to Congress in November 2008 showed a listing of threats, vulnerabilities, and DHS initiatives to address them, but information was not provided to link this information and determine the resources needed to address the remaining security gaps. Our recommendation to DHS to provide more specific information in these areas in future reports to Congress remains unaddressed. Border Patrol and CBP initiatives to update their resource planning methodology and performance measures provide an opportunity to link the benefits of partnerships to border security. Border Patrol is developing a new methodology for its resource planning documents that could be used to identify the capacity of partners to fill border security gaps. Defined as an Analysis of Alternatives, this methodology calls for field commanders to identify alternatives for achieving border control—other than the resources requested in their resource planning documents. According to DHS’s Office of Policy, this kind of analysis will directly support efforts at the department level to bring strategy and resource allocation into closer alignment, including analysis of capability requirements derived from the strategy. 
As Border Patrol continues to refine the guidance and policy supporting this effort, considering the extent that this process, among others, could be used to assess available partner resources and potentially leverage such resources to fill Border Patrol resource gaps could better position CBP to target coordination efforts and make more efficient resource decisions. Moreover, current measures of whether partnerships have a positive effect on border security goals focus on the staff and resources CBP provides to partnerships, rather than on how the benefits of partnerships address border security gaps. CBP officials acknowledged the limitations of the measures and plan to enhance them pending changes that may be forthcoming in CBP’s larger effort to realign measures under a departmentwide strategy for the northern border. Securing the nation’s vast and diverse northern border is a daunting task. The nature, size, and complexity of the border highlight the importance of international, federal, state, local, and tribal entities working together to enhance security. Northern border partners reported benefiting from collaboration through interagency forums and joint operations, which have enhanced coordination by facilitating the sharing of intelligence and leveraging of resources between the northern border partners. However, DHS oversight of the forums it sponsors could help address concerns identified by multiple partners and working groups that a lack of attention may result in duplication of efforts across the northern border and inefficient use of partners and their limited resources. 
Additionally, the challenges we have identified with northern border coordination between DHS components and among federal partners emphasize the need to establish oversight of MOU compliance between Border Patrol and ICE, and underscore the importance of implementing past recommendations to ensure oversight that reinforces accountability when establishing a partnership through a written agreement. We have previously recommended that ICE and DEA, as well as Border Patrol and Forest Service, take the necessary steps to uphold implementation of their MOUs. As a result of our work, we believe it is important for these agencies to follow through with the recommendations so as to achieve an effective and coordinated approach to address border security issues. While DHS has planning efforts underway to streamline northern border security efforts internally and across its northern border partners, until such plans are implemented, coordination challenges could be preventing partners from receiving vital information needed to effectively secure the border. Finally, by excluding partner resources available to address border security gaps in its assessment of northern border needs, DHS may be missing opportunities to target coordination efforts and make more efficient resource decisions. Integrating partner resources in the DHS resource planning process, whether through Border Patrol’s Analysis of Alternatives or other means, may provide a more complete picture of border security status and resource requirements on the northern border. Developing policy and guidance to assess the integrated capacity of all northern border partners could also assist DHS in achieving the vision in its QHSR to establish a strategic framework for homeland security that guides all northern border partners to a common end. 
To help ensure DHS is maximizing the benefits of its coordination efforts with northern border partners through interagency forums, documented agreements, and its resource planning process, we recommend that the Secretary of Homeland Security take the following three actions:

- Provide DHS-level guidance and oversight for interagency forums established or sponsored by its components to ensure that the missions and locations are not duplicative and to consider the downstream burden on northern border partners.

- Provide regular DHS-level oversight of Border Patrol and ICE compliance with the provisions of the interagency MOU, including evaluation of outstanding challenges and planned corrective actions.

- Direct CBP to develop policy and guidance necessary to identify, assess, and integrate the available partner resources in northern border sector security assessments and resource planning documents.

We provided a draft of this report to DHS, USDA, DOD, DOI, and DOJ for their review and comment. In commenting on our draft report, DHS concurred with our recommendations and described actions underway or planned that may directly or indirectly serve to address them. In regard to our first recommendation, DHS stated that the structure of the department precludes using a single headquarters organization to provide DHS-level guidance and oversight for interagency forums established by its components. Instead, DHS said it will review the inventory of interagency forums through its strategic and operational planning efforts to assess efficiency and identify challenges consistent with the forthcoming DHS Northern Border Strategy that will better integrate, coordinate, and achieve northern border management missions. 
Within the context of these higher-level efforts and any subsequent tactical or operational assessments or planning, we encourage DHS to provide the guidance and oversight necessary to ensure that missions and locations of these forums are not duplicative and consider the downstream burden on northern border partners. In regard to our second recommendation that DHS provide oversight of Border Patrol and ICE compliance with the MOU, DHS stated that it will recommend that the ICE-CBP Coordination Council be resumed, and that proper use of the Coordination Council would enable the recommended DHS-level body to review and evaluate both Border Patrol and ICE compliance with the MOU. We note that in the past, the Coordination Council was unable to improve upon the long-standing coordination challenges between Border Patrol and ICE. Thus, to be effective, a resumed Coordination Council may require changes to its previous structure, although determining what those changes should be was beyond the scope of this study. Nevertheless, we encourage DHS headquarters to actively work with the Coordination Council and provide the oversight necessary to address the MOU compliance issues identified in our report. Finally, DHS stated that our third recommendation to develop policy and guidance to identify, assess, and integrate partner resources in northern border security assessments and resource planning would be resolved through formulation of new policy and guidance resulting from three foundational documents to be issued later this year; namely, the departmentwide strategy for the northern border, the Northern Border Strategy Implementation Plan, and the Shared Vision for Perimeter Security and Competitiveness between the United States and Canada. 
We encourage DHS to ensure that within the context of these higher-level strategic efforts and any subsequent tactical or operational assessments or planning, CBP provide consistent policy and guidance on integrating partner resources to help ensure that DHS is maximizing the benefits of its coordination efforts. In commenting on our draft report, USDA agreed with our recommendations and stated that it will continue to work closely with DHS to support northern border efforts and take the actions necessary to make certain personnel at all levels of the agency implement provisions of the interagency MOU. DOD, DOI, and DOJ did not have formal comments on our draft report. DHS, DOD, and DOJ provided technical comments, and we obtained technical comments on selected text from state and Canadian officials. We incorporated these technical comments as appropriate. Appendix II contains written comments from DHS. Appendix III contains written comments from USDA. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security, the Attorney General, and interested congressional committees as appropriate. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. For the purposes of this review, we interviewed Department of Homeland Security (DHS) headquarters officials with knowledge of DHS coordination efforts and also interviewed federal, state, local, tribal, and Canadian field-level officials in the four sectors we visited—Blaine, Spokane, Detroit, and Swanton—with a nexus to security efforts along the northern border to obtain their perspective on DHS coordination efforts. 
For information related to the two interagency forums in our review—the Integrated Border Enforcement Team (IBET) and the Border Enforcement Security Task Force (BEST)—as shown in table 4 below, we interviewed 18 U.S. federal and Canadian law enforcement officials participating in the IBET or the BEST, or both, across the four sectors. To obtain information on the northern border joint operations, we interviewed officials in 19 offices who participated in one or more of these operations across the Blaine, Spokane, Detroit, and Swanton sectors. These officials represented 2 Canadian offices, 9 U.S. federal offices, 7 state and local offices, and 1 tribal office. See table 5 below. Officials in 30 federal, state, local, and Canadian offices across the four sectors we visited, shown in table 6 below, provided general information on the challenges of interagency forums. In addition to the contact named above, Cindy Ayers, Assistant Director, and Dawn Locke, analyst-in-charge, managed this assignment. Susan Czachor, Josh Diosomito, and Kelly Liptan made significant contributions to the work. David Alexander assisted with the design and methodology, and Frances Cook provided legal support. Jessica Orr, Robert Robinson, Debbie Sebastian, Neil Asaba, Carolyn Blocker, Lisa Canini, and Richard Eiserman assisted with report preparation. Border Security: Additional Actions Needed to Better Ensure a Coordinated Federal Response to Illegal Activity on Federal Lands. GAO-11-177. Washington, D.C.: November 18, 2010. Information Sharing: Federal Agencies Are Sharing Border and Terrorism Information with Local and Tribal Law Enforcement Agencies, but Additional Efforts Are Needed. GAO-10-41. Washington, D.C.: December 18, 2009. Homeland Security: DHS Has Taken Actions to Strengthen Border Security Programs and Operations, but Challenges Remain. GAO-08-542T. Washington, D.C.: March 6, 2008. 
Homeland Security: Federal Efforts Are Helping to Alleviate Some Challenges Encountered by State and Local Information Fusion Centers. GAO-08-35. Washington, D.C.: October 30, 2007. Border Security: Security Vulnerabilities at Unmanned and Unmonitored U.S. Border Locations. GAO-07-884T. Washington, D.C.: September 27, 2007. Homeland Security: Opportunities Exist to Enhance Collaboration at 24/7 Operations Centers Staffed by Multiple DHS Agencies. GAO-07-89. Washington, D.C.: October 20, 2006. Border Security: Opportunities to Increase Coordination of Air and Marine Assets. GAO-05-543. Washington, D.C.: August 12, 2005. Combating Alien Smuggling: Opportunities Exist to Improve the Federal Response. GAO-05-305. Washington, D.C.: May 27, 2005. Managing for Results: Barriers to Interagency Coordination. GAO/GGD-00-106. Washington, D.C.: March 29, 2000.
The challenges of securing the U.S.-Canadian border involve the coordination of multiple partners. The results of the Department of Homeland Security's (DHS) efforts to integrate border security among its components and across federal, state, local, tribal, and Canadian partners are unclear. GAO was asked to address the extent to which DHS has (1) improved coordination with state, local, tribal, and Canadian partners; (2) progressed in addressing past federal coordination challenges; and (3) progressed in securing the northern border and used coordination efforts to address existing vulnerabilities. GAO reviewed interagency agreements, strategies, and operational documents that address DHS's reported northern border vulnerabilities such as terrorism. GAO visited four Border Patrol sectors, selected based on threat, and interviewed officials from federal, state, local, tribal, and Canadian agencies operating within these sectors. While these results cannot be generalized, they provided insights on border security coordination. According to a majority of selected northern border security partners GAO interviewed, DHS improved northern border security coordination through interagency forums and joint operations. Specifically, interagency forums were beneficial in establishing a common understanding of security, while joint operations helped to achieve an integrated and effective law enforcement response. However, numerous partners cited challenges related to the inability to resource the increasing number of interagency forums and raised concerns that some efforts may be overlapping. While guidance issued by GAO stresses the need for a process to ensure that resources are used effectively and efficiently, DHS does not oversee the interagency forums established by its components. DHS oversight could help prevent possible duplication of efforts and conserve resources. 
DHS component officials reported that federal agency coordination to secure the northern border was improved, but partners in all four sectors GAO visited cited ongoing challenges sharing information and resources for daily border security related to operations and investigations. DHS has established and updated interagency agreements, but oversight by management at the component and local level has not ensured consistent compliance with provisions of these agreements, such as those related to information sharing, in areas GAO visited. As a result, according to DHS officials, field agents have been left to resolve coordination challenges. Ongoing DHS-level oversight and attention to enforcing accountability of established agreements could help address long-standing coordination challenges between DHS components, and further the DHS strategic vision for a coordinated homeland security enterprise. Border Patrol--a component of DHS's U.S. Customs and Border Protection--reported that 32 of the nearly 4,000 northern border miles in fiscal year 2010 had reached an acceptable level of security and that there is a high reliance on law enforcement support from outside the border zone. However, the extent of partner law enforcement resources available to address border security vulnerabilities is not reflected in Border Patrol's processes for assessing border security and resource requirements. GAO previously reported that federal agencies should identify resources among collaborating agencies to deliver results more efficiently and that DHS had not fully responded to a legislative requirement to link initiatives--including partnerships--to existing border vulnerabilities to inform federal resource allocation decisions. 
Development of policy and guidance to integrate available partner resources in northern border security assessments and resource planning documents could provide the agency and Congress with more complete information necessary to make resource allocation decisions in mitigating existing border vulnerabilities. GAO is recommending that DHS enhance oversight to ensure efficient use of interagency forums and compliance with interagency agreements; and develop guidance to integrate partner resources to mitigate northern border vulnerabilities. DHS concurred with our recommendations.
Federal power marketing administrations (PMAs) are part of the Department of Energy (DOE). The five PMAs sell electric power within 34 states—to all states except those in the Northeast and upper Midwest. They sold about 3 percent of the nation’s electric power output in 1994. Almost all of it is hydroelectric power generated by multiple-purpose dams built and operated by other federal agencies. The Chairman, Subcommittee on Water and Power Resources, House Committee on Resources, and the Ranking Minority Member, House Committee on Resources, asked us to review several issues relating to three of these PMAs—Southeastern, Southwestern, and Western. The primary focus of our review was to determine whether all power-related costs incurred through September 30, 1995, have been recovered through the PMAs’ electricity rates; whether the financing for power-related capital projects is subsidized by the federal government and, if so, to what extent; and how PMAs differ from nonfederal utilities and the impact of these differences on power production costs. In addition, we were asked to provide information on Federal Energy Regulatory Commission (FERC) oversight of the PMAs. Nationwide, there are five PMAs—the three on which this report is focused, plus the Alaska Power Administration and the Bonneville Power Administration. Established between 1937 and 1977, PMAs sell electricity primarily on a wholesale basis with the legislated goal of encouraging widespread use of power at the lowest possible cost to consumers consistent with sound business principles. By law, they are required to give priority in the sale of federal power to public power entities, such as public utility districts, municipalities, and customer-owned cooperatives. These customers are referred to as “preference customers.” PMAs helped make electricity available for the first time to many consumers who lived in rural areas. 
PMAs generally control and operate power transmission facilities, but do not control or operate the facilities that actually generate electric power. These power generating facilities are controlled by other federal agencies—most often by the Department of the Interior’s Bureau of Reclamation (Bureau) or the Department of the Army Corps of Engineers (Corps). The dams at which the power generating facilities are located also serve a variety of nonpower purposes, including flood control, irrigation, navigation, and recreation. The project must be operated in a way that balances all of these uses—and, in many instances, power is not the primary use. Responsibility for operating the facilities to serve all of these multiple functions rests with the Corps and the Bureau, which are called the “operating agencies.” Unlike most other federal agencies, PMAs are required by law to recover through rates funds appropriated for power-related costs. Funding for the three PMAs is generally through the annual appropriations process. The PMAs receive annual appropriations and make both capital expenditures, such as for PMA-controlled transmission facilities, and operating and maintenance (O&M) expenditures. PMAs generally pay for these expenditures by requesting Treasury to cut checks on their respective appropriation accounts. The operating agencies also receive appropriations. The operating agencies allocate the portions of those appropriations that are used to fund power-related capital and O&M expenses to the PMAs for recovery from power rates. The allocated portion includes all capital costs and O&M expenses that are solely related to the generation of power. In addition, a portion of the operating agency’s “joint costs” are allocated to the PMAs. These are capital costs and O&M expenses related not only to power production but to the dam’s other purposes. 
The operating agencies allocate the amount of joint costs that are power-related by applying a percentage established for each multiple-purpose project. PMAs recover these appropriations through revenues generated from power sales. The Reclamation Project Act of 1939 and the Flood Control Act of 1944 require PMAs to set power rates at levels that are forecasted as adequate to recover costs. The Reclamation Project Act of 1939 requires that rates for electric power be adequate to recover the power-related share of construction costs, to include interest charged at a rate of not less than 3 percent. The act also requires recovery of annual O&M costs and “other fixed charges as the Secretary deems proper.” The Flood Control Act of 1944 requires that rates for electric power be adequate to recover the cost of “producing and transmitting such electric energy.” Power-related capital costs are to be recovered “over a reasonable period of years.” These legislative provisions have been implemented by the Department of Energy in DOE Order RA 6120.2 (September 20, 1979, as revised on October 1, 1983). This order specifies that the total revenues of any project administered by a PMA must be sufficient to recover O&M costs in the year incurred, to recover federal investment in generation and transmission facilities within a 50-year period, and to recover capital costs allocated to completed Bureau of Reclamation irrigation facilities that are beyond the capability of irrigators to repay (also called “irrigation assistance”). Under the order, capital investments have a longer recovery period than O&M costs. PMAs are generally required to recover, without interest, appropriations used to fund O&M costs in the same year that the expenses are incurred. In contrast, the PMAs are required to recover appropriations that fund capital investments (which we refer to as appropriated debt), with interest, over a specified repayment period. 
The recovery period is generally 50 years for assets used to generate power and 35 to 45 years for assets used to transmit power. The order specifies that the adequacy of power revenues be tested by the preparation of an annual study, known as a “power system repayment study,” which is submitted by the PMAs for approval to the Secretary of Energy. This study forecasts power-related capital and O&M costs that the PMAs will be required to recover in the future. It also forecasts revenues expected to be forthcoming under current rates. If the study projects that revenues will not be adequate to recover power system costs over the remainder of the repayment period, rates may be increased or other cost recovery actions may be taken. During the year, PMAs generate revenues based on the rates they have established in accordance with the power repayment studies. The three PMAs bill customers for power sales. Southeastern’s and Southwestern’s customers generally make payments directly to a U.S. Department of Treasury “lock box” at a bank. The bank processes the account payments and transfers the cash to Treasury’s General Fund, where it is categorized as miscellaneous receipts. To finance their operations, Southeastern and Southwestern request Treasury to cut checks on their respective appropriations accounts. Western and its customers deposit collections directly to Treasury’s “lock box” or a Federal Reserve bank, and then the receipts are posted to various Treasury accounts. Western either seeks annual appropriations from these accounts to finance its operations, or for certain accounts has the legal authority to spend funds without further appropriations. 
Those Treasury accounts include the Reclamation Fund; Colorado River Dam Fund; Boulder Canyon Project Fund; Falcon and Amistad Operating and Maintenance Fund; Central Valley Project Restoration Fund; Lower Colorado River Basin Development Fund; Upper Colorado River Basin Fund; and Colorado River Basins Power Marketing Fund. In this report, we refer to the recovery from revenues of power-related operating and maintenance appropriations and capital construction costs as a “repayment” or “payment” to Treasury, even though in most cases the PMAs do not write a check or otherwise transfer funds to Treasury. Ideally, over the course of a year, collections received by Treasury will offset, or “repay,” amounts appropriated to the PMAs and operating agencies for O&M expenses, as well as an amortized amount of capital construction costs. The PMAs, pursuant to the DOE Order, monitor expenses and revenues to ensure that power rates are sufficient to generate revenue to recover expenses. The DOE Order prescribes the sequence in which PMAs are to offset expenses with revenues as follows: (1) operations and maintenance, (2) purchased and exchanged power, (3) transmission services, and (4) interest. The remaining revenues are to be applied to the balance due on any payments of annual expenses that have been deferred (these are called “deferred payments,” which the Order requires be repaid with interest) and then toward the repayment of capital investments. The Order also covers other subjects, including priority of capital cost repayment, interest rate calculation, and other PMA ratemaking and accounting criteria. Collectively, Southeastern, Southwestern, and Western Area Power Administrations market power in 30 states. (See figure 1.1.) In fiscal year 1995, they had total power sales of almost $1 billion. The power they sell is produced at 102 power plants built and run primarily by the Corps of Engineers or the Bureau of Reclamation. 
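The offset sequence prescribed by the DOE Order, described above, can be illustrated with a short sketch. The function name and all dollar figures below are hypothetical illustrations, not the PMAs' actual accounting; only the priority order of the categories follows the Order as described in this chapter:

```python
# Sketch of the DOE Order RA 6120.2 revenue-application sequence:
# revenues offset (1) O&M, (2) purchased and exchanged power,
# (3) transmission services, and (4) interest, in that order; the
# remainder goes to deferred payments, then to capital investments.
# All dollar figures are hypothetical.

def apply_revenues(revenue, expenses, deferred_balance, capital_balance):
    """Return (deferred paid, capital repaid) after offsetting expenses."""
    order = ["operations_and_maintenance",
             "purchased_and_exchanged_power",
             "transmission_services",
             "interest"]
    remaining = revenue
    for category in order:
        paid = min(remaining, expenses[category])
        remaining -= paid
    # Remaining revenue is applied first to any deferred payments,
    # then toward repayment of capital investments.
    deferred_paid = min(remaining, deferred_balance)
    remaining -= deferred_paid
    capital_paid = min(remaining, capital_balance)
    return deferred_paid, capital_paid

expenses = {"operations_and_maintenance": 40.0,   # $ millions, hypothetical
            "purchased_and_exchanged_power": 25.0,
            "transmission_services": 10.0,
            "interest": 15.0}
deferred, capital = apply_revenues(120.0, expenses,
                                   deferred_balance=5.0,
                                   capital_balance=500.0)
print(deferred, capital)  # 5.0 25.0
```

Note that in this ordering, a year with low revenues exhausts itself on annual expenses before any capital is repaid, which is consistent with the Order's requirement that O&M be recovered in the year incurred while capital is amortized over the repayment period.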
The three PMAs differ substantially in size and revenue. (See table 1.1.) Western is the largest, accounting for more than four times the revenue of either Southeastern or Southwestern. Southwestern and Western have their own transmission facilities, while Southeastern relies entirely on the transmission services of other utilities. Collectively, the three PMAs are responsible for repaying about $5.4 billion of appropriated debt. (See table 1.2.) For 1995, the weighted average interest rate on this outstanding debt was 4.9 percent. (See chapter 3 for a more detailed discussion of appropriated debt balances and weighted average interest rates.) Additional specific information about each PMA follows. Southeastern. The Southeastern Power Administration was created in 1950 to market federal power on a wholesale basis. The 23 hydroelectric power plants from which Southeastern markets power are all operated by the Corps. About half of the plants (with more than 60 percent of the generating capacity) have been added since 1960. In 1995, Southeastern marketed power to 296 customers. In all, it sold about 6.8 billion kilowatthours (kWh) of energy. The percentage of cost allocated to power by the Corps averages about 69 percent and ranges by facility from about 45 percent to about 81 percent. Because it has no transmission lines of its own, it has no transmission-related investment costs to recover. Southwestern. The Southwestern Power Administration was created in 1943. The 24 hydroelectric power plants from which Southwestern markets wholesale federal power are all operated by the Corps. Slightly less than two-thirds of the plants (and 56 percent of the capacity) have been added since 1960. In 1995, Southwestern marketed power to 95 customers, selling about 7.7 billion kWh of energy. The percentage of cost allocated to power by the Corps averages about 35 percent and ranges by facility from about 21 percent to about 68 percent. 
Southwestern’s investment in transmission facilities as of September 30, 1995, was about $126 million. Western. The Western Area Power Administration was created in 1977. The establishing legislation transferred power marketing responsibilities and transmission assets previously managed by the Bureau of Reclamation to Western. Western markets power, on a wholesale basis, from 55 hydroelectric power plants. The Bureau operates 45 plants, the Corps operates 6, and the remaining 4 are operated by three other organizations. Western also markets the federal government’s share of electricity generated by the coal-fired Navajo Generating Station in Arizona. In 1995, Western marketed power to 546 customers, selling about 32.8 billion kWh of energy. The percentage of cost allocated to power by the operating agencies for three large projects that Western is responsible for averaged about 50 percent. These three projects accounted for about 83 percent of Western’s 1995 revenues. The individual cost allocations for the three projects were 21 percent, 46 percent, and 84 percent. Western’s investment in transmission facilities as of September 30, 1995, was about $2.1 billion. Each PMA is led by an administrator, who is appointed by the Secretary of Energy. The administrator is authorized to make decisions regarding PMA operations, subject to the supervision and direction of the Secretary. DOE oversight includes approving PMA budgets as part of DOE’s annual federal budget process, establishing each PMA’s personnel limit, and giving interim approval to rate adjustments that the PMA recommends. The PMA financial officers typically participate in the determination of rates. The final approval of PMA rates is the responsibility of FERC. Appendix VI discusses FERC oversight in detail. The Department of Energy’s Office of Inspector General has programmatic oversight responsibility for the PMAs, as well as oversight of the PMAs’ financial accountability.
DOE Order RA 6120.2 calls for the PMAs to prepare annual reports containing audited financial statements. The Inspector General retains Independent Public Accountants to perform annual audits of these financial statements. Increasing competition in the wholesale electricity market could have a major impact on the PMAs. Historically, investor-owned utilities (IOUs) and other electricity providers have operated as regulated monopolies. IOUs typically are required to provide electric service to all customers within their power service areas in exchange for exclusive service territories. To serve customers, utilities incur costs for building new generating plants and operating the power system. Through electricity rate charges, IOUs generally recover all costs incurred plus a regulated rate of return. Several key laws have resulted in an increasingly competitive electricity market. The Public Utilities Regulatory Policies Act of 1978 (PURPA) facilitated the creation of small (less than 80 megawatts of capacity) electricity generators that were exempt from many federal and state regulations. Called “nonutility generators” or “independent power producers” (IPPs), these entities typically use new technologies, such as cogenerating plants or natural gas-fired generation units, to generate power. The National Independent Energy Producers estimated that, at the end of 1995, IPPs accounted for about 8 percent of the total generating capacity in the United States. IPPs pose a direct competitive threat to PMAs, IOUs, and other utilities, in part because they can build generation facilities near large industrial or municipal customers and sell power to these customers for a lower rate than the established utility. In addition, recent technological advances have significantly increased the efficiency of natural gas-fired generation units. The growth and increased efficiency of IPPs have placed downward pressure on wholesale electricity rates. 
The Energy Policy Act of 1992 promoted increased competition in the electricity market. The act encouraged additional wholesale suppliers to enter the market and opened the transmission of electricity by allowing wholesale electricity customers, such as municipal distributors, to purchase electricity from any supplier, even if that power must be transmitted over lines owned by another utility—referred to as wheeling of power. Fees are paid to the transmitting utility for use of its system. Under the act’s provisions, FERC can compel a utility to transmit electricity generated by another utility into its service area for resale. More recently, FERC has issued a final rule implementing this provision of the act. DOE has directed the PMAs to comply with the intent of the act and FERC’s rule. According to Western and Southwestern, they have always operated with a policy of open access to their transmission systems on a first-come, first-served basis as capacity is available. As a result of the increased competition, FERC expects wholesale and retail electricity rates to drop. Increased competition may affect the PMAs’ status as low-cost suppliers. The objectives of this report were to determine (1) whether all power-related costs incurred through September 30, 1995, have been recovered through the PMAs’ electricity rates (chapter 2), (2) whether the financing for power-related capital projects is subsidized by the federal government and, if so, to what extent (chapter 3), and (3) how PMAs differ from nonfederal utilities and the impact of these differences on power production costs (chapter 4). Additional information on our objectives, scope, and methodology is in appendix I. This appendix includes detailed explanations of the calculations of various estimates used in the report, as well as a list of the various organizations and groups we contacted. When appropriate, we used audited numbers from the PMAs’ 1995, 1994, and earlier annual reports.
We conducted our review from January 1996 through September 1996 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the three PMAs, the Department of Energy, and the operating agencies. Only the PMAs provided written comments in time for publication in this report. These comments are evaluated and reprinted in appendix II. Some costs related to producing and marketing federal hydropower are not being recovered through power rates by the three PMAs. We identified five main power-related activities for which costs are not fully recovered. First, the three PMAs do not recover the full costs to the federal government of providing Civil Service Retirement System (CSRS) pensions and postretirement health benefits for current PMA employees and operating agency employees engaged in producing and marketing the power sold by the PMAs. Second, there are construction projects for which the three PMAs might not recover costs from power customers. Third, power-related construction and O&M expenses assigned to incomplete irrigation facilities at Pick-Sloan will likely not be recovered. Fourth, certain costs for environmental mitigation have been legislatively precluded from cost recovery. Finally, Western had unrecovered O&M and interest expenses as of September 30, 1995, related to certain projects. Taking into consideration all these categories of unrecovered costs we identified, we estimated that the amount of unrecovered costs for fiscal year 1995 was about $83 million. We estimated that the cumulative amount of these unrecovered costs, as of September 30, 1995, could be as much as $1.8 billion. It is important to note that the PMAs are generally following applicable laws and regulations regarding cost recovery. 
The Reclamation Project Act of 1939 and the Flood Control Act of 1944, as discussed in chapter 1, generally require the recovery through power rates of the costs of producing and marketing federal hydropower. However, these acts do not specify which costs are to be recovered. The Reclamation Project Act refers to the recovery of “annual operation and maintenance” costs and “other fixed charges as the Secretary deems proper.” The Flood Control Act refers to the recovery of the costs associated with producing and transmitting electricity from federal power projects. Neither act defines its terminology. Recovery of power-related costs has been implemented by the Secretary of Energy through DOE Order RA 6120.2. The DOE order states that all costs of operating and maintaining the power system, as well as the costs of transmission, should be included in rates. The order does not define operating and maintenance costs. Given the flexibility this lack of specific guidance provides, the PMAs have interpreted it to exclude certain costs from rates. To define the full costs associated with producing and marketing federal hydropower, we referred to Office of Management and Budget (OMB) Circular A-25, “User Fees,” which provides guidance for federal agencies to use in setting fees to recover the full costs of providing goods and services. DOE Order RA 6120.2 does not adopt this guidance or otherwise refer to OMB Circular A-25. Nevertheless, the circular does offer a definition of full costs that is useful in identifying power-related costs that the PMAs do not now recover through power rates. OMB Circular A-25 defines full costs as all direct and indirect costs of providing the goods or service. This definition is consistent with that contained in federal accounting standards recommended by the Federal Accounting Standards Advisory Board (FASAB) and adopted by GAO, OMB, and Treasury. The FASAB standards define the full cost of an entity’s output as “. . . 
the sum of (1) the costs of resources consumed by the segment that directly or indirectly contribute to the output, and (2) the costs of identifiable supporting services provided by other responsibility segments within the reporting entity, and by other reporting entities.” Applying the definitions of “full cost” used in OMB Circular A-25 and federal accounting standards indicates that the full cost of the electricity sold by the PMAs would include all direct and indirect costs incurred by the operating agencies to produce the power, the PMAs to market and transmit the power, and any other agencies to support the operating agencies and PMAs. Investor-owned and publicly-owned utilities generally must recover the full cost of producing power through rates. A discussion of relevant private industry accounting and cost recovery practices is in chapter 4. It is important to note that we did not assess the reasonableness of the methodologies used in developing the operating agency cost allocation formulas that are established for each project. To more fully assess whether PMA electricity rates include all power-related costs would require an analysis of the reasonableness of these allocations. If the allocation formulas were not reasonable, it could result in a substantial over- or under-allocation of costs by the operating agencies to the PMAs. The three PMAs do not recover the full costs to the federal government of providing postretirement health benefits and CSRS pensions for current PMA employees and operating agency employees engaged in producing and marketing the power sold by the PMAs. The employee and the employing agency both contribute annually toward the costs of the future CSRS pension benefits. Since the employee and agency contributions toward CSRS pensions are less than the full cost of providing the pension benefits, the federal government must, in effect, make up the funding shortfall. 
In addition, neither the agency nor the employee pays the federal government’s portion of postretirement health benefits, which will eventually be paid by the general fund of the Treasury. For 1995 alone, these unrecovered costs for the three PMAs were an estimated $16.4 million. The cumulative unrecovered CSRS pension and postretirement health benefit costs for the three PMAs totaled an estimated $436 million as of September 30, 1995. According to Office of Personnel Management (OPM) officials, pensions for employees covered by the Federal Employees Retirement System (FERS) are fully funded each year and cumulatively, so there are no relevant unrecovered costs. See appendix I for a discussion of our methodology for computing unrecovered pension and postretirement benefit costs. As with all other federal agencies, the full cost of CSRS pension benefits is not paid by the PMAs or the operating agencies. As required, CSRS employees and the agency each pay a fixed percentage—7 percent—of the employee’s salary to offset future pension costs. However, this combined contribution does not cover the full cost of the employee’s future pension benefits, which amounted to more than 25 percent of salary as of September 30, 1995. Thus, the annual funding shortfall is more than 11 percent of every CSRS employee’s salary. The annual funding shortfall associated with pension benefits will be eliminated over time as CSRS employees leave the government and are replaced with FERS employees, provided that FERS pension benefits remain fully funded annually. The full cost of the federal government’s portion of postretirement health benefits (for both CSRS and FERS employees) is likewise not paid by federal agencies, including the PMAs and operating agencies, during the period of the beneficiaries’ employment. OPM estimates that almost $2,000 per employee would need to have been contributed in fiscal year 1995 to cover each employee’s postretirement health benefit costs earned. 
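The CSRS funding-shortfall arithmetic described above (a combined employee-plus-agency contribution of 14 percent of salary against a full cost of more than 25 percent) can be checked with a short sketch. The contribution rates and the full-cost percentage come from the text; the example salary is hypothetical.

```python
# Check of the CSRS pension funding-shortfall arithmetic. The employee and
# agency each contribute 7 percent of salary, while the full cost of the
# benefit was more than 25 percent of salary as of September 30, 1995.
# The salary figure below is hypothetical.

employee_rate = 0.07
agency_rate = 0.07
full_cost_rate = 0.25  # "more than 25 percent of salary"

# Shortfall is therefore more than 11 percent of every CSRS employee's salary
shortfall_rate = full_cost_rate - (employee_rate + agency_rate)

salary = 50_000  # hypothetical CSRS employee salary
annual_shortfall = salary * shortfall_rate
```

For the hypothetical $50,000 salary, the government absorbs an unfunded cost of at least $5,500 per year for that employee, which is the kind of cost the PMAs do not recover through rates.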
However, no fund has been established to accumulate assets to pay for these future benefits, which will eventually be paid for by the federal government. In contrast to the situation regarding CSRS pensions, the annual funding shortfall associated with postretirement health benefits will not be eliminated as CSRS employees are replaced by FERS employees, since it is an entirely separate benefit program. OMB Circular A-25 specifically includes all funded or unfunded retirement costs not covered by employee contributions in its definition of full cost. In addition, beginning in fiscal year 1997, Statement of Federal Financial Accounting Standards (SFFAS) no. 5 requires federal agencies to record the full cost of pension and postretirement health benefits in annual financial statements. Private sector accounting standards have required similar reporting for pensions beginning in 1987 and postretirement health and other benefits beginning in 1993. IOUs have adopted SFAS no. 87 and SFAS no. 106 for accounting purposes and in most instances for rate-setting. Based on our analysis of the estimated number of full-time equivalent (FTE) positions involved in producing and marketing the power sold by the three PMAs, and information provided by OPM, we estimated that the fiscal year 1995 unrecovered pension and postretirement health benefits totaled about $10.3 million and $6.1 million, respectively. For pensions, about $7.3 million of the unrecovered costs (70 percent) related to personnel involved in producing and marketing the power sold by Western, while about $1.7 million (16 percent) and $1.4 million (14 percent) related to Southeastern and Southwestern, respectively. For postretirement health benefits, about $4.2 million of the unrecovered costs (69 percent) related to Western, while about $1.1 million (18 percent) and $786,000 (13 percent) related to Southeastern and Southwestern, respectively. 
These are the amounts that would have been necessary to fully recover CSRS pensions and postretirement health benefits earned in fiscal year 1995 for current employees of the three PMAs and operating agency employees involved in power production and marketing. These costs, which are not recovered by the PMAs through power rates, are shown in figure 2.1. More detailed information regarding these unrecovered costs can be found in appendix III. Based on our analysis of estimated FTEs associated with producing and marketing power and information provided by OPM, we estimated that the cumulative unrecovered costs for pension and postretirement health benefits as of September 30, 1995, are $355 million and $81 million, respectively. For pensions, about $250 million of the cumulative unrecovered costs (70 percent) related to personnel involved in producing and marketing the power sold by Western, while about $57 million (16 percent) and $48 million (14 percent) related to Southeastern and Southwestern, respectively. For postretirement health benefits, about $56 million of the cumulative unrecovered costs (69 percent) related to Western, while about $14 million (18 percent) and $10 million (13 percent) related to Southeastern and Southwestern, respectively. The cumulative unrecovered costs for current employees are depicted in figure 2.2. More detailed information regarding the cumulative unrecovered costs can be found in appendix III. There are construction costs that the three PMAs might not recover from power customers. In two cases, the Richard B. Russell and Harry S. Truman Projects, costs are not currently being recovered because the power-generating projects have not operated as designed. In two other cases, the Washoe and Mead-Phoenix Projects, the tenuous financial condition of the projects raises questions about whether power costs will be recovered. 
In another case, power-related costs associated with a Western abandoned transmission line incurred before 1969 have not been included in rates and there is a chance that these costs may never be recovered from power customers. To date, about one-half of the cost of constructing the Richard B. Russell Project, which is located on the Savannah River between Georgia and South Carolina, has been excluded from Southeastern’s rates to power customers because the project has never operated as designed. In addition, interest associated with the pumping units is not paid to Treasury each year. Instead, interest—$25.6 million for fiscal year 1995—is capitalized and added to the construction-work-in-progress (CWIP) balance annually. If the project never operates as designed, it is uncertain whether the federal government will be able to fully recover these construction and capitalized interest costs. Positioned between two existing dams, the Russell Project was built virtually exclusively for the generation of hydropower. Ninety-nine percent of the original construction costs and 93 percent of annual O&M expenses associated with the Russell Project are tentatively allocated to power. The project, which enjoyed broad support from electric utilities in North Carolina, South Carolina, and Georgia because of its potential to generate low cost power, was authorized by the Flood Control Act of 1966 and construction began in 1976. The Russell Project has four operational conventional generating units that provide 300,000 kilowatts of capacity, and four nonoperational pumping units intended to provide another 300,000 kilowatts of capacity. The last of the four conventional units came on-line in 1986, and the costs associated with those units went into the customers’ rate base. However, because of litigation over excessive fish kills, the four pumping units, which were completed in 1992, have never been allowed to operate commercially. 
As a result, the costs associated with them have been left in a CWIP account, where interest has been accruing, and have not been included in rates. Southeastern’s financial statements show about $488 million in CWIP as of September 30, 1995, all of which is for construction costs and capitalized interest related to the Russell Project. Of the $488 million related to Russell, an estimated $338 million was for construction costs and $150 million for capitalized interest. Southeastern continues to classify as CWIP the $488 million of costs related to Russell’s pumping units, even though construction on those units was completed in 1992 and associated litigation and environmental testing have been ongoing since May 1988. According to its fiscal year 1995 financial statements, Southeastern follows SFAS no. 71, Accounting for the Effects of Certain Types of Regulation. In situations similar to Russell’s, if the costs were deemed allowable by the regulator, private entities following SFAS no. 71 would transfer the amount from CWIP to a regulatory asset account and begin recovering costs. Under DOE Order RA 6120.2 guidance, however, Southeastern may not be required to recover the costs of Russell’s pumping units through rates as long as the units are nonoperational. Southeastern officials believe that the litigation over the pumping units will be resolved in Southeastern’s favor, the pumping units will be allowed to operate commercially, and the costs associated with them will be recovered through rates. However, if the four pumping units are never allowed to operate commercially, it is unclear whether the costs associated with them—about $488 million as of September 30, 1995—will be recovered through power rates. A similar situation exists at the Harry S. Truman Dam and Reservoir, which is located on the Osage River in Missouri. Originally designed for flood control, the project later had hydropower and recreation added as authorized purposes.
Construction of the Truman Project began in October 1964, and it was placed in service (for flood control and recreation) in November 1979. The in-service dates for hydropower generating units range from January 1980 to September 1982. Total power-related construction costs were about $158 million as of the end of fiscal year 1995. The Truman Project has six generating units, also designed to operate as pumping units, which provide 160,000 kilowatts of capacity. However, because of excessive fish kills by the pumping units, the Truman project has never been operated at its 160,000 kilowatt capacity. Instead, only 53,300 kilowatts have been declared to be in commercial operation, and use of the pump-back facilities has never been commercially implemented. As a result, the Corps determined that it would be inappropriate to recover through Southwestern’s power rates the costs associated with the units that have not been used commercially. The Corps prepared an interim cost allocation for this project that accounted for the fact that the project was not fully operational. Southwestern petitioned FERC to have the cost of the nonproducing portion of the assets deferred from inclusion in power rates until it becomes fully operational. FERC concurred as part of its approval of Southwestern’s 1989 power rates. As a result of FERC’s decision, Southwestern has deferred the inclusion of the estimated amount of the costs associated with the nonoperational units in Southwestern’s reimbursable share of the project’s costs. Thus, $31 million, which consists of capital construction costs and capitalized interest, has been deferred from recovery through power rates, reducing the total to be repaid from $158 million to $127 million. This deferral is accomplished through an adjustment to Southwestern’s appropriated debt each year. 
According to Southwestern officials, the $31 million adjustment is not a permanent elimination of these costs from Southwestern’s appropriated debt; these costs will be included in rates if the Harry S. Truman facility operates as designed. Through 1994, the Corps calculated the interest expense associated with the hydroelectric projects from which Southwestern markets power. Because interest expense was based on the entire power-related construction costs of these projects, Southwestern was paying interest on the $31 million Truman deferral. Beginning in fiscal year 1995, Southwestern and the other PMAs began calculating the power-related interest expense on the operating agency projects. In 1995, Southwestern’s calculation of interest expense for the Truman project excluded interest associated with the $31 million Truman deferral. About $930,000 in interest associated with the Truman deferral was therefore not paid and was excluded from Southwestern’s rates. Southwestern officials have acknowledged the error and said that the 1995 underpayment of interest will be corrected in fiscal year 1996. The Washoe Project (Stampede Dam) is not generating sufficient revenue to cover annual power-related O&M expenses and interest and to repay the federal investment. The 3,650 kilowatt power plant for the Stampede Dam was completed in 1987, and power sales began in 1988. Since the project began producing power, it has generated only enough revenue to cover a portion of its annual O&M expenses and has been unable to make any annual interest payments. In addition, the project has not generated enough revenue to repay any of the project’s appropriated debt. Since 1988, the project has deferred about $3.9 million in O&M and interest expense payments. As of September 30, 1995, the outstanding unpaid federal investment in the project was $8.9 million.
According to Western, the project has not been able to recover the costs of producing power because the project: (1) has construction costs that are high in relation to other utilities, (2) has not been able to find customers to purchase the power at a rate that would recover the full cost of producing the power, (3) began producing power in the first year of a 7-year drought, and (4) prior to 1992, lacked the transmission service to wheel power to customers interested in buying the power. Western officials project that a permanent rate increase of almost 500 percent would be necessary to recover the annual costs. In January 1996, Western projected that it would have to sell its Washoe power at a rate of at least 11 cents per kilowatthour (kWh) to cover annual O&M expenses (excluding depreciation), interest charges, and debt repayments; however, in fiscal year 1995, the project was selling power at about 2 cents per kilowatthour. According to Western’s fiscal year 1995 annual report: “Based on current conditions, it is unlikely the project will be able to generate sufficient revenues to repay the Federal investment.” For the same reasons, we believe that the Washoe Project is unlikely to generate sufficient revenue to repay all O&M and interest expenses. During fiscal year 1994, Western negotiated a contract to sell some Washoe power to the U.S. Fish and Wildlife Service (F&WS). The project’s authorizing legislation specifies that the cost of facilities for the development of the fish and wildlife resources of the project area, including the O&M costs, shall be nonreimbursable. Western classified the cost of power sold to F&WS as nonreimbursable, thereby reducing the amount of construction and O&M costs that must be repaid to Treasury by the Washoe Project. Western believes the project can become more financially viable by reclassifying a portion of the project’s costs as nonreimbursable. 
However, we believe this action just shifts the responsibility for recovering the project’s costs from the ratepayers to the federal government and does not reduce the actual costs of producing the power. Therefore, we believe this action does not significantly improve the prospects that the project will be able to generate sufficient revenue to cover all power-related capital costs or O&M and interest expenses. Another project with questionable financial viability is the Mead-Phoenix Transmission Line, a recent addition to the Pacific Northwest-Pacific Southwest Intertie Project intended to increase power transmission capability between the Pacific Northwest and Pacific Southwest. This transmission project is a joint venture among Western and 13 other participants and began operation in April 1996. Western’s share of the total project’s costs is about 34 percent. According to Western officials, Western’s portion of the cost of the project, including capitalized interest, is expected to be about $94.1 million. Western officials said that, in 1990 and 1993, prospective customers of the Mead-Phoenix Line indicated that their demand for power from the line significantly exceeded Western’s proposed share of capacity. However, anticipated demand for power from the line later dropped precipitously, and it is unclear whether Western will be able to successfully market its entire transmission capacity. A Western official told us that during its first few months of operation in 1996, the project had not generated sufficient revenues to cover all O&M and interest expenses. However, Western is confident that sufficient revenues will be raised to recover annual O&M and interest expenses. In recent testimony before the Subcommittee on Water and Power Resources, House Committee on Resources, Western’s Administrator said that Western is aggressively marketing the remainder of the line’s capacity.
The Administrator indicated that if the project does not achieve the level of sales assumed in developing the transmission charges, Western will initiate a new rate process to ensure the recovery of project costs. If Western is unable to find customers for all of its capacity, it is uncertain whether market forces will allow it to increase its rates enough to generate sufficient revenue to recover annual O&M and interest expenses or appropriated debt. Another example of an unrecovered power-related cost is an abandoned transmission line that has incurred costs of about $14.5 million, which Western has not included in power rates. According to the Bureau, the transmission line, which was planned to be the direct current portion of the Pacific Northwest-Pacific Southwest Intertie Project, was abandoned because of sporadic funding. Because the project has not provided any benefits to project customers, the ratepayers recently requested that Western seek authority through the budget cycle to have about $11.1 million of the cost of the abandoned transmission line declared nonreimbursable. If Western was granted such authority, the power customers would not be required to recover these costs through rates. However, Western recently asserted that it (1) does not plan to request authority to declare any of the costs of this project as nonreimbursable and (2) plans to include the costs of the abandoned transmission line in its power repayment study for recovery. In addition to not repaying the construction costs, Western has not paid the federal government any interest on this investment since construction began on the project in 1965. In fiscal year 1995, if Western had paid interest at the rate that applied when construction began—3 percent—it would have paid about $435,000 in interest on the $14.5 million. 
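The $435,000 annual interest figure above follows directly from applying the 3 percent rate to the cost of the abandoned line, as the following sketch shows. Both inputs are from the text.

```python
# Check of the forgone annual interest on the abandoned transmission line:
# about $14.5 million of incurred costs, at the 3 percent interest rate in
# effect when construction began in 1965.

cost = 14_500_000
rate = 0.03
annual_interest = cost * rate  # approximately $435,000 per year
```

Note that the report’s cumulative estimate of about $6.4 million over 1969-1996 is not simply 26 times this annual figure, since that estimate reflects the repayment assumptions described in appendix I rather than a flat charge on the full $14.5 million each year.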
We estimate that if Western had begun repaying the annual interest expense on the project costs when construction was discontinued in 1969, it would have paid the federal government about $6.4 million in annual interest payments over the 26-year period from 1969 to 1996. The potential unrecovered costs as of the end of fiscal year 1995 are about $20.9 million. Because the cost of the abandoned transmission line has not been included in rates since construction was discontinued over 26 years ago, we believe doubt exists about whether these costs will ever be included in rates. However, if these costs are ever taken into rates, it is not clear whether interest will be recovered from the time construction was discontinued in 1969 through when the costs are included in rates. It is also unclear whether the 50-year repayment period will begin in 1969 or when the costs are actually included in the power repayment study. In addition, Western did not disclose which rate-setting system would absorb these costs. Western officials were unable to clarify these issues. The cost to the federal government of Western’s decision to delay resolution of cost recovery for the abandoned transmission line will depend on how it decides to address these issues. As of September 30, 1994, about $454 million of the federal investment in the capital costs for hydropower facilities and water storage reservoirs of the Pick-Sloan Missouri Basin Program (Pick-Sloan) had been allocated to authorized irrigation facilities that are incomplete and infeasible. Western is currently selling electricity to its power customers that would have been used by the irrigators had the irrigation facilities been completed. If these costs had been allocated based on the actual use of the hydropower facilities and water storage reservoirs, the costs would have been allocated primarily to power and repaid through electricity rate charges within 50 years, with interest. 
If all of the irrigation facilities were to be completed as originally planned, the above capital costs would be repaid without interest primarily by power customers. However, because only one of these irrigation facilities is expected to be completed, the capital costs assigned to these facilities will not be repaid unless Congress approves a change in the cost allocation methodology used to distribute costs to the various program purposes, or deauthorizes the incomplete irrigation facilities. Moreover, any changes between the program’s power and irrigation purposes may also necessitate reviewing other aspects of the agreements—specifically, the agreements involving areas that accepted permanent flooding from dams in anticipation of the construction of irrigation projects that are now not likely to be constructed. In addition, interest is not being paid on the $454 million. Using the 3 percent interest rate in effect for power projects when construction began, we estimate that lost interest payments to Treasury amounted to about $13.6 million for fiscal year 1995. The federal investment in the Pick-Sloan Program will continue to increase because of renovations and replacements. The capital costs assigned to the incomplete irrigation facilities will also continue to increase because of the cost allocation methodology, which is based on original agreements reached decades ago that anticipated that all irrigation facilities would be completed as planned. For example, in our May 1996 testimony, we noted that the capital costs assigned to irrigation facilities increased about $37 million between fiscal year 1987 and fiscal year 1994, an average annual increase of nearly $5 million.
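The $13.6 million lost-interest estimate cited above follows from the same simple arithmetic; a minimal sketch:

```python
# Capital costs allocated to incomplete irrigation facilities (from the report).
allocated_capital = 454e6
power_project_rate = 0.03  # rate in effect for power projects when construction began

# Interest Treasury would have received in fiscal year 1995
lost_interest = allocated_capital * power_project_rate
print(f"Lost interest, FY1995: ${lost_interest / 1e6:.2f} million")  # about $13.6 million
```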
Therefore, unless Congress approves a change in the cost allocation methodology used to assign capital costs to the various program purposes, ongoing power-related capital costs will continue to be assigned to the incomplete irrigation facilities and will likely not be recovered through rates. Annual O&M expenses that otherwise would have been allocated to power and repaid from electricity rates have also been allocated to the incomplete irrigation facilities. Since 1987, Western has adjusted the Corps’ allocated annual O&M expenses because the two agencies interpret specific legislation differently. As of September 30, 1995, about $13.7 million ($15.3 million in constant 1995 dollars) of the Corps’ power-related O&M expenses had been allocated to incomplete irrigation facilities. The annual adjustments have ranged from a low of $1.1 million in fiscal year 1987 to a high of $2.1 million in fiscal year 1995. If these expenses had been allocated to power, they would have been included in Western’s annual O&M expenses and recovered through electricity rates. The Central Valley Project’s Shasta Dam and the Colorado River Storage Project’s Glen Canyon Dam have incurred power-related environmental mitigation costs that are legislatively excluded from Western’s power rates. For the Shasta Dam, these costs totaled $9.7 million and $5.4 million in 1995 and 1994, respectively. For the Glen Canyon Dam, these costs totaled $13.9 million and $12.5 million in 1995 and 1994, respectively. The total cumulative unrecovered environmental costs for the two projects were about $134.3 million ($152.5 million in constant 1995 dollars) as of the end of fiscal year 1995. Certain environmental costs incurred at the Shasta Dam were exempted from recovery by the 1991 Energy and Water Development Appropriations Act.
The act included a provision stating that any increase in purchased power cost incurred by Western after January 1, 1986, that resulted from bypass releases for temperature control purposes related to preservation of fisheries in the Sacramento River not be allocated to power. According to Western, the bypass releases at Shasta will cease when construction of a Temperature Control Device is completed. Western expects this device to be in service by December 1996. Similarly, certain costs of mitigating the environmental impact of fluctuating river flows at the Glen Canyon Dam were exempted from recovery by the Grand Canyon Protection Act of 1992. The purpose of the act was to “protect . . . and improve the values for which Grand Canyon National Park and Glen Canyon National Recreation Area were established.” The act states that certain costs of environmental impact studies related to Glen Canyon Dam are not to be paid for by power customers. The act includes a provision that the above costs could become the responsibility of the power customers under certain circumstances. According to Western, sufficient data does not exist to determine whether the overall provisions of the act would result in a future obligation by the power customers. Western plans to reflect any future obligations related to these costs in the period in which such obligations become evident. Since fiscal year 1975, Western has deferred O&M and/or interest payments on 12 projects that are supposed to be repaid annually. Under DOE Order RA 6120.2, deferred O&M and interest payments are to be repaid the following year, with interest, at DOE policy rates, before repayment of appropriated debt. In effect, the federal government extends an interest-bearing loan to the PMAs in the amount of the deferred payments. The balance of Western’s deferred payments outstanding at the end of fiscal year 1995 was about $196 million.
This balance decreased from about $250 million at the end of fiscal year 1994 as Western repaid about $54 million in fiscal year 1995. The bulk of the balance outstanding—almost $131 million—was associated with the Pick-Sloan Program. The remaining balance was associated with eight other projects. According to Western, the deferred payments have occurred primarily because of extended drought conditions. As a result of the deferred payments, many of the projects’ firm power rates have been raised by Western. For example, Western stated that the composite firm power rate at the Pick-Sloan Program has increased approximately 75 percent since the start of drought conditions in 1988. Western attributes about half of the increase to the drought and the increased interest expense associated with the deferred payments and the failure to repay outstanding appropriated debt. Although Southeastern and Southwestern have deferred O&M and interest expense payments, both had repaid the amounts, with interest, prior to September 30, 1995. Because of the PMAs’ reliance on hydropower to generate electricity, the PMAs’ annual revenue is unpredictable and varies from year to year. As a result, the DOE order that specifies the terms PMAs must follow to repay their federal investment was designed with the variable revenue characteristics of hydroelectric systems in mind. The DOE order allows the PMAs to vary the repayment of their federal investments and miss interest and/or O&M expense payments in years when revenue is not sufficient to cover these costs. However, the DOE regulations require the PMAs to record deferred annual payments as liabilities on their financial statements and to repay these deferred payments plus interest in future years before any principal payments are made on the outstanding federal investment. The amount and frequency of deferred payments over the last 20 years have varied among the three PMAs. 
Since fiscal year 1975, Western has deferred an annual O&M and/or interest expense payment in one or more years for 12 of the 15 projects. As of September 30, 1995, 9 of the 15 projects still had about $196 million in outstanding debt related to deferred payments. Western plans to recover the majority of these costs over time. More detailed information about Western’s deferred payments over the last 20 years can be found in chapter 3 and appendix IV, and a discussion of FERC’s role in rate-setting can be found in appendix VI. According to Southeastern officials, severe drought conditions in the 1980s created poor water conditions and, as a result, insufficient revenue to cover annual interest and O&M payments. Southeastern had also deferred payments in other years due to poor water conditions. Southwestern deferred interest payments in 1977 and O&M and interest payments in 1981. According to Southwestern officials, the payments were deferred primarily because of poor water conditions. Both Southeastern and Southwestern had repaid all their deferred payments as of the end of fiscal year 1995. We estimate that, for the five main power-related activities identified in this chapter, the annual unrecovered costs for the three PMAs are about $83 million for fiscal year 1995. In addition, as of September 30, 1995, we estimate that total cumulative unrecovered power costs could be as much as $1.8 billion. Our analysis of unrecovered power-related costs is shown in table 2.2. In commenting on a draft of this report, the PMAs stated that they agree that there are some power-related costs that were not fully recovered through rates.
However, they asserted that the objective of our review was to specifically identify costs that were “unrecoverable,” which they defined as those that have not been and will never be repaid to Treasury under current law and/or policy, as opposed to “unrecovered,” which they defined as those not repaid at a point in time but that will be in the future. While we recognize there is a distinction between the two concepts, we believe that “unrecoverable” costs are essentially a subset of “unrecovered” costs. Moreover, we disagree with the PMAs’ assertion about the objective of our review. The objective, based on our agreements with congressional requester staff, was to determine whether all power-related costs incurred through September 30, 1995, had been recovered through electricity rates. Our objective was not to distinguish between “unrecovered” and “unrecoverable” costs. We have clarified the discussion of our objective in the executive summary and other relevant sections of the final report. In addition, the PMAs disagreed with certain of our characterizations of unrecovered costs in the five main categories discussed in this chapter. These points, and our responses, are discussed below and in appendix II. The PMAs agreed that the full costs of these benefits are not included in PMA power rates. They suggested that we more fully reflect the content of this chapter in our executive summary by noting therein that the cost underrecovery associated with CSRS pensions should diminish over time as CSRS employees retire and the federal workforce is composed of employees covered by FERS, which is fully funded annually. In response, we added an explanatory statement to the executive summary. However, we also note in our executive summary that the unrecovered costs associated with postretirement health benefits will not be eliminated after the shift from CSRS to FERS.
In addition, the PMAs believe that they cannot deposit power revenues into the Civil Service Retirement and Disability Fund (Fund) to pay for unfunded retirement benefits, because doing so would violate federal appropriations law by augmenting the annual appropriation made to the Fund. Our objective was not to address whether the PMAs should or should not recover these costs; our objective was to determine whether these costs were unrecovered. Consequently, we did not address whether it would be appropriate for the PMAs to deposit power revenues directly into the Fund to pay for these costs. We agree that should the Congress decide that the PMAs should deposit directly into the Fund an amount to cover these costs, the Congress should enact legislation permitting a transfer of that amount into the Fund. Alternatively, the augmentation issue could be avoided by depositing amounts recovered, like many other PMA ratepayer collections, into the General Fund of the Treasury where the revenue would be available to the Congress to appropriate into the Fund to cover the full cost to the government of CSRS pensions. Recovery of postretirement health benefits could be handled the same way. The PMAs also believe that our reference to OMB Circular A-25 in this chapter was improper, because the PMAs do not recover costs in accordance with the Circular. We agree that the PMAs do not follow Circular A-25, and we note in this chapter that recovery of power-related costs has been implemented through DOE Order RA 6120.2, which does not adopt the guidance in Circular A-25 or otherwise refer to it. We do not state that the PMAs are required to follow Circular A-25; instead, we use the Circular as criteria for defining all the costs associated with producing and marketing federal hydropower. Developing such a definition of full costs was necessary before assessing whether the PMAs were recovering all power-related costs through rates, which was one of the objectives of our review. 
The PMAs believe that we inappropriately characterized the costs associated with nonoperational projects, specifically Russell and Truman. They assert that we characterized those costs as not only unrecovered but also likely never to be recovered. That assertion is not accurate. Regarding the Russell Project, in our draft report we state that, if the nonoperational pumping units are never allowed to operate commercially, the costs associated with their construction will likely not be recovered. We do not state that it is likely that the units will not be allowed to operate commercially. We only point out the fact that the units have been in CWIP for 20 years and litigation has been ongoing since 1988. We believe these facts demonstrate that the ultimate operation of the Russell pumping units is not a certainty. Moreover, we specifically reiterate Southeastern management’s belief that the pumping units will be allowed to operate commercially and that these costs will be recovered in the future. However, in response to the PMAs’ concerns, we revised the final report to state that it is unclear whether these costs will be recovered if the project never operates to the capacity designed. Regarding the Truman Project, we state that, with FERC’s concurrence, certain costs associated with nonoperational pumping units have been deferred from power rates. We do not state that it is likely that the costs will never be recovered. We merely demonstrate that the ultimate operation of these pumping units is not a certainty. Moreover, we specifically state Southwestern management’s belief that the costs will be recovered if the facilities become operational. The PMAs state that we should incorporate into the report the similarity of Southeastern’s handling of the Russell Project’s cost recovery to similar situations for other utilities governed by FERC and state public utility commissions. 
As discussed in chapter 4, we agree that FERC and state public utility commissions disallow certain costs and that shareholders of IOUs, not ratepayers, bear these costs. However, we do not believe that Southeastern’s handling of the Russell Project is similar to that of other utilities. Compared to other utilities, the relative dollar amount and the length of time for the deferral of Russell costs from Southeastern’s rates are unique. Note that construction of the Russell Project began in 1976 and the pumping units are still recorded as CWIP today. Thus, Southeastern has not recovered any costs for the nonoperational units. In contrast, IOUs attempt to recover costs immediately, even in situations where the ultimate success of the project is still uncertain. The PMAs state that an abandoned transmission line for Western’s Pacific Northwest-Southwest Intertie Project cannot be declared nonreimbursable or unrecoverable because Western does not have direct legislative authority to do so. As a result, the PMAs assert that Western will include the costs of the abandoned transmission line in rates. This position is contrary to that provided to us during our review. Previously we had not seen any indication that Western planned to include these costs in rates, and all indications were that the costs would be declared nonreimbursable. As stated in this chapter, transmission line construction was discontinued in 1969 and the costs were still included in Western’s financial statements at September 30, 1995. The costs associated with the abandoned line have not been recovered, and no interest has been paid to the Treasury. We estimate that at September 30, 1995, the total unrecovered costs for this abandoned transmission line are about $20.9 million. The PMAs believe that our description of the economic viability of two projects, Washoe and Mead-Phoenix, needs to be clarified. 
Specifically, the PMAs state that they are reluctant to conclude that projects that are uneconomic today will remain so forever. We agree that project conditions can change over time and that projects experiencing financial problems today, such as Mead-Phoenix, may not face financial problems forever. In addition, we believe that, given increased competition in the wholesale electricity market and expected declines in wholesale electricity rates, some projects that are viable today may not be economic in the future. Regarding Washoe, we concur with Western’s assessment in its 1995 annual report that “Based on current conditions it is unlikely the project will be able to generate sufficient revenues to repay the Federal investment.” In addition, we correctly state that the project has been unable to recover all of its O&M and interest expenses and had outstanding deferred payments of $3.9 million as of September 30, 1995. Regarding Mead-Phoenix, we state that a Western official does not expect the project, in its first few months of operation, to generate sufficient revenue to recover all O&M and interest expenses. We believe this fact supports our statement that the project has “questionable financial viability.” The PMAs generally agreed with this section of the chapter, but suggested that we add two points. First, they suggested that more emphasis be placed on the fact that the methodology for cost allocations cannot be changed without congressional approval. We concur with this suggestion and have revised our report accordingly. Second, the PMAs suggested that our report include a statement from our May 1996 testimony that noted that the Pick-Sloan Program incorporates agreements reached decades ago and that any changes to power and irrigation purposes may necessitate reviewing other aspects of the agreements. We have incorporated this statement into our executive summary and chapter 2.
The three PMAs receive favorable terms in repaying the appropriated debt that finances capital projects. In addition, the interest rates on outstanding appropriated debt are lower than the cost to the federal government of providing this financing. As a result, a financing subsidy exists because the interest income earned by Treasury on the appropriated debt is less than Treasury’s related interest expense. We estimate that the financing subsidy for the three PMAs for fiscal year 1995 was about $228 million. Cumulatively, this subsidy amounts to several billion dollars. It is important to note that the PMAs were generally following applicable laws and regulations regarding the financing of capital projects. The PMAs have accumulated substantial amounts of appropriated debt at low interest rates. This situation has resulted primarily because the PMAs repay high interest rate debt first and because PMA appropriated debt incurred prior to 1983 was generally at below market interest rates. Historically, a large portion of capital construction projects have been financed with appropriated debt. The three PMAs are responsible for repaying the appropriated debt, which amounted to about $5.4 billion as of September 30, 1995. In addition, as of September 30, 1995, Western was responsible for repaying about $1.5 billion of irrigation-related construction costs (which we refer to as irrigation debt), which is discussed later in this chapter. While the total appropriated debt for the three PMAs has risen over the last 5 years, it has not risen for all of the PMAs. As shown in table 3.1, the appropriated debt balances for Southwestern have declined over the last 5 years. Southeastern’s appropriated debt has remained relatively constant. In contrast, Western’s appropriated debt has increased by $377 million for the same 5-year period. 
Western’s increase is due primarily to capital spending for new or replacement projects and deferred payments for several projects that resulted in very little or no principal on debt being repaid. Because the power marketed by PMAs is generated at hydroelectric dams, the amount of power available for them to sell is greatly dependent on weather conditions. During years in which precipitation is high, reservoir levels are sufficient to generate large quantities of electricity. In drought years, however, reservoir levels are reduced and there is less electricity generated and available for sale by the PMAs. The Flood Control Act of 1944 provides that appropriated debt must be repaid within “a reasonable period of years,” but it does not specify that any principal on outstanding debt be repaid in any particular year. The Department of Energy’s (DOE) interpretation of this law, Order RA 6120.2, specifies that, unless otherwise prescribed by law, each federal dollar spent on a capital project is to be repaid with interest within 50 years. Shorter repayment periods are used for replacements and transmission facilities. DOE’s Order RA 6120.2 also requires that PMAs, to the extent possible, repay the highest interest-bearing appropriated debt first. Appropriated debt carries a fixed interest rate, and Treasury has no ability to call the debt. Although PMAs are generally required to pay off the highest-interest debt first, they cannot refinance the debt. Thus, Treasury bears the risk of increases in interest rates and PMAs, to some degree, bear the risk of decreases in interest rates. Western, for example, has some appropriated debt that is at interest rates above the current Treasury 30-year bond rate. However, because Western cannot refinance this debt and does not have sufficient cash flow to pay it off, it must pay the above-market interest rates.
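The highest-interest-first repayment ordering that DOE Order RA 6120.2 requires can be sketched in a few lines. The tranche balances and rates below are hypothetical round numbers, not actual PMA debt:

```python
# Hypothetical tranches of appropriated debt: (outstanding balance, fixed rate).
tranches = [(100e6, 0.03), (50e6, 0.0775), (80e6, 0.061)]

def apply_principal_payment(tranches, payment):
    """Apply a principal payment to the highest-rate tranches first,
    mirroring DOE Order RA 6120.2's repayment ordering."""
    remaining = payment
    updated = []
    for balance, rate in sorted(tranches, key=lambda t: t[1], reverse=True):
        paid = min(balance, remaining)
        remaining -= paid
        if balance - paid > 0:
            updated.append((balance - paid, rate))
    return updated

# A $60 million payment retires the 7.75 percent tranche entirely and part of
# the 6.1 percent tranche; the low-rate 3 percent tranche is untouched.
print(apply_principal_payment(tranches, 60e6))
```

Because payments retire the costliest tranches first, low-rate pre-1983 debt tends to remain outstanding, which is consistent with the chapter's observation that PMA average rates stay well below Treasury's.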
From the inception of the PMAs until 1983, the interest rates paid by PMAs on appropriated debt were established either administratively or by specific legislation authorizing and funding the dam construction. The interest rates specified in legislation were generally 2.5 percent to 3.125 percent. Treasury borrowing rates were based on market conditions. As shown in figure 3.1, when appropriated debt was incurred in the 1950s, the average Treasury interest rate and statutory rates were about the same; however, beginning in the 1960s, the difference between the interest rates paid on the PMAs’ outstanding appropriated debt and the average interest rate Treasury paid on its outstanding bond portfolio in the same years started to grow. Because repayment terms on appropriated debt are up to 50 years, this pre-1983 below-market interest debt could remain outstanding for several more decades. By 1985, the average interest rate on Treasury’s outstanding bonds had increased to about 11.02 percent, while the average interest rate on the PMAs’ outstanding appropriated debt was between 2.8 and 3.1 percent. Figure 3.1 also shows the large difference between PMAs in average interest rates on outstanding appropriated debt and the impact of the higher interest rates required after 1983. As of September 30, 1995, Southwestern’s average interest rate on appropriated debt was 2.9 percent, compared to 4.4 percent for Southeastern and 5.5 percent for Western. Southwestern has had strong water years, and its cash flow has allowed repayment of most new appropriated debt, while the low-interest debt remains unpaid. According to Southwestern, part of the reason for the strong cash flow is the inclusion in rates of a provision for future capital replacements, which causes rates to be 10 percent higher than necessary to cover current expenses.
As of September 30, 1995, only about $45 million of Southwestern’s outstanding appropriated debt of $686 million was financed at interest rates above 3.125 percent. The weighted-average interest rate paid by Southeastern rose from about 2.7 percent in the early 1980s to about 4.4 percent as of September 30, 1995. The increase in average interest rates reflects Southeastern’s inability, due to drought conditions and resulting low revenues, to pay off all the appropriated debt associated with more recent, higher-interest-rate additions to the power system. In addition, the 6.125 percent interest rate associated with the Russell Project contributed to Southeastern’s average interest rate increase. Western’s average interest rate has risen because of the market-rate appropriated debt incurred for post-1983 construction projects. In addition, according to Western, drought conditions have been the primary reason O&M and interest expenses have been deferred. As a result, Western’s cash flow has not been sufficient to pay off higher-interest appropriated debt. The historically low interest rates and flexible repayment terms for PMAs result in a financing subsidy because the interest rates paid by the PMAs do not fully recover the federal government’s cost of funds. (See figure 3.1.) To estimate the financing subsidy, we compared Treasury’s average interest rate on bonds outstanding, which was about 9.1 percent for fiscal year 1995, to the interest rates on the PMAs’ debt as of the end of fiscal year 1995. In this analysis, we used the average interest rate on all Treasury bonds outstanding. Treasury’s bond portfolio includes components with terms of up to 30 years. Since Treasury does not match its borrowing with individual program financing, the average interest rate on Treasury’s entire bond portfolio best reflects its cost of funds. See appendix I for a discussion of our methodology for calculating this financing subsidy.
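The subsidy computation described above amounts to applying the rate spread to each PMA's outstanding balance. In the sketch below, Southwestern's $686 million balance and all three average rates come from this chapter, but the Southeastern and Western balances are hypothetical round numbers chosen only for illustration:

```python
TREASURY_RATE = 0.091  # Treasury's average rate on bonds outstanding, FY1995

# (outstanding appropriated debt, average interest rate paid)
# Southwestern's balance and all three rates are from the report;
# the Southeastern and Western balances are illustrative placeholders.
pma_debt = {
    "Southwestern": (686e6, 0.029),
    "Southeastern": (1.4e9, 0.044),  # hypothetical balance
    "Western":      (3.3e9, 0.055),  # hypothetical balance
}

subsidy = sum(balance * (TREASURY_RATE - rate)
              for balance, rate in pma_debt.values())
# On the order of the $228 million estimated in table 3.2.
print(f"Estimated annual financing subsidy: ${subsidy / 1e6:.0f} million")
```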
As shown in table 3.2, the estimated financing subsidy using Treasury’s average interest rate on bonds outstanding for fiscal year 1995 was about $228 million. The above estimate shows that Treasury is currently paying a higher interest rate on its outstanding debt than PMAs are paying on their outstanding appropriated debt. Over the next several decades, as the pre-1983 appropriated debt is repaid, the PMAs’ financing subsidy should decrease. However, as shown in figure 3.1, despite new borrowing at market rates, the PMAs’ practice of repaying the highest-interest debt first has contributed, and likely will continue to contribute, to PMA average interest rates remaining below the effective Treasury average interest rate. In addition, Treasury’s inflexible borrowing practices contribute to the magnitude of the financing subsidy. Treasury’s general inability to refinance or prepay the federal government’s outstanding debt in times of falling or low interest rates is part of the reason for its relatively high 9.1 percent average cost of funds for fiscal year 1995. We estimate that, cumulatively, the financing subsidy for the three PMAs is several billion dollars. This estimate is based on the spread between Treasury and PMA interest rates shown in figure 3.1, which, to varying degrees, has existed for over 30 years. In 1983, the Department of Energy increased the interest rates at which new projects or replacements to old projects would be financed by modifying its Order RA 6120.2. This modification required that, in the absence of specific legislation to the contrary, new projects, additions, and equipment replacements made after September 30, 1983, be financed at interest rates equal to the average yield during the preceding fiscal year on interest-bearing marketable securities of the United States, which, at the time the computation is made, have terms of 15 years or more remaining to maturity.
As shown in figure 3.2, our review showed that, after 1983, new capital projects or replacements that were debt-financed had interest rates that track closely with Treasury rates. The new interest rates did not apply to projects that were already under construction. For example, the Russell project, on which construction started in 1975, continued to capitalize interest at the rate applicable in 1975, 6.125 percent. Projects continue to carry the interest rate in effect at the time the projects are started, regardless of when the borrowing occurred. As a result, Treasury’s cost of funds could be either greater or less than the project rate depending on whether interest rates are falling or rising. In 1985, the year the first electric generating unit became commercially available at the Russell project, the interest cost borne by Treasury was nearly 10.8 percent, significantly higher than the 6.125 percent rate associated with Russell. Since the rates the PMAs pay for new appropriated debt are based on the average of Treasury issues in the prior year, during times of falling interest rates, PMAs will usually pay interest on new appropriated debt at rates above current Treasury rates. Conversely, during times of rising interest rates, PMAs will pay interest on new appropriated debt at rates below current Treasury rates. As shown in figure 3.1, despite new borrowing at market rates, it is the PMAs’ ability to repay high-interest debt first that has kept and likely will continue to keep their average interest rates below those of Treasury. However, over time, as the pre-1983 appropriated debt is repaid, the PMAs’ financing subsidy should eventually decrease. In addition to appropriated debt, Western is responsible for repaying certain irrigation-related construction costs on completed irrigation facilities (which we refer to as irrigation debt). As previously noted, reclamation law provides for irrigation assistance to be recovered primarily by power revenues.
Although irrigation debt is scheduled to be recovered with power revenues, Western does not view irrigation debt as a PMA cost. Therefore, when Western repays these amounts, neither the costs, nor the related revenues, are reflected in Western’s financial statements. As of September 30, 1995, according to Western, it had approximately $1.5 billion of outstanding irrigation debt, which is to be repaid without interest. The repayment period for the irrigation debt could be up to 60 years after completion of construction—up to a 10-year development period plus a 50-year repayment period. Because DOE’s repayment policies require PMAs to repay their highest interest rate debt first (unless lower interest-bearing debt is at the end of its repayment period, in which case it would be paid first), the irrigation debt, at zero percent interest, will generally not be repaid until the end of its repayment period. As of September 30, 1995, according to Western, about $32 million of the total $1.5 billion of irrigation debt had been recovered through electricity rates. To the extent irrigation debt is repaid through electricity rates, power users are subsidizing irrigators. In addition to the long period allowed for repayment of irrigation debt, completed irrigation facilities were under construction for periods ranging from 1 to 27 years, with an average construction period of about 8 years. Therefore, the irrigation debt may not be repaid, on average, until approximately 68 years after the initial costs were incurred. Using the average interest rate on Treasury bonds outstanding for 1995 of 9.1 percent, we estimate that in 1995 the cost to Treasury of Western’s $1.5 billion of irrigation debt was $137 million. This irrigation debt continues to increase at the Pick-Sloan and other projects due to capital improvements allocated to completed irrigation facilities that are to be repaid by Western. 
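The $137 million carrying-cost estimate above is simply the outstanding irrigation debt balance multiplied by Treasury’s average interest rate, both figures taken from the text:

```python
# Annual cost to Treasury of carrying Western's zero-interest irrigation debt,
# computed as described above: the outstanding balance times Treasury's
# average interest rate on bonds outstanding for fiscal year 1995.

irrigation_debt = 1_500_000_000  # ~$1.5 billion outstanding as of 9/30/1995
treasury_rate = 0.091            # 9.1 percent average rate on Treasury bonds

annual_carrying_cost = irrigation_debt * treasury_rate
print(f"${annual_carrying_cost:,.0f}")  # about $137 million per year
```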
To illustrate the future cost to the federal government of new irrigation debt, we calculated the present value of this new debt, assuming it would be repaid at zero percent interest at the end of the average 68 years that the debt would most likely be outstanding. By applying a discount rate of 7 percent, which approximates Treasury’s current 30-year bond rate, we estimate that the present value of each dollar that will be repaid 68 years from today is less than one penny. In commenting on a draft of this report, the PMAs stated that they agree that certain unpaid investments (appropriated debt) are charged an interest expense that is less than the Treasury’s cost of borrowing at the time the investment was made. However, the PMAs expressed great concern with our methodology for measuring the magnitude of Treasury’s unrecovered financing costs and, as a result, do not concur with our estimate of the magnitude of this cost. The PMAs believe our approach is invalid and is equivalent to assuming that the PMAs refinance their appropriated debt on an annual basis. The PMAs believe that a more accurate methodology for determining the magnitude of the unrecovered financing cost would be to compare each investment’s fixed interest rate against Treasury’s cost of borrowing in the year the investment was placed in service. Thus, they propose calculating the 1995 financing difference by comparing the Treasury’s cost of funds in the year of the PMA investment to the actual PMA interest rate on that investment. As stated in this chapter, we believe that there is a financing subsidy on the PMAs’ appropriated debt because the interest rates the PMAs pay do not fully recover the federal government’s cost of funds. We characterize this situation as a financing subsidy because there is a net cost to the federal government of providing the PMAs with appropriated debt. 
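The present-value illustration above, a dollar of zero-interest irrigation debt repaid 68 years from now and discounted at approximately Treasury’s 30-year bond rate, can be reproduced as follows:

```python
# Present value of a dollar of zero-interest irrigation debt repaid 68 years
# from now, discounted at 7 percent (approximately Treasury's 30-year bond
# rate at the time of the report).

def present_value(amount, rate, years):
    """Discount a single future payment back to today at a compound annual rate."""
    return amount / (1 + rate) ** years

pv = present_value(1.00, 0.07, 68)
print(f"{pv:.4f}")  # on the order of a penny per dollar repaid
```

Small changes in the assumed discount rate move the result above or below exactly one cent, but the conclusion is the same: repayment that far in the future recovers almost none of the debt’s value in present-value terms.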
We do not believe the methodology proposed by the PMAs captures the full amount of this subsidy because it does not consider the impact of the PMAs’ flexible repayment terms or, as discussed below, the impact of Treasury’s borrowing practices. As discussed in appendix I, the methodology described by the PMAs would be a more accurate means to calculate the portion of the subsidy related to the below market financing. However, the records were not available at Western to make the type of specific calculation the PMAs proposed. We calculated the 1995 estimated financing subsidy by taking the difference between the PMAs’ weighted average interest rate for 1995 and the Treasury’s average interest rate on its entire bond portfolio. Since Treasury borrows for the needs of the entire federal government using short-term and long-term financing, and does not match specific borrowings with the PMAs’ appropriated debt financing, the average interest rate on Treasury’s entire bond portfolio best reflects its cost of funds. We believe our approach reasonably captures both the impact of the below market financing provided the PMAs prior to 1983 and the flexible repayment terms currently afforded the PMAs under DOE policies. To help ensure that our methodology was reasonable, we spoke to representatives of OMB, Treasury, and the Congressional Budget Office. The PMAs disagree with our assertion that the Treasury’s additional cost is caused, in part, by the DOE policy of allowing the PMAs to pay off the highest interest rate debt first. The PMAs believe that as long as the interest rate assigned to each PMA borrowing reflects the Treasury’s cost of borrowing at the time, then Treasury is kept whole and no additional cost is incurred. We disagree. Treasury is not “kept whole” because Treasury’s borrowing practices are inflexible in that it is generally unable to refinance or prepay outstanding debt in times of falling interest rates. 
This inflexibility is part of the reason for Treasury’s relatively high 9.1 percent average cost of funds. Because of the PMAs’ flexibility, and the Treasury’s inflexibility, there are, and likely always will be, differences in the cost of funds. In summary, we continue to believe that the PMAs’ ability to pay off the highest interest rate appropriated debt first, and at any time they desire within the repayment terms of up to 50 years, results in a financing subsidy. PMAs market low cost wholesale electricity. PMAs’ average revenue per kilowatthour (kWh) for wholesale sales has historically been substantially lower than average revenue per kWh for nonfederal utilities. Some of the difference in average revenue per kWh is attributable to the PMAs’ unrecovered power-related costs (see chapter 2) and federally subsidized debt financing (see chapter 3). Inherent advantages PMAs have compared to other utilities contribute to lower power production costs and lower average revenue per kWh. One such advantage is that PMAs market primarily low-cost hydropower while other utilities generally must rely on more expensive coal and nuclear plants to generate electricity. Another advantage is that PMAs, as federal agencies, do not, for the most part, pay taxes. PMAs are required to recover several nonpower costs, which is a disadvantage compared to other utilities. Competition in the wholesale electricity market could impact the PMAs’ position as marketers of low cost electricity. As shown in figure 4.1, in 1994 the PMAs’ average revenue per kWh was more than 40 percent lower than that of IOUs and publicly owned generating utilities (POGs) in the primary North American Electric Reliability Council (NERC) regions in which the PMAs operate. According to the Energy Information Administration, in 1994 the nationwide average revenue per kWh was 3.5 cents for IOUs and 3.9 cents for POGs. 
The PMAs’ average revenue per kWh in 1994, by rate-setting system, ranged from a low of 0.66 cents per kWh for Southwestern’s Robert D. Willis system to a high of 3.09 cents per kWh for Southeastern’s Georgia-Alabama-South Carolina system. We also reviewed each PMA’s average revenue per kWh compared to national averages for IOUs and POGs from 1990 through 1993. During that period, the PMAs’ average revenue per kWh was consistently at least 40 percent less than those of IOUs and POGs. A detailed comparison of PMA, POG, and IOU average revenue per kWh for 1990 through 1994 and a comparison of each PMA’s average revenue per kWh by rate-setting system to IOUs and POGs in the applicable NERC regions for 1994 are provided in appendix V. We have provided these comparisons by rate-setting system because each PMA system and corresponding NERC region has different average revenue per kWh. These average revenues per kWh may vary considerably by rate-setting system due to customer mix, contractual arrangements, and regional environmental factors such as streamflow and wildlife. In 1994, Southwestern’s average revenue per kWh was the lowest of the three PMAs. The PMAs’ average revenue per kWh, which is generally reflective of power production costs, differs for a number of reasons, such as average interest rates, streamflow, and the operating efficiency of the hydroelectric plants. As discussed in chapter 3, Southwestern has significantly lower average interest rates than the other PMAs. In addition, Southwestern had above average streamflow in 1994 and other recent years. Western, in contrast, has had deferred payments in the 1990s primarily due to drought conditions. A potential reason for higher average revenue per kWh for Southeastern is the operating condition of hydroelectric plants that generate the power that it markets. 
We recently reported that the Corps’ hydroelectric plants in the Southeast have experienced lengthy outages resulting in declines in reliability and availability of power. We did not review the hydroelectric plants that generate the power marketed by Southwestern and Western to determine if similar operating problems exist. According to the American Public Power Association (APPA), POGs’ average revenue per kWh was higher than IOUs’ average revenue per kWh for several reasons. First, POGs sell a higher percentage of wholesale power under firm power contracts, which command higher prices than nonfirm sales. Second, the timing of many POGs’ construction of coal and nuclear generating facilities, in the late 1970s and early 1980s, coincided with new environmental regulations with which previously built facilities were not required to comply. This is in contrast to many IOUs that built coal plants before the 1970s. Also, POGs often do not have enough of their own generating capacity to meet customer needs and thus purchase power from IOUs. There are some limitations to our comparison of average revenue per kWh. The most recent industry data we could obtain was from 1994. Since that time, competition has increased and may have reduced the average revenue per kWh. In addition, we did not include independent power producers (IPPs) in our comparison because similar information was not readily available. IPPs supply a small percentage (8 percent) of the total electricity market; however, IPPs are providing a large portion of the new capacity with low cost, natural gas-fired turbines, which is driving wholesale electric rates down. IPPs could pose a significant competitive threat to the PMAs. Despite these limitations, we believe that our comparison of the average revenue per kWh is a strong indicator of the relative power production cost and overall competitive position of the PMAs compared to other utilities. 
Most of the PMAs’ 17 different rate-setting systems appear to be in a strong competitive position compared to POGs and IOUs in their areas. However, several systems have high or increasing production costs. Increasing competition in the utility industry may limit their ability to raise rates. One of these systems, the Washoe Project, is not viable under existing operating conditions. Western is selling electricity from this project for 1.9 cents per kWh that costs 11 cents per kWh to produce. Other projects, such as Pick-Sloan, face mounting pressure to continue to increase rates. Pick-Sloan had outstanding deferred payments of $131 million as of September 30, 1995. To recover deferred payments and potentially recover irrigation debt, Pick-Sloan faces upward rate pressure. Competition could make it difficult for this project to recover its substantial irrigation debt. Although low cost now, potential rate increases at Pick-Sloan could affect its future competitive position. Another project, the Central Valley Project (CVP), has started to feel the effects of competition and has acted to improve its position. Much of the CVP power that Western sells is purchased from nonfederal sources at prices established in long-term contracts. CVP “passes through” the costs of purchasing this power to its customers; no profit is made. In fiscal year 1995, CVP purchased less power for its customers than in fiscal year 1994 for a variety of reasons. According to CVP officials, one of the reasons for this was that its customers were able to obtain needed power from other sources at a lower price than the price CVP had established in its contracts. CVP officials told us that they expect this trend to continue and have begun to terminate the contracts they hold to purchase power—a process which they expect to continue over the next several years. 
The rates that CVP charges for firm power are composite; that is, they incorporate the cost of both CVP-purchased and CVP-generated power. CVP’s average revenue per kWh is the highest when compared to other projects where Western markets power. One reason for this is the inclusion in rates of the relatively expensive CVP-purchased power. Since CVP’s repayment study projects the purchase of less and less power in coming years, the consequence could be lower rates. Except for the Georgia-Alabama-South Carolina system, it appears that Southeastern’s rate-setting systems are in a relatively strong competitive position. As discussed in chapter 2, if the inactive portion of the Russell Project is brought on line, according to Southeastern officials, it would likely cause an increase in rates for the Georgia-Alabama-South Carolina system because of the $488 million invested in this portion of the project. As shown in appendix V, the average revenue per kWh at this system—3.09 cents per kWh—is the highest for all three PMAs. Southwestern is in a very strong competitive position in all of its rate-setting systems. As shown in appendix V, there are substantial differences in the average revenue per kWh of Southwestern’s rate-setting systems and the average revenue per kWh of the IOUs and POGs in the NERC regions in which Southwestern markets power. As discussed earlier, the impact of competition in the wholesale electricity market, and the increasing impact of low cost IPP electricity, could affect the PMAs’ competitive position. PMAs sell primarily wholesale power generated at federal water projects. The Flood Control Act of 1944 calls for the PMAs to encourage the most widespread use of electricity at the lowest possible rates to consumers. The PMAs do not sell power for profit. IOUs generally provide a defined service area with power and build new generating capacity to meet future customer needs. IOUs sell both wholesale and retail electricity. 
The objective of IOUs is to produce a return for their shareholders. POGs are similar to PMAs in that they are owned and/or operated by governmental entities—federal, state, or local. They are nonprofit entities established to serve their communities and nearby consumers at cost. POGs sell both wholesale and retail electricity. Key operating and financial differences exist between PMAs and other utilities. Many of these differences, including the PMAs’ reliance on hydropower, other utilities’ need to pay various taxes, accounting and rate-setting practices, and financing, result in advantages to the PMAs and contribute to the substantial difference in power production costs. In this section, we compare key operating and financial factors of PMAs to IOUs and POGs. We selected two IOUs and two POGs from each of the PMAs’ service areas. In order to be selected, each utility had to generate at least some hydroelectricity. We contacted APPA and the Edison Electric Institute (EEI) to corroborate our findings from the individual utilities. For a description of the methodology for our comparison, see appendix I. PMAs rely almost entirely on hydroelectric power while other utilities are primarily dependent on coal and nuclear generating plants. Table 4.1 shows the large contrast in percent of power coming from various generating sources used by the PMAs and other utilities. According to APPA, POGs on average generated 26 percent of their electricity from hydroelectric plants in 1994. EEI reported that IOUs generated an average of 4 percent of electricity from hydroelectric plants between 1990 and 1994. The hydroelectric plants that generate the power marketed by the PMAs have several key cost advantages over coal and nuclear plants that contribute to lower power production costs, including relatively low capital construction costs and no fuel costs. 
To show the relatively low capital cost of these hydroelectric plants, we compared the investment in utility plant per megawatt of capacity for these plants to those of other utilities. As shown in figure 4.2, Southeastern, Southwestern, and Western have substantially less invested in power plants than other utilities, which contributes to their lower power production costs. Note that Southeastern’s investment in utility plant per megawatt is substantially higher than that of the other PMAs. This is because the Russell project, which is discussed in chapter 2, has incurred construction costs of $488 million with no corresponding generating capacity. Compared to other utilities, the lower investment in PMA-related hydroelectric plants is partly the result of construction of these plants 30 to 60 years ago, at lower costs compared to more recent construction. Unlike the PMAs and operating agencies, IOUs build new capacity to meet the future needs of customers. The higher construction costs for the other utilities shown in figure 4.2 reflect more recent construction of coal and nuclear plants. Many IOU and POG nuclear plants that were completed and are operating had significant capital construction costs, which is at least partly due to stringent Nuclear Regulatory Commission (NRC) regulations. Utilities with coal plants must comply with the Clean Air Act, which requires significant investments in pollution control equipment for many plants. The PMAs’ relatively low investment in utility plant results in a large cost advantage. Our analysis excluded nuclear plants that are mothballed and thus provide no capacity while resulting in significant capital costs. Inclusion of these “regulatory assets” would have increased the POG and IOU investment. Appendix I describes the methodology used for computing the ratios in figure 4.2. Another major reason that hydroelectric plants result in lower power production costs is the cost of fuel. 
This is particularly important when comparing hydro plants to coal plants. The cost of coal is a major operating expense for most other utilities. Nuclear fuel is also a significant cost, although not nearly as large a factor as coal. In 1994, POGs’ fuel costs represented 15 percent of operating revenues, while IOUs’ fuel costs represented 17 percent of operating revenue. The PMAs, on the other hand, have the benefit of marketing power from hydroelectric plants, which do not have an associated fuel cost. The PMAs do have certain costs of operations resulting from hydroelectric production that differ from coal and nuclear generation. According to Southwestern, the Corps is subject to federal regulations, such as the Endangered Species Act and the National Environmental Policy Act. Southwestern, through the Corps’ operations, estimates that it lost about $1.3 million in revenues over the past 5 years through water spilled and operations changed to improve water quality for downstream recreational fisheries. Southwestern also estimates that it has spent nearly $500,000 on equipment, studies, and services in an effort to find solutions to the water quality/sport fisheries problem. Southeastern and Western face similar issues related to the Corps and Bureau operations of their respective hydroelectric facilities. It is important to note here that capital and O&M costs relating to nonpower uses of federal dams, including flood control, navigation, and recreation, are allocated to those other purposes and not included in PMA electricity rates. As discussed in chapter 1, on average, the cost allocations to power are 69 percent, 35 percent, and 50 percent for projects related to Southeastern, Southwestern, and Western, respectively. POGs and IOUs face similar regulations in running hydroelectric dams. 
The utilities we contacted reported to us that they need to comply with numerous laws including the Federal Power Act, Federal Water Pollution Control Act, Clean Water Act, and the Endangered Species Act. In addition, these utilities are subject to regulations of government agencies such as FERC, the Forest Service, and other state and local governmental agencies. The operations of hydropower projects at the utilities we contacted are greatly affected by these laws and regulations. In fact, several utilities reported to us that the laws and regulations make certain new hydroelectric projects economically infeasible. As with Southwestern, one of the POGs reported that it is required to spill water, which results in over $1 million per year in lost revenues. Some of the utilities reported that they recover a portion of O&M costs for recreational services and facilities; however, for the most part, the capital and O&M costs incurred in complying with laws and regulations are recovered through electricity rate charges. PMAs, as federal entities, are generally not subject to taxes, which gives them a substantial power production cost advantage over POGs and IOUs. POGs, as publicly owned utilities, typically do not pay income taxes because they are a unit of state or local government. However, many POGs do make payments in lieu of taxes to local governments. IOUs are subject to several forms of taxation. Such taxes include all the general taxation rules in the federal tax laws as well as a variety of state and local taxes, such as income tax, gross receipts tax, franchise tax, and property tax. With the exception of the Boulder Canyon Project, the PMAs generally do not make payments in lieu of taxes to state or local governments. The Boulder Canyon Project Adjustment Act of 1940 requires annual payments to the states of Arizona and Nevada. In 1995, the project paid $600,000, or 1.2 percent of operating revenues to these states. 
According to EEI, in 1994, IOUs, on average, paid taxes totaling about 14 percent of operating revenue. This average varies significantly by state and utility due to differing state and local government taxation laws and various levels of IOU profitability. The IOUs we contacted pay taxes ranging from 11 percent to 20 percent of operating revenue. Examples of taxes paid by the IOUs we contacted are federal and state income tax, real and personal property tax, corporate franchise tax, invested capital tax, and municipal license tax. POGs are exempt from paying federal or state income taxes. However, most POGs we contacted make a contribution to one or more local governmental entities, generally in lieu of property taxes. APPA conducted a survey and found that 77 percent of the respondents made contributions to local governmental entities; 74 percent of those contributions were payments in lieu of taxes. POGs also contribute free or reduced cost electrical service, the use of employees, and other services such as the use of vehicles, equipment, and materials to local governments. A study of 670 public distribution utilities showed that the median net payments and contributions as a percent of electric operating revenue were 5.8 percent. The range of net payments as a percentage of operating revenue for the POGs we contacted varied from 0 to 17 percent. PMAs are agencies of the Department of Energy and thus are required to follow standards recommended by the Federal Accounting Standards Advisory Board (FASAB) and approved by GAO, OMB, and Treasury. Certain FASAB standards directly address accounting requirements for the PMAs. For example, as discussed in chapter 2, SFFAS no. 5 prescribes accounting principles the PMAs will be required to follow for recording the full cost of pension and postretirement health benefits. 
Because FASAB standards and other relevant federal guidelines do not specifically address regulated entities, the PMAs are allowed to follow the provisions of Statement of Financial Accounting Standards no. 71, Accounting for the Effects of Certain Types of Regulation (SFAS 71). The provisions of SFAS no. 71 require, among other things, that the financial statements of a utility reflect the economic effects of rate regulation and provide for a relevant matching of revenues and expenses. Regulatory actions can provide reasonable assurance of the existence of an asset, reduce or eliminate the value of an asset, or impose a liability on the regulated enterprise. For example, if a regulator determined that the costs of a nonproducing power plant were allowable, then the costs of the plant would be carried as a “regulatory asset” and reflected in rates. In contrast, if the costs were determined to be unallowable, the asset would be written off with no corresponding rate charge. IOUs are subject to the pronouncements of FASB and thus prepare financial statements in accordance with SFAS 71. POGs are subject to the pronouncements of the Governmental Accounting Standards Board (GASB). GASB Statement 20, Accounting and Financial Reporting for Proprietary Funds and Other Governmental Entities That Use Proprietary Fund Accounting, states that if GASB has not addressed an issue, then an entity may follow FASB guidance. POGs generally prepare financial statements in accordance with SFAS 71 since GASB has not addressed regulatory accounting for governmental entities. IOUs typically use the accrual basis (as modified by SFAS 71) to determine costs to be recovered through electricity rates, using depreciation to recover capital costs. Depreciation as a basis for recovery of capital costs provides a consistent, systematic method on which to base rates by recognizing the cost of the asset equally over its useful life. 
PMAs and POGs generally use a cash basis or debt service method of setting rates. Under this method, capital costs are recovered through rates as payments for the asset are made. For example, if a capital asset is debt financed, the cost would be included in rates when principal on the debt is repaid or scheduled to be repaid. Repayment terms between PMAs and POGs differ. POGs generally repay principal on debt in fixed annual or semiannual installments, whereas most PMA debt has flexible repayment terms and as such is not required to be repaid until the final year. Rate recovery terms for the various types of utilities vary. Depreciable lives of hydroelectric assets for the IOUs we contacted range from 22 years to 96 years, with most asset types exceeding 40 years. POGs’ tax-exempt bonds are generally repaid over 18 to 40 years. PMAs have 50 years to repay federal appropriations for hydro assets. Therefore, even though the PMAs have flexible repayment terms, in some cases, their costs may ultimately be recovered sooner than the IOUs overall. The financial statements of the PMAs and POGs are presented on an accrual basis in accordance with SFAS 71. The financial reporting difference created by setting rates on a cash basis and reporting on the accrual basis is recognized in the Federal Investment (Equity) section of the PMA financial statements as accumulated net revenues. POGs generally eliminate a mismatch of income between cash basis rate-setting and accrual basis financial statements by recording an asset (liability) on the balance sheet with an offsetting credit (debit) to the income statement. There are differences among IOUs, POGs, and PMAs regarding the types of expenses included in power production costs and resultant rates. The types of expenses included in wholesale and retail rates are subject to approval by utility commissions and may be determined by legislation as well as accounting practices. 
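The contrast described above between accrual (depreciation-based) and cash (debt-service) rate recovery can be sketched with hypothetical figures; the asset cost, life, and repayment schedule below are illustrative assumptions, not actual PMA or IOU data:

```python
# Simplified contrast of the two rate-recovery approaches: IOUs recover a
# capital cost through straight-line depreciation over the asset's life,
# while PMAs and POGs recover it as the underlying debt principal is repaid.
# All figures below are hypothetical.

cost = 100_000_000  # hypothetical capital cost of a hydro asset
life_years = 50     # depreciable life / maximum PMA repayment period

# Accrual (IOU) approach: an equal depreciation charge in rates each year.
annual_depreciation = cost / life_years

# Cash (PMA) approach: cost enters rates only as principal is repaid. With
# flexible terms, repayment of the entire principal could fall in year 50.
deferred_schedule = [0.0] * (life_years - 1) + [float(cost)]

print(annual_depreciation)     # 2,000,000 recovered each year for 50 years
print(sum(deferred_schedule))  # the same 100,000,000, but recovered in year 50
```

Both schedules recover the same nominal amount; the difference is timing, which is why flexible repayment terms matter when comparing the PMAs’ costs to those of other utilities.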
We found that IOUs typically include all expenses in retail rates unless disallowed by a utility commission. If the utility commission deems that certain expenses do not benefit ratepayers, it will prohibit such expenses from being included in retail rates. For example, one state utility commission decided that advertising expenses, membership dues, lobbying fees, and nonutility operation expenses do not benefit ratepayers and therefore were not allowed to be recovered through retail rates. However, these costs are often recovered fully through wholesale rates because FERC generally allows such costs. An example of costs that FERC may disallow from wholesale rates is a portion of CWIP if FERC determines that the IOU has requested an unreasonable amount to be included in rates. Most POGs we contacted include all of their expenses in rates. PMAs’ rates, on the other hand, do not include some costs, as discussed in chapter 2. However, PMAs are required to recover certain nonpower costs. For example, Western is required to recover the Hoover Dam Visitor Center costs, which are estimated at about $124 million. In addition, Western is required to repay about $1.5 billion of capital costs related to assistance on completed irrigation facilities (irrigation debt). According to FERC, often an IOU will determine within the first 3 years of construction that a project is not viable and halt construction so as to minimize expenses which will not provide benefit to ratepayers. Normally, if an IOU halts construction on a project, it will pass these costs through to the ratepayers. A customer may challenge the inclusion of such costs in rates with the appropriate utility commission. The commission may then conduct a prudency test which serves as the basis for allowing such costs in rates. The purpose of the prudency test is to determine whether it was prudent to build the project at the time construction began. 
If so, then the cost of the abandoned project would be fully included in the rate base. Even if the project does not meet the prudency test, according to FERC, the ratepayers would still be responsible for some portion of the costs and shareholders would be responsible for the remainder of the costs. PMAs are not subject to FERC’s prudency test. PMAs, because of DOE Order RA 6120.2, do not include project costs in rates until a project is placed into commercial service. The Russell Project, which Southeastern considers viable although it is not yet operational, was in construction for 16 years and has been awaiting commercial operation for the last 4 years. As such, costs related to the Russell Project totaling $488 million, including accumulated interest, are still in CWIP and excluded from rates. Compared to other utilities, the relative magnitude and length of time for Southeastern’s deferral of Russell from its rates is unique. IOUs’ and POGs’ basic rate-setting methods also differ from those of the PMAs. IOUs and POGs generally use a revenue requirements study. For IOUs, the revenue requirement is the amount of money the utility requires to cover its annual expenses while earning a reasonable rate of return for its investors. POGs follow similar methods but do not require a rate of return since they are publicly owned, although some may include an allowance to provide equity capital for the system. Power repayment studies are prepared annually by the PMAs to determine the adequacy of current rates and determine new rates. The power repayment study tests the adequacy of rates; it entails a 5-year cost evaluation period and recovery of costs within their legally permitted repayment periods. The study also forecasts power-related capital and O&M costs that the PMA will be required to repay in the future and projects future revenues based on current rates. If the study shows that revenues generated under current rates will be inadequate to cover expenses, new rates may be designed. 
Most of the unrecovered costs identified in chapter 2 are not included in the study and, therefore, are not included in the determination of rates. The methods and costs of capital financing vary greatly among the PMAs, POGs, and IOUs. Federal power-related capital projects rely primarily on debt financing from Treasury. This financing is dependent on the appropriations process, discussed in chapter 1. POGs rely primarily on debt financing from the capital market for capital projects. In addition to debt financing, IOUs are able to use equity financing. PMAs have substantial balances of appropriated debt that have been used to finance the construction of hydroelectric and transmission facilities. As discussed in chapter 3, because of several factors, PMA interest rates on appropriated debt have been subsidized by the federal government. POGs and IOUs also issue debt to finance capital projects. POGs and IOUs typically go to the financial markets to issue various short-term and long-term debt instruments. POGs generally issue bonds that are exempt from federal and state income taxes. This results in POGs getting favorable interest rates on their debt. IOUs issue long-term debt and some short-term instruments, such as commercial paper. IOU interest rates are based on market forces and typically vary based on the bond ratings of the particular IOU. Unlike PMAs, IOUs and POGs have the flexibility to refinance debt in times of falling interest rates. However, as discussed previously, PMAs have the ability to repay higher interest rate debt first, thereby allowing them to effectively manage their debt costs. According to EIA, the average interest rate for 1994 for all POGs was 5.6 percent. For IOUs, it was 7.3 percent. The average interest rates of the POGs and IOUs we contacted for 1995 were in the same range as for the entire industry in 1994. For the POGs, the low was 5.1 percent and the high was 6.1 percent. The IOUs’ range was 6.5 percent to 7.9 percent. 
In 1995, the PMAs’ average interest rates ranged from 2.9 percent for Southwestern to 5.5 percent for Western. The Bureau has obtained financing for several capital projects from Western’s customers, which we will refer to as “third-party financing.” The Bureau has the authority to accept contributions from Western’s customers to defray the costs of capital construction. As of September 30, 1995, outstanding third-party financing, or customer advances, amounted to about $154 million for the Hoover Dam capital improvement (uprating) program (Boulder Canyon Power System) and about $25 million for the Buffalo Bill project (Pick-Sloan Missouri Basin Power System). The interest rates for the Hoover Dam uprating program range from 5.5 percent to 8.2 percent, and the interest rate for the Buffalo Bill project is 11.07 percent. Under third-party financing arrangements, Western customers provide funding (primarily from the issuance of bonds) to the Bureau to use for the capital project. The customers pay the debt service cost, and Western records the proceeds as a liability and records interest expense. Western then bills the customers for the production costs of electricity, including the debt service on the third-party financing, and credits the customers for the debt service costs. Essentially, this arrangement results in customers directly paying for capital improvements rather than paying for them indirectly through rates. Unlike the Russell Project, which was financed with appropriated debt, third-party financing shifts many of the risks of construction projects to the customers, who are responsible for the bonds, rather than the federal government. In addition to debt financing, federal power-related capital projects are financed using a method similar to revenue financing. Revenue financing is paying for capital projects with net cash generated from operations. 
Revenue financing for the PMAs occurs when power revenues exceed O&M expenses and the resulting net revenue is used to pay off appropriated debt on new projects or replacements in the first year of the repayment period. In effect, the capital appropriation is repaid in the year that it was made with revenue from current power customers. Southwestern, for example, has been able to keep its average interest rate at 2.9 percent by revenue financing its new projects that otherwise would have been financed at DOE policy rates. POGs and IOUs also use revenue financing for capital projects. To the extent that a utility is able to finance capital projects from net cash flow rather than debt, it will reduce future interest expense. In addition to revenue and debt financing, IOUs have access to equity financing. IOUs are able to issue common and preferred stock and typically pay a large portion of earnings out in common dividends. In 1994, the IOU payout ratio was 80 percent. Dividends represent a financing cost for IOUs. As discussed in chapter 1, PMAs’ appropriated debt generally has terms of 50 years for generating projects and 35 to 45 years for transmission investments. Most of the PMA debt follows a “balloon payment methodology,” in which principal is due at the end of the repayment period with no required annual amortization. This differs from the IOUs we contacted, which reported maximum maturities on debt of 30 to 40 years. IOUs reported that they generally pay principal off in balloon payments at maturity, either through cash flow from operations or refinancing. POGs reported maximum maturities of 18 to 40 years; however, the POGs generally repay principal in fixed amounts each year. As discussed in the rate-setting section, inclusion of capital costs in rates for PMAs and other utilities varies from the cash (debt service) to the accrual basis. We noted several other differences in financing, including control of capital expenditures and placement costs.
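The interest-cost consequence of the balloon payment methodology, compared with equal annual principal payments, can be seen in a simple sketch. The figures are hypothetical, and simple annual interest with no compounding of unpaid interest is assumed.

```python
# Hypothetical comparison of the "balloon payment methodology" (all
# principal due at maturity) with equal annual principal installments.
# Simple annual interest; no compounding of unpaid interest is assumed.

def total_interest_balloon(principal, rate, years):
    """Principal stays outstanding for the full term; interest paid annually."""
    return principal * rate * years

def total_interest_amortized(principal, rate, years):
    """Equal principal installments; interest accrues on the declining balance."""
    annual_principal = principal / years
    balance, interest = principal, 0.0
    for _ in range(years):
        interest += balance * rate
        balance -= annual_principal
    return interest

# A $100 million issue at 5 percent over 40 years:
balloon = total_interest_balloon(100e6, 0.05, 40)      # $200 million
amortized = total_interest_amortized(100e6, 0.05, 40)  # about $102.5 million
```

Under these assumptions, keeping the full principal outstanding until maturity roughly doubles the interest paid over the life of the debt, which is why early repayment of higher-interest balances matters so much to the PMAs' debt costs.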
The PMAs and operating agencies face the constraints of federal budget pressures in obtaining capital financing. According to the Corps, the focus on the federal deficit has put pressure on PMA and operating agency budgets and has resulted in less funding for PMAs and operating agencies for hydropower capital programs. POGs and IOUs have more direct control over capital budgets. However, POGs and IOUs are subject to the scrutiny of the market, such as the bond rating system, which affects the appeal of the bonds to the investing public. IOU financing is also subject to the scrutiny of regulators. The PMAs, as federal agencies that are appropriated capital funds, do not pay any placement costs or transaction fees. In contrast, POGs and IOUs must pay placement costs. The POGs and IOUs we contacted reported placement costs from 0.09 percent of the face value of the debt offering up to 1.5 percent. In addition, IOUs reported placement costs on common and preferred equity offerings of about 3 percent. When compared with IOUs, PMAs and POGs are generally more highly leveraged. Figure 4.3 shows that the PMAs and POGs rely heavily on debt financing for capital projects. The PMAs’ and POGs’ ratios of long-term debt as a percentage of total assets are much higher than the IOUs’ because PMAs and POGs finance most of their capital expenditures with debt rather than equity or revenue. IOUs may use a combination of debt, equity, and revenue financing, which results in lower leverage. However, IOUs also pay dividends to stockholders, which are, in essence, a financing cost. This cost is not a factor in the calculation of interest on long-term debt to operating revenue in figure 4.3. If IOUs’ common dividends were included in this calculation, then an average of 15 percent of IOUs’ operating revenue would be paid for financing costs. There is an expected correlation between long-term debt to total assets and interest on long-term debt to operating revenue for each of the entities.
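The two ratios discussed above, and the effect of counting dividends as a financing cost, can be sketched as follows. All inputs are illustrative, not the actual figures behind figure 4.3.

```python
# Illustrative leverage and financing-cost ratios; all dollar inputs are
# hypothetical, not the actual data underlying figure 4.3.

def leverage(long_term_debt, total_assets):
    """Long-term debt as a share of total assets."""
    return long_term_debt / total_assets

def financing_cost_share(interest, operating_revenue, dividends=0.0):
    """Interest (plus, optionally, common dividends) as a share of revenue."""
    return (interest + dividends) / operating_revenue

# A debt-financed PMA- or POG-like utility versus a mixed-financed IOU:
pma_like_leverage = leverage(800e6, 1.0e9)   # 0.8: highly leveraged
iou_like_leverage = leverage(400e6, 1.0e9)   # 0.4: lower leverage

# Counting interest alone understates IOU financing costs; adding common
# dividends captures the cost of equity capital as well:
interest_only = financing_cost_share(30e6, 500e6)          # 0.06
with_dividends = financing_cost_share(30e6, 500e6, 45e6)   # 0.15
```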
Those utilities that utilize debt to a greater extent to finance capital expenditures have greater interest expense relative to operating revenue. The PMA ratio of interest on long-term debt to operating revenue would be much higher if interest rates were not subsidized by the federal government, as discussed in chapter 3. The ratio shown for Southeastern is higher than the other PMAs because of the Russell Project, which is incurring capitalized interest but generating no revenue. Southwestern’s ratio is only 18 percent because of its low average interest rate of 2.9 percent. In commenting on a draft of this report, the PMAs stated that they are not truly comparable to other utilities because they have unique characteristics that make certain comparisons against other utilities of limited value. The PMAs stated, for example, that unlike “traditional utilities,” they do not have a responsibility to meet load growth in their regions or the authority to acquire new firm power resources. The PMAs stated that it is inappropriate to compare their hydropower costs to coal and nuclear generation of other utilities. We agree with the PMAs that they are different from other utilities in the ways discussed in this chapter, including cost of production, types of generating facilities, payment of taxes, accounting and rate-setting, and financing. We also discuss in this chapter the different missions and responsibilities of PMAs, IOUs, and POGs. We believe that power customers are primarily concerned with production costs and resultant electricity rates, not whether the supplier is an IOU, POG, or PMA or whether the supplier is using coal, nuclear, or hydroelectric generation. Given increasing competition and electricity rates that are expected to fall, if the PMAs do not remain low-cost suppliers, then they may not be able to recover all power-related costs. 
Therefore, our discussion of the differences in power production costs between PMAs, IOUs, and POGs and the reasons for these differences is essential. The PMAs agreed with our statement in this chapter that PMAs are low-cost suppliers of electricity. However, the PMAs are concerned that our use of average revenue per kilowatthour (kWh) is overly simplistic and may mislead readers about the magnitude and causes of differences in costs between PMAs and other utilities. The PMAs do not believe average revenue per kWh takes into account differences in types of electricity sold that result in different prices. They believe a more accurate measure would be to compare similar products being offered by different utilities. The PMAs appear to be concurring with the results of our analysis but disagreeing with the methodology that led to those results. We continue to believe that the average revenue per kWh is a strong indicator of the relative power production costs of the PMAs as compared to IOUs and POGs. For PMAs and POGs, over time, average revenue per kWh should equal cost because each operates as a nonprofit organization that recovers costs through revenues. For IOUs, average revenue per kWh should represent cost plus the regulated rate of return. Given that a large portion of IOU rate of return (net income), 80 percent, is used to pay common stock dividends, which is a financing cost, average revenue per kWh also approximates power production costs for IOUs. We acknowledge in appendix I that we did not perform a detailed electricity rate comparison of PMAs to nonfederal utilities. We also state in this chapter that the price that any one utility charges another for wholesale energy comprises numerous factors. We believe that the PMAs’ alternative methodology of comparing similar products being offered would provide a reasonable rate or price comparison. 
However, as the PMAs note in their comments, this analysis would be difficult, and the PMAs themselves have not done it. Also, the PMAs’ proposed analysis would not necessarily result in a better indicator of relative production costs because different types of power may be sold above or below total production cost. Average revenue per kWh, on the other hand, better captures total production costs. The PMAs also stated that a related problem with using average revenue per kWh as a measure of the PMAs’ competitiveness is the variability in output of PMA hydropower projects. The PMAs believe our use of average revenue per kWh to indicate competitiveness could result in wide variations in a PMA’s competitive position from year to year. To address this factor, we reviewed the PMAs’ average revenue per kWh for 1990 through 1994. For each of these years, the PMAs’ average revenue per kWh was consistently at least 40 percent less than that of IOUs and POGs. We believe that this 5-year comparison is a strong indicator of the PMAs’ current competitiveness. The PMAs also expressed concern that the report gives greater focus to advantages enjoyed by the PMAs without giving equal attention to other costs that the PMAs’ customers must repay that would not normally be charged to nonfederal utility customers. The PMAs stated that we report that irrigation assistance is a large subsidy paid by Western’s customers and suggested that we also note other examples, such as future replacement costs, the Hoover Dam Visitor Center, payments in lieu of taxes, and billions of irrigation investments that are not even in service. We believe that our report provides an appropriate discussion of the relative advantages and disadvantages the PMAs have compared to nonfederal utilities. However, we believe the advantages outweigh the disadvantages.
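The average-revenue-per-kWh measure discussed above is simply total power revenues divided by kilowatthours sold. A minimal sketch, with hypothetical revenue and sales figures rather than actual PMA, IOU, or POG data:

```python
# Average revenue per kWh sketch; figures are hypothetical, not actual
# PMA, IOU, or POG revenue or sales data.

def avg_revenue_per_kwh(revenue_dollars, kwh_sold):
    """Total power revenues divided by kilowatthours sold."""
    return revenue_dollars / kwh_sold

def percent_below(rate, other_rate):
    """How far, in percent, one rate sits below another."""
    return 100.0 * (other_rate - rate) / other_rate

pma = avg_revenue_per_kwh(120e6, 6e9)   # $0.02 per kWh
iou = avg_revenue_per_kwh(350e6, 7e9)   # $0.05 per kWh
gap = percent_below(pma, iou)           # about 60 percent below
```

Because revenues and kWh output both enter the calculation, a low-water year raises a PMA's measured rate even when its cost structure is unchanged, which is the variability concern the PMAs raised and the reason we examined the measure over 5 years rather than one.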
The PMAs’ use of hydropower plants built 30 to 60 years ago, tax-exempt status, unrecovered costs discussed in chapter 2, and the financing subsidy discussed in chapter 3, in aggregate, provide the PMAs with a substantial cost advantage compared to nonfederal utilities. We believe this large difference is reflected in the average revenue per kWh comparisons shown in this chapter and appendix V. We agree that the PMAs have disadvantages compared to nonfederal utilities, and we have more fully reflected those in this chapter. For example, we added the Hoover Dam Visitor Center as a nonpower cost that Western must recover through rates. However, we do not agree with the PMAs’ statement that our draft report said that irrigation assistance is a large subsidy paid by Western’s customers. Our draft report stated that “as of September 30, 1995, according to Western, about $32 million of the total $1.5 billion of total irrigation debt has been recovered through electricity rates.” To the extent that Western actually repays this irrigation debt, the power users are subsidizing irrigators. The billions of future irrigation investments that are not even in service are not costs that have been incurred, and it is questionable whether they ever will be incurred. To the extent that these planned future costs are included in Western’s power repayment studies and impact current rates, the actual application of any relevant power revenue would be to other appropriated debt. We believe that until these future irrigation costs are incurred and repaid, or funds are set aside for their future repayment, they do not represent a disadvantage to Western. The PMAs stated that Southwestern’s inclusion of future replacement costs in its current repayment study results in its rates being 10 to 15 percent greater than they would otherwise be. We do not agree with this statement. 
The actual application of the revenues generated by inclusion of these costs in current rates has been to current year capital appropriations or other appropriated debt. As a result, Southwestern has been able to pay off most of its recent, higher interest debt and currently has a weighted average interest rate of 2.9 percent compared to 4.4 percent for Southeastern and 5.5 percent for Western. In addition, as discussed in chapter 3, Southwestern has reduced its balance of appropriated debt from $769 million at September 30, 1991, to $686 million at September 30, 1995. Thus, we believe that Southwestern has managed its appropriated debt using sound business principles and has minimized its interest expense that must be recovered through rates. Another disadvantage cited by the PMAs relates to tentative project cost allocations. The PMAs stated that tentative cost allocations may well be higher than the final allocated costs, as in the case of the Clarence Cannon Project. According to Southwestern’s 1995 annual report, there are four projects that still have tentative allocations. Southwestern states in this report that “[t]he amount of adjustments that may be necessary when final allocations are approved for these projects is not presently determinable.” Because final allocations can either increase or decrease the percentage of costs allocated to power, the net effect of changes to allocations will not be known until all are finalized. Therefore, we do not believe that these tentative allocations represent a disadvantage to the PMAs.
Pursuant to a congressional request, GAO reviewed three power marketing administrations’ (PMA) cost recovery practices and financing for capital projects, focusing on how these PMAs differ from nonfederal utilities. GAO found that: (1) the three PMAs are not recovering through power rates some costs related to producing and marketing federal hydropower; (2) PMAs are not recovering the full costs of providing postretirement health benefits and Civil Service Retirement System pensions to agency employees; (3) the Western Area Power Administration will probably not be able to recover the construction costs of its Washoe Project because it is not generating sufficient revenue to cover its operating and maintenance expenses and repay the federal investment; (4) Western is not required to repay the $454 million allocated to its incomplete irrigation facilities; (5) PMAs rely primarily on debt financing for their large capital construction projects; and (6) compared with nonfederal utilities, PMAs benefit from their reliance on inexpensive hydropower, lower construction costs, and tax-exempt status.
According to EPA’s most recent estimate, in 1991 about 435,000 facilities used one or more aboveground tanks to store petroleum, refined petroleum products, and nonpetroleum oils. According to a trade group survey, about 83 percent of the tanks that store petroleum and petroleum products had a capacity of 500 barrels or less, while about 5 percent of the tanks had a capacity of 10,000 barrels or more. On occasion, these tanks may collapse suddenly or leak gradually over a period of years. For example, two major collapses in 1988 (releasing about 750,000 gallons in Pennsylvania and about 400,000 gallons in California, respectively) contaminated drinking water, damaged private property, killed wildlife, and disrupted businesses. Also, at a facility in Fairfax, Virginia, 100,000 or more gallons leaked into groundwater over a period of years, affecting an area of about 21 acres. Since the leak was discovered in 1990, EPA and others have undertaken extensive actions to monitor it and clean up the affected area. EPA regulates ASTs primarily under the authority of the Federal Water Pollution Control Act (also known as the Clean Water Act), as amended by the Oil Pollution Act of 1990. The Clean Water Act prohibits the discharge of oil into navigable waters and authorizes the issuance of rules establishing procedures, methods, equipment, and other requirements to prevent the discharge of oil from storage facilities. To implement these provisions, EPA promulgated the Oil Pollution Prevention Regulation in 1973. A facility is covered by this regulation if it (1) has an aboveground storage capacity of more than 660 gallons in any single tank, an aggregate aboveground storage capacity of more than 1,320 gallons, or a total underground storage capacity of more than 42,000 gallons; (2) could reasonably be expected to discharge oil in harmful quantities into the navigable waters of the United States; and (3) is not transportation-related. 
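The capacity thresholds and conditions above lend themselves to a simple applicability check. The sketch below (with a hypothetical `spcc_covered` function) encodes only the criteria as summarized here; an actual determination under the regulation involves further regulatory judgment.

```python
# Applicability sketch for the coverage criteria of the 1973 Oil
# Pollution Prevention Regulation as summarized above. This encodes only
# the stated thresholds; real determinations require regulatory judgment.

def spcc_covered(aboveground_tank_gallons, underground_total_gallons,
                 could_discharge_to_navigable_waters, transportation_related):
    """aboveground_tank_gallons is a list of individual AST capacities."""
    capacity_test = (
        any(tank > 660 for tank in aboveground_tank_gallons)   # single tank
        or sum(aboveground_tank_gallons) > 1320                # aggregate
        or underground_total_gallons > 42000                   # underground
    )
    return (capacity_test
            and could_discharge_to_navigable_waters
            and not transportation_related)

# Three 500-gallon ASTs (aggregate 1,500 gallons) near navigable waters:
covered = spcc_covered([500, 500, 500], 0, True, False)   # True
# A single 600-gallon tank with no underground storage falls below both
# aboveground thresholds:
not_covered = spcc_covered([600], 0, True, False)         # False
```

Note that all three coverage conditions must hold together: a large facility that is transportation-related, or one that could not reasonably discharge to navigable waters, is outside the regulation.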
The regulation requires each AST owner or operator to prepare a spill prevention, control, and countermeasure (SPCC) plan. The plan is required to address (1) the design, operation, and maintenance procedures to prevent spills from occurring and (2) countermeasures to control, contain, clean up, and mitigate the effects of an oil spill that affects navigable water. The facility must arrange for a registered professional engineer to certify the plan and any significant changes to it. Following the issuance of our 1989 report, the Congress enacted the Oil Pollution Act of 1990. Among other things, the act expanded activities to prevent and prepare for oil spills and to improve facilities’ capability to respond to spills. As a result of major oil spills, such as the Pennsylvania spill discussed above, along with our 1989 report and similar findings by EPA itself, the agency proposed revisions to its Oil Pollution Prevention Regulation in October 1991 and February 1993. In February 1993, it also proposed rules to implement the 1990 act’s requirement that owners and operators of certain facilities submit “facility response plans.” These plans are required of facilities that, because of their location, could reasonably be expected to cause substantial harm to the environment by discharging oil into navigable waters or adjoining shorelines. In July 1994, EPA completed the portions of the 1993 rules governing facility response plans. However, EPA has not completed the 1991 proposed rulemaking and portions of the 1993 proposed rulemaking dealing with storage tank construction and testing, other portions of the Oil Pollution Prevention Regulation, and efforts to collect data for a national inventory of regulated facilities. In its entry in the May 8, 1995, Unified Agenda of Federal Regulations (a compilation of upcoming regulatory actions), EPA indicated that it did not expect to complete the rules before the end of March 1996. 
EPA inspects facilities to help ensure that they comply with the Oil Pollution Prevention Regulation. In our 1989 report, we stated that in fiscal year 1988, EPA inspected approximately 1,000 facilities. In fiscal year 1994, EPA’s 10 regional offices inspected 1,852 facilities. The four regions with the most inspections were Region 9 (San Francisco) with 350, Region 6 (Dallas) with 321, Region 10 (Seattle) with 300, and Region 3 (Philadelphia) with 257. In its 1990 response to our 1989 report, EPA generally said that it was considering or taking action to implement our seven recommendations on the regulation and inspection of ASTs. In two cases, it projected that action would be completed by the end of 1990. EPA has taken steps to strengthen the regulations for tank construction and contingency plans, although these steps do not fully implement our three recommendations. EPA officials said that further action on two of these recommendations is planned but that the timing is uncertain. EPA officials told us that the implementation of our recommendations to strengthen these regulations was delayed primarily because of the requirements imposed by the 1990 act and subsequently delegated to EPA. Among other things, the act mandated the issuance of rules requiring the preparation of facility response plans, required the development of area contingency plans, and required a study of the need for liners under ASTs; a report on the results of the study was due within 1 year. The officials said that implementation was also delayed by the difficulties that EPA encountered in obtaining OMB’s approval for a national inventory of regulated facilities. (The inventory is related to EPA’s inspection program, as explained in the next section.) An EPA official said that the agency prefers to complete the proposed rules on tank construction and contingency plan regulations together with the proposed rule relating to inspections. Mandating standards for tank construction and testing. 
In 1989, we reported that EPA’s rules did not incorporate specific standards for constructing and testing ASTs. Therefore, to decrease the chances of damaging oil spills in the future, we recommended that EPA require that ASTs be built and tested in accordance with the industry’s or other specified standards. The rules that EPA proposed in 1991 would strengthen the provisions dealing with tank construction but would not require adherence to the industry’s or other standards, a criterion specified in our recommendation. Specifically, the proposed rules would add a new recommendation that the construction, materials, installation, and use of tanks conform with the relevant portions of the industry’s standards but would not convert this recommendation into a requirement. In connection with the testing of ASTs, the proposed rules would considerably strengthen current provisions, which is consistent with our 1989 recommendation. The current Oil Pollution Prevention Regulation provides that ASTs “should” be subject to “periodic” integrity testing and “should” be “frequently” observed for leaks. By contrast, the proposed rules would “require” integrity testing every 5 years, unless the facility incorporates secondary containment features; in such cases, integrity testing would be required every 10 years and when major repairs are made. In addition, the proposed rules would require the facilities without secondary containment to conduct integrity and leak testing of their valves and piping at least annually. Minimizing damage from spilled oil. In 1989, we reported that EPA’s rules addressed containing spilled oil within tank facilities but did not require that tank owners and operators develop plans to deal with oil escaping in large quantities beyond the facilities’ boundaries. Therefore, we recommended that owners and operators be required to develop such response plans. 
Moreover, because spilled oil could be spread very rapidly through storm water drainage systems, we recommended that the rules require, not merely recommend, that such systems be designed and operated to prevent oil from passing through them. The rules issued pursuant to the 1990 act partially implement our recommendation on response plans. These rules, which became effective in August 1994, require that certain oil storage facility owners and operators prepare facility response plans for responding to “worst case” oil discharges or a substantial threat of such a discharge. According to an EPA official, EPA expected to receive such plans from about 6,000 facilities—roughly 1 percent of all covered facilities—that pose the greatest risk to the environment. (We were told that approximately 4,500 facilities had submitted plans as of April 1995.) The official said that EPA had no plans to expand the overall rules to cover additional facilities. However, he also noted that the current rules permit EPA’s regional administrators to require the submission of a response plan by certain other facilities that have been determined on a case-by-case basis to present an unusual risk. He estimated that about 1,000 such facilities might be required to submit a response plan. The 1991 proposed rules address our recommendation on storm water drainage systems by replacing the guidelines in the current rules with requirements. Generally, the rules would require that drainage from diked storage areas be restrained by valves or other means to prevent a spill or other excessive leakage of oil into the drainage system. EPA is taking steps to implement all four of our recommendations to strengthen the inspection program. However, according to EPA officials, three recommendations will not be fully implemented until 1996, and they are uncertain when the fourth recommendation will be implemented.
The officials explained that meeting the requirements of the 1990 act was the primary reason why these recommendations were not implemented earlier. Difficulty in securing OMB’s approval to collect data for a national inventory of regulated facilities also delayed implementation. In 1989, we reported that EPA had not issued national guidance on how to select facilities for inspection, even though selectivity is necessary since the industry is large and inspection resources are limited. EPA could not develop effective inspection priorities because it had little information on the number of facilities or tanks or on their size, age, location, or quality of construction. It needed this type of information to target for inspection those facilities that posed the greatest environmental risk. Accordingly, we recommended that EPA develop a system of inspection priorities on the basis of a national inventory of tanks. We found that EPA is working to develop a national inventory of tanks and to develop inspection priorities. In 1991, EPA sought OMB’s approval to collect data from all facilities that might be covered by the Oil Pollution Prevention Regulation. However, OMB stated that EPA had not adequately justified the proposed reporting requirements and did not approve the request. EPA is undertaking a more limited survey of about 30,000 facilities that are considered most likely to be covered by the Oil Pollution Prevention Regulation. After a pilot survey in 1994, the survey instrument was mailed out in April 1995, and the results of the survey are expected in late 1995. The survey requests information on the facilities’ characteristics and operations, the oil tanks’ storage capacity and the product stored, and recent oil spills. Depending on the results of this survey, EPA may seek OMB’s approval to collect limited data from all facilities. EPA also expects to use this survey to provide information on regulated facilities. 
For example, the information could be used to provide a basis for developing inspection priorities. Such targeting is still needed because only a small fraction of the total number of facilities is inspected each year. As previously noted, the number of facilities inspected by EPA nearly doubled between fiscal years 1988 and 1994. Despite the increase, however, EPA inspected less than one-half of 1 percent of all facilities. Although EPA has not established overall inspection priorities, it has identified one national priority. It established an expectation that each region will, between fiscal years 1995 and 1997, inspect all of the facilities located in that region that are required to prepare a facility response plan. Meanwhile, we were told that EPA has taken other steps to help its regional offices identify the facilities that are likely to be covered by the Oil Pollution Prevention Regulation. For example, EPA obtained Dun & Bradstreet data on the facilities in those industries that are considered likely to be regulated and provided this information, for the individual states in each region, to its regional offices. According to officials in Regions 3 and 6, which we visited for this review, neither region has a complete inventory of the facilities in the states it covers. Region 3’s SPCC coordinator told us that the region drafted its own targeting strategy in December 1994. She said that various criteria are used to select the facilities to be inspected. These criteria include a facility’s spill history, the facility’s potential to cause significant and substantial harm to the environment, and referrals from federal, state, or local government officials or the public. Similarly, according to a Region 6 official, the region targets inspections in the five states it covers by using data on such factors as spill histories, water supplies, and sensitive ecosystems. 
The region also considers referrals from states and other federal agencies and citizens’ complaints. In 1989, we reported that EPA headquarters had not required its regions to follow uniform procedures for conducting and documenting inspections. Moreover, the four EPA regions we visited at that time also had not developed written procedures on how to conduct inspections. Regional officials told us that they relied on the experience and knowledge of individual inspectors rather than on written procedures. To help ensure that inspections are performed thoroughly, establish a record of facilities’ compliance with the rules, and help pinpoint overall problem areas in the industry, we recommended that EPA develop instructions for performing and documenting inspections. We found that EPA headquarters still has not developed such instructions, although work to develop uniform procedures has begun. Headquarters officials collected the various regions’ instructions, circulated them to officials in other regions in May, and asked for their comments. The cognizant headquarters official said that he hopes to complete the development of uniform procedures by late 1995. We found both similarities and differences in the inspection procedures and documentation developed by the two regions we visited. For example, staff in both regions collect information about a facility before inspecting it. Region 3 staff said that they check whether the facility has had any reported oil spills and may check with the state’s environmental agency for relevant information. Region 6 staff said that they typically visit a facility known to contain ASTs. During this visit, they take photographs and record their observations of the facility’s general condition. We noted a difference in regional practices with respect to advance notification. Region 3 staff told us that they usually do not contact a facility before arriving to inspect it. 
Region 6 staff told us they do notify the facility in advance that they intend to conduct an SPCC inspection. Both regions developed inspection checklists, which list items to be checked and also provide a standard format for documenting the inspection results. Region 3 uses a single checklist for all types of facilities that documents both the inspection of the facility and the review of its spill response plan. Region 6 uses one checklist for inspecting a facility and another for reviewing its SPCC plan and also uses different checklists for different types of facilities. In 1989, we reported that EPA headquarters had not defined training needs for inspectors. As a result, each EPA region established a training program using different program styles, curricula, and manuals. While most regions had developed training manuals, their contents and use varied from region to region. We concluded that while some regional differences in the oil storage industry may justify some differences in the training of inspectors, because the Oil Pollution Prevention Regulation is national in scope, inspectors should possess a common body of knowledge and a minimum level of skills to implement the regulation. We recommended that EPA define and implement minimum training needs for inspectors. We found that EPA still has no national guidance on the training of SPCC inspectors. However, headquarters officials told us that a work group has begun developing such guidance and should complete it in early 1996. Meanwhile, EPA has funded some training-related activities in the regions. EPA headquarters provided an average of approximately $900,000 a year in fiscal years 1992 through 1994 to selected regions to support training and other activities related to enforcing the Clean Water Act. For example, Region 6 developed a series of videotapes that are used to train AST inspectors, among other purposes, and shared them with other regions. 
In 1989, we reported that in the four EPA regions we visited, many of the oil storage facilities that were inspected were found to be out of compliance with the Oil Pollution Prevention Regulation. Nevertheless, EPA rarely imposed penalties (up to $5,000 a day), in part because it lacked national guidance for this action. We recommended that EPA establish a national policy for fining violators. We found that there is still no final policy on fining violators, although a senior attorney in EPA’s Office of Enforcement and Compliance Assurance told us that draft guidance on fining violators has been developed and was provided to the regions for their guidance in 1993. This official said that he hoped the policy would be completed by the end of 1995. Region 3 officials told us that they rely on the draft penalty guidance in dealing with companies found not to be in compliance with the rules. For example, they said that they had used the guidance in calculating a substantial penalty against a certain company. However, a senior regional attorney told us that, in his opinion, courts would more readily defer to a final policy than to a draft policy. Region 6 officials told us that they rarely pursue fines against companies not in compliance, even though they found that about 80 percent of the facilities inspected in fiscal years 1993 and 1994 were out of compliance. They said that they prefer to work with companies to bring their facilities into compliance. Also, they can conduct many more inspections and bring more facilities into compliance if they do not divert resources to pursue enforcement action against companies. As in Region 3, a Region 6 attorney agreed that a final policy would carry more weight with the courts. EPA generally agreed with the seven recommendations in our 1989 report on the regulation and inspection of ASTs, and it has taken some steps to implement them. 
In 1994, EPA partially implemented our recommendation on contingency planning, and by 1996 it expects to implement three more recommendations (on inspection procedures and documentation, training for inspectors, and penalties for noncompliance). EPA is uncertain when the other three recommendations (on tank construction and design and on targeting inspections) will be implemented. Implementing all of our recommendations will help EPA ensure that the nation’s ASTs are being properly regulated and inspected and that human health and the environment are safeguarded from the effects of oil spills. In performing this follow-up work on the regulation and inspection of aboveground storage tanks, we (1) reviewed applicable laws and regulations; (2) interviewed officials in EPA’s headquarters (Washington, D.C., and Crystal City, Virginia), Region 3 (Philadelphia), and Region 6 (Dallas); and (3) reviewed relevant records. The activities in these two regions may not be representative of the activities in all EPA regions, but as agreed with your offices, we selected these regions because they have relatively active SPCC programs and because they oversee diverse types of facilities. We did not evaluate the effectiveness of EPA’s actions to date. Also, we did not independently verify the data provided by EPA officials. We conducted our work between February and May 1995 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from EPA. On May 31, 1995, we met with the Acting Chief of the Oil Pollution Response and Abatement Branch to obtain the agency’s comments on the draft report. During our meeting, he told us that he generally agreed with the facts presented and the conclusions reached. He identified several areas where he believed that we could present a fuller picture of relevant developments. We revised these areas accordingly. 
In addition, he provided updated information and technical corrections in a few cases, which we included where appropriate. As arranged with your offices, we plan no further distribution of this report until 30 days from the date of this letter, unless you publicly announce its contents earlier. Upon release, we will send copies to the Administrator of EPA and will make copies available to others on request. If you have questions, I can be reached at (202) 512-6111. Other major contributors to this report are listed in appendix III. In 1989, we made seven recommendations to the Administrator of the Environmental Protection Agency (EPA) in order to improve the regulation and inspection of aboveground oil storage tanks. These are listed below.
To improve the likelihood that aboveground oil storage tanks are built to the industry’s standards and decrease the chances of future damaging oil spills, we recommended that the Administrator amend the applicable regulations to require that
- aboveground oil storage tanks be built and tested in accordance with the industry’s or other specified standards;
- facilities plan how to react to a spill that overflows their boundaries; and
- storm water drainage systems be designed and operated to prevent oil from escaping through them.
To better ensure the safety of the nation’s aboveground oil storage facilities and decrease the chances of oil being discharged into the environment, we recommended that the Administrator strengthen EPA’s aboveground oil storage facility inspection program by
- developing, in coordination with state and local authorities, a system of inspection priorities on the basis of a national inventory of tanks;
- developing instructions for performing and documenting inspections;
- defining and implementing minimum training needs for inspectors; and
- establishing a national policy for fining violators.
As requested by your offices, we are providing data on various characteristics of aboveground storage tanks from studies done by the Environmental Protection Agency, the American Petroleum Institute (API), and the Environmental Defense Fund (EDF). The data provide broad estimates on the numbers, ages, and locations of oil storage facilities; the construction and operation of aboveground tanks at these facilities; and estimates of leaking tanks and their potential adverse effects. We did not assess the accuracy or reliability of the information presented. EPA officials told us that because of a lack of data on ASTs, and in view of several oil pollution incidents, such as the contamination of property in Fairfax, Virginia, the agency in recent years has undertaken several AST studies. In a January 1991 study, EPA estimated the numbers of facilities in 16 industrial categories that meet the storage capacity requirements of the Oil Spill Prevention, Control and Countermeasures (SPCC) program established under section 311(j) of the Clean Water Act. In response to proposed October 1991 revisions to the agency’s Oil Pollution Prevention Regulation, EPA refined its estimate of the number of facilities covered by the SPCC program’s requirements by excluding certain facilities with underground tanks that were covered under other EPA regulations. In August 1994, EPA’s Aboveground Oil Storage Facilities Workgroup produced a draft study of the problem of soil and groundwater contamination due to oil spills and leaks from facilities with ASTs. In December 1994, the agency produced a draft study required by section 4113(a) of the Oil Pollution Act of 1990 that assessed the technical and economic feasibility of using liners and related systems to detect leaking oil and to prevent it from contaminating soil and navigable waters. API has also been active in studying ASTs and publishing AST standards for its members. 
In April 1989, API published a widely cited Aboveground Storage Tank Survey performed under contract by Entropy Limited that covered the numbers of tanks and their ages, capacities, and construction in all segments of the petroleum industry, namely marketing, refining, transportation, and production. A second API member survey, published in July 1994, among other things ranked the sources of groundwater contamination from ASTs. A series of API standards issued in 1987 and during the 1990s set industry standards for such things as tank inspection, repair, alteration, and reconstruction; tank design, construction, operation, and maintenance; and the establishment of a program to certify inspectors. EDF published a report on the regulation of ASTs in February 1993. EDF’s report addressed pollution prevention, groundwater monitoring, reporting of underground leaks, and cleanup and release containment. In 1991, EPA estimated that about 435,000 facilities (a facility could have one or more tanks) were required to develop SPCC plans under the Oil Pollution Prevention Regulation. The regulation applies to non-transportation-related facilities that have the potential to discharge oil to waters of the United States in quantities that may be harmful and that have oil storage capacities greater than 42,000 gallons underground, greater than 1,320 gallons aboveground, or greater than 660 gallons in a single tank aboveground. Table II.1 shows EPA’s estimate. API’s April 1989 survey estimated that about 700,000 aboveground tanks (as opposed to EPA’s estimate of 435,000 facilities) were used in the marketing, refining, transportation, and production segments of the petroleum industry. Although the survey excluded tanks at user locations (e.g., vehicle rental locations), API believed them to be a small part of the total tank population. API generally defined ASTs as tanks with a capacity of 1,100 gallons (26 barrels) or greater. Table II.2 shows API’s estimate.
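The storage-capacity criteria above lend themselves to a simple illustrative check. The sketch below (in Python, with a function name and parameters of our own invention) encodes only the thresholds quoted in the report's description of the regulation; the actual regulation contains further criteria and exemptions.

```python
def spcc_plan_required(
    underground_gal: float,
    aboveground_total_gal: float,
    largest_aboveground_tank_gal: float,
    could_discharge_to_us_waters: bool,
) -> bool:
    """Illustrative sketch of the SPCC capacity criteria described above.

    A non-transportation-related facility is covered if it could discharge
    oil to U.S. waters in harmful quantities AND exceeds any one of the
    three storage-capacity thresholds (42,000 gal underground, 1,320 gal
    total aboveground, or 660 gal in a single aboveground tank).
    """
    if not could_discharge_to_us_waters:
        return False
    return (
        underground_gal > 42_000
        or aboveground_total_gal > 1_320
        or largest_aboveground_tank_gal > 660
    )
```

For example, under this sketch a facility storing 2,000 gallons aboveground in several small tanks would meet the aboveground threshold, while one storing 1,000 gallons aboveground in tanks no larger than 600 gallons would not.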
Marketing includes petroleum products stored for wholesale or for direct sale to users, including tank farm distribution centers as well as gasoline retail stations and home heating supply distributors. Refining includes refineries at which crude oil is chemically and physically treated to produce a variety of petroleum products, including gasoline, diesel fuel, and jet fuels. Transportation includes pipeline operations at which large quantities of crude or refined product are stored until they can be transported offsite by pipelines to refineries or to marketers. Production includes facilities at which crude oil coming from the ground is gathered and stored until it can be delivered to refineries. EDF, using API data, estimated that there were at least 800,000 to 900,000 aboveground petroleum tanks nationwide. EDF added 100,000 to 200,000 tanks to API’s 1989 estimate to account for small distribution facilities not counted by API. Besides petroleum tanks, EDF also estimated that there are an additional 200,000 aboveground tanks storing hazardous products (e.g., chemical industry products and raw materials). Although the Oil Pollution Act of 1990 covers hazardous products, EPA has actively regulated only oil-containing ASTs and underground storage tanks under the SPCC program. According to an SPCC program official, EPA has not implemented provisions of the Oil Pollution Act of 1990 requiring facility response plans for hazardous substances because hazardous substances are covered by other statutes, such as the Clean Air Act, the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (Superfund), the Occupational Safety and Health Act, and the Resource Conservation and Recovery Act of 1976. EPA, however, is currently studying a plan to incorporate hazardous substances into facility response plans.
API’s 1989 survey estimated that over 80 percent of ASTs have storage capacities of 500 barrels (21,000 gallons) or less, as shown in table II.3. The survey also shows that while about 83 percent of the generally smaller production tanks were shop-fabricated, about 95 percent of the generally larger refining tanks were reconstructed, meaning that the tanks were dismantled at one place of service and rebuilt at another, or were riveted, bolted, or welded in the field. Furthermore, the ages of tanks differed significantly by industry sector. API’s survey showed that of the tanks whose ages were known, 8 percent of tanks used in production were over 30 years old, while 64 percent of tanks used in refining were over 30 years old. Table II.4 shows the results of API’s survey of tank ages. According to an SPCC program project manager, the tanks in API’s universe are representative of larger facilities that may have proportionately larger tanks than those included in EPA’s estimate of facilities covered by the SPCC program. The official said that larger tanks tend to be field-erected, while smaller tanks are built in factories as prefabricated units and delivered to sites. As shown in table II.5, API’s April 1989 survey estimated that the following petroleum products were stored in ASTs at marketing, refining, and transportation facilities. The product stored at production facilities is primarily crude oil. API’s April 1989 survey estimated state-by-state totals for ASTs used in production. The 31 states covered by API are shown in table II.6. According to EPA officials, comprehensive data do not exist to quantify adequately the extent to which ASTs are leaking. Accordingly, EPA developed an approach to estimate the number of ASTs leaking oil and the corresponding volume of the products leaked. EPA developed a relationship between the age of ASTs and tank failure rates.
Key data sources for this analysis were API’s April 1989 survey, which provided data on the age and storage capacity of ASTs, and a 1988 study of tank failure rates. Table II.7 shows EPA’s preliminary estimates of leaking ASTs by storage capacity tier from the draft December 1994 liner study. EPA has found that leaks typically originate from the bottom of vertical ASTs as a result of perforations often caused by corrosion. Underground piping was also identified as a significant potential source of leaking oil at AST facilities. API’s July 1994 AST survey report stated that during the past 5 years, groundwater contamination appears to have been caused by a variety of minor sources. Additionally, the survey data noted that AST bottom leaks were not a major source of contamination. Survey respondents indicated that less than 3.6 percent of ASTs (in all age categories) had confirmed bottom failures within the past 5 years. The survey report stated that pressurized buried piping has been the most predominant source of contamination in all three sectors over the past 5 years. EPA estimated oil leaks for 75,000 tanks in the petroleum industry with a storage capacity in excess of 42,000 gallons. On the basis of the age of ASTs, the likelihood of developing corrosion leaks, and leak detection thresholds, EPA’s preliminary estimates show that ASTs could be leaking between 43 million and 54 million gallons of oil annually. Regarding threat, EPA has found that oil discharge incidents have the potential to cause widespread damage, including contamination of soil, groundwater, and surface water supplies and loss of property. Because several hundred thousand onshore facilities with ASTs are located throughout the United States—many are near sensitive environments, including groundwater and surface water—discharges from ASTs represent a potentially significant environmental hazard.
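EPA's age-based approach can be illustrated with a small sketch: multiply each age cohort of tanks by an assumed annual failure rate and sum the results. The cohort sizes and failure rates below are hypothetical placeholders chosen for illustration only, not EPA's, API's, or the 1988 study's actual figures.

```python
# Hypothetical annual corrosion-failure rates by tank age cohort.
# These values are illustrative placeholders, not EPA's actual rates.
HYPOTHETICAL_FAILURE_RATE = {
    "0-10 years": 0.001,
    "11-20 years": 0.005,
    "21-30 years": 0.015,
    "over 30 years": 0.030,
}


def expected_leaking_tanks(tanks_by_age: dict) -> float:
    """Sum each cohort's tank count times its assumed annual failure rate."""
    return sum(
        count * HYPOTHETICAL_FAILURE_RATE[cohort]
        for cohort, count in tanks_by_age.items()
    )


# A hypothetical population of 75,000 large tanks, split across cohorts:
population = {
    "0-10 years": 20_000,
    "11-20 years": 25_000,
    "21-30 years": 20_000,
    "over 30 years": 10_000,
}
```

Because older cohorts carry higher assumed failure rates, an aging population (like the refining sector, where 64 percent of tanks were over 30 years old) yields a proportionally larger expected number of leaking tanks under this kind of model.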
In addition, EPA has stated that oil spill incidents can pose risks to human health. According to EPA, although the extent of injuries is unknown, most known injuries to human beings from exposure to oil have occurred as a result of their inhaling its vapors. Effects on humans from exposure to oil include generalized weakness, lethargy, dizziness, convulsions, coma, and death from acute exposure to volatilized constituents by inhalation; cancers of various organs; blood cancers such as leukemia; and generalized suppression of the immune system from chronic exposure by inhalation. API’s July 1994 member survey found that 78 percent of refining and 54 percent of marketing facilities have 75 percent or more of their AST-associated piping aboveground. In contrast, most transportation facilities leave the AST-associated piping below ground. According to the report, there are several reasons why the AST-associated piping is buried at transportation facilities. For example, these facilities are frequently remotely located, and as a result, piping is buried to prevent vandalism. The report noted that in certain situations, piping can be moved aboveground. However, safety and operational considerations may require that piping be buried. Inspections, emergency access, repair, exposure to radiant heat, expected settlement, earthquakes, thermal expansion/contraction, tank drainage, and susceptibility to vandalism are all considered when deciding to install piping above or below ground. The survey report stated that where operational and safety considerations allow, the relocation of older buried piping aboveground has been an ongoing practice at facilities in the refining, marketing, and transportation sectors for a number of years. Secondary containment structures are typically designed to contain the entire contents of the tank or tank battery within the structure and serve to contain any spilled oil or product in the event of a leak or sudden discharge. 
EPA found that secondary containment structures vary greatly, depending on the size of the tanks and the physical characteristics of the facility, and may be constructed of compacted soil, clay, concrete, or other synthetic material. Each of the different types of liners, such as impervious soil, coated or uncoated concrete, and geomembrane liners, can be effective in preventing groundwater contamination and in detecting leaks if properly installed and maintained. Poor maintenance can significantly reduce the effectiveness of certain types of liners. According to EPA, current technology has produced a variety of leak detection systems, including alarms, inventory control, acoustic emissions testing, and volumetric measurement, and industry is aggressively developing technology to make leak detection more reliable. Leak detection methods are either continuous or periodic. Continuous methods provide uninterrupted monitoring and, consequently, instant notification of tank failure or an oil discharge. Examples of continuous systems are overfill alarms and overfill sumps. Periodic leak detection involves checks or tests at regular intervals to determine the occurrence of oil discharges or tank bottom failure. Periodic systems include internal/external visual inspections, pressure/vacuum testing of tanks and piping, volumetric precision testing of the tank, inventory record and measurement reconciliation, acoustic emissions testing, and chemical gas detection methods.
Karen Keegan, Senior Attorney
Pursuant to a congressional request, GAO reviewed the Environmental Protection Agency's (EPA) actions to address weaknesses in the regulation and inspection of aboveground oil storage tanks (AST), and provided information on the age, size, and other characteristics of AST. GAO found that: (1) EPA has not fully implemented any of its seven recommendations to improve the safety of aboveground oil storage tanks; (2) EPA has only partially implemented the recommendations because it gave higher priority to implementing new legislative requirements and had difficulty obtaining Office of Management and Budget approval to collect data for a national inventory of regulated facilities; (3) EPA has partially implemented one of three recommendations to strengthen its regulations governing storage tank construction; (4) proposed regulations emphasize, but do not require, that tank construction comply with certain standards and recommend that tanks be periodically tested; (5) EPA has required the facilities that pose the greatest environmental risk to develop response plans to minimize damages from spilled oil, but it has no plans to extend the requirement to other facilities; and (6) EPA expects to implement three recommendations on improving inspection procedures and documentation, training inspectors, and establishing penalties for noncompliance by 1996, but it does not know when the fourth recommendation on targeting inspections will be implemented.
Security at federal courthouses is complex and involves multiple federal stakeholders with different roles and responsibilities (see fig. 1). A 1997 memorandum of agreement (MOA) between these entities defines the roles and responsibilities for each of these stakeholders in protecting federal courthouses and the federal framework for securing courthouses. The MOA recognized areas in which stakeholders are to coordinate their security efforts and established an informal collaboration and oversight mechanism at the regional level. The following federal stakeholders receive funding for court security activities in different ways: FPS is funded by the security fees it collects from agencies that occupy GSA facilities for the security services FPS provides and does not receive a direct appropriation. The judiciary receives a court security appropriation. The amount for fiscal year 2016 was approximately $538 million. AOUSC uses part of this appropriation to pay for FPS fees and transfers part to the Marshals Service for specific judiciary related costs or security equipment. In addition to the funds received from AOUSC, the Marshals Service receives direct appropriations for construction in space controlled, occupied, or used by the Marshals Service for prisoner holding and related support (for example, vehicle sally ports and prisoner elevators). Instead of receiving direct appropriations, GSA administers the Federal Buildings Fund, which is the primary source of funds for operating federal space held under the custody and control of GSA and the capital costs associated with the space. The Federal Buildings Fund is funded primarily by income from rental charges assessed to tenant agencies occupying GSA-held and -leased space that approximate commercial rates for comparable space and services. 
Congress exercises control over the Federal Buildings Fund through the appropriations process that sets annual limits— called obligational authority—on how much of the fund can be obligated for various activities. GSA, as an executive branch agency, requests obligational authority from Congress as part of the annual President’s Budget Request. GSA’s total obligational authority for fiscal year 2016 was approximately $10.2 billion. The Interagency Security Committee (ISC) addresses the quality and effectiveness of physical security for federal facilities, including courthouses. The ISC sets out the risk management process for federal facilities in the ISC’s risk management standard. Pursuant to this standard, FPS conducts facility security assessments, which consist of identifying and assessing threats to, and vulnerabilities of, a facility as well as identifying countermeasures (e.g., security equipment) best suited to mitigate vulnerabilities at the facility. These assessments generally focus on building systems and perimeter and entry issues. The ISC risk management standard also lays out standards for establishing facility security committees, which consist of a representative from each of the tenant agencies in the facility, and which are responsible for addressing security issues identified in the facility security assessment and approving the implementation of recommended security countermeasures. These standards include the following: facility security committees are established when two or more federal tenants with funding authority occupy a facility, findings from the FPS facility security assessments are to be presented at facility security committee meetings, and meeting minutes must document each vote to approve or disapprove a recommended countermeasure, and if agenda decisions are disapproved, the meeting minutes must document the chosen risk management strategy. 
As new threats to federal facilities have emerged, the federal government has released additional directives related to the security of federal facilities, including courthouses. For example: The National Strategy for the Physical Protection of Critical Infrastructures and Key Assets. Following the attacks on September 11, 2001, the White House developed this National Strategy to ensure that initial efforts to protect key assets were sustained over the long term. Courthouse security falls under the National Strategy which outlines the guiding principles that underpin national efforts to secure infrastructure and assets vital to public health and safety, national security, governance, economy, and public confidence. The National Infrastructure Protection Plan. DHS developed this plan to guide the national effort to manage risks to critical infrastructure. Identifying security concerns at federal courthouses is critical to managing the risk to those courthouses. We previously compiled a risk management framework applicable to protecting federal facilities that defined risk management in general as managing across a portfolio. We have also issued other reports in recent years that discuss the importance of understanding risk comprehensively (rather than only on an individual building basis) in order to effectively protect federal facilities consistent with that definition. AOUSC collects security information in a way that provides a picture of portfolio-wide concerns and can be used to comprehensively understand security concerns across the portfolio of federal courthouses. AOUSC assesses and scores courthouses on the security features of court operations, in accordance with the U.S. Courts Design Guide as part of their long-range capital-planning process, according to AOUSC officials. 
Through this process, AOUSC develops security scores for courthouses that range from 0 to 100, with 100 being an ideal courthouse that meets all assessed security factors, as determined by the judiciary. These scores allow the judiciary to compare security needs across courthouses and understand the relative security deficiencies of one courthouse compared to others. AOUSC has three categories to describe these security scores: below 60 is poor, 60–79 is marginal to acceptable, and 80–100 is good. AOUSC’s scores reflect different aspects of courthouse security, such as whether the courthouse has separate pathways for judicial personnel, prisoners, jury members, and the public; secured parking for judges; vehicle sally ports for prisoner transport; an adequate number of courtroom holding cells; and physical barriers to block unwarranted vehicular access. While AOUSC’s security scores consider some aspects of security on the perimeter and in space where prisoners are held, detailed assessments of these aspects of security are the responsibility of FPS and the Marshals Service, consistent with their missions. The Marshals Service and FPS also identify security concerns at individual courthouse facilities, focused on their respective missions, but unlike AOUSC, they do not currently collect this information in a way that it can be readily compared across the portfolio of courthouses to gauge the overall concerns with these buildings. As discussed below, the Marshals Service identifies security concerns through two kinds of project requests to address security concerns. The Marshals Service is taking steps to improve the information it collects; however, these steps may not enable it to understand concerns portfolio-wide as defined by our risk management framework, because of the reasons discussed below. 
Marshals Service officials told us that previously, they had no means to prioritize among project requests to correct deficiencies in judicial space, such as those in courtrooms. The Marshals Service is piloting an initiative to create a means of prioritizing these requests into three levels of priority. However, Marshals Service officials told us that they still will not be able to compare similar concerns from one courthouse to another once the improvements to the process are made, because similar concerns would fall into the same priority level, and the initiative does not have a method for prioritizing within the same priority level. For projects to correct deficiencies in Marshals Service space, such as the areas used to move prisoners throughout the courthouse, headquarters Marshals Service officials told us that they currently rely on institutional knowledge to evaluate requests. Marshals Service officials said that it can be difficult to determine which projects to fund and not all officials would arrive at the same decisions, as there is currently no standard process for reviewing project requests and making funding decisions. To improve this process, the Marshals Service is developing a decision matrix to document how decisions are made, but officials said they were not sure if this process would result in a way to compare projects as part of the portfolio of courthouses, as they are still early in the process of developing the matrix. FPS conducts facility security assessments of individual buildings, including courthouses. These assessments consist of identifying and assessing threats to and vulnerabilities of a facility, for example, whether security equipment is working properly. 
FPS shares these assessments and recommendations for countermeasures with the building's facility security committee as part of the security services it provides to its customer agencies, and the facility security committee votes on whether to approve or disapprove suggested countermeasures. Information on the status of FPS countermeasure recommendations—whether facility security committees have accepted, rejected, or not made a decision—can provide insight into the level of risk tenant agencies accept at a particular facility and enables risk-informed decisions. FPS began tracking the facility security committee decisions at the individual facility level in fiscal year 2015. Our prior work has found that the tool FPS uses to conduct facility security assessments was not designed to compare risks across federal facilities. FPS officials recognize the value of being able to analyze countermeasures across courthouses and other federal buildings. They said that this information would provide a greater understanding of which countermeasures were consistently accepted or rejected, which could help FPS make better recommendations for all federal buildings, not just courthouses, in its facility security assessments. For example, if FPS knew that a particular recommendation was frequently rejected because it is cost prohibitive, FPS might look for another, less costly option to mitigate that deficiency, according to officials. FPS is also pursuing the capability to track the status of countermeasures in an automated way as part of initial plans for a software upgrade for its vulnerability assessment tool that allows FPS inspectors to review recommended countermeasures, among other things. However, officials were not certain when, or if, this capability will ultimately be included in the upgrade. 
FPS officials said that absent the capability to track countermeasure status in an automated way, obtaining information on whether countermeasure recommendations are accepted or rejected across all courthouses (or analyzing them by other variables) would be a labor-intensive process because relevant data are not easy to retrieve and would have to be compiled manually. FPS officials said that there might be other ways to obtain this capability, but so far, they have not developed them. Further, FPS officials said that facility security committees often do not report whether they are approving or disapproving a countermeasure, even though the ISC standard calls for approval or disapproval to be documented in the facility security assessment. Tracking information on countermeasure implementation across the portfolio could help hold facility security committees accountable for their responsibilities under the ISC standard. The improvements that both agencies are making to their information on security concerns are promising but may not provide the portfolio-wide information that decision makers need to make risk-informed decisions. Portfolio-wide information could enhance the way that headquarters Marshals Service officials make decisions when selecting security projects, so that the selections address the most urgent needs, and could put FPS in a better position to understand the degree to which facility security committees are accepting risk at federal facilities. Congress provided $20 million in obligational authority for the CSP in the Consolidated Appropriations Act, 2012, and also provided obligational authority for the program for fiscal years 2013, 2015, and 2016, which GSA has designated for 11 projects in 10 locations. 
The program, which is funded from GSA's Federal Buildings Fund, is intended to address security deficiencies in existing buildings where physical renovations ("brick and mortar solutions") are viable, and to provide a vehicle for addressing security deficiencies in a timely and less costly manner than constructing a new courthouse. Program goals include: (a) utilizing existing building assets and government resources cost-effectively; (b) addressing security deficiencies that put the public and government staff at risk; and (c) providing a low-cost alternative to high-cost capital investments. Courts with adequate space to house judicial officers but with poor physical security are eligible to participate because such courts are unlikely to obtain a new courthouse in the foreseeable future. As of March 2016, two projects had been completed, two were in construction, four were in design, and three had not yet begun design, as shown in Table 1. CSP projects are designed to improve the separation of circulation in accordance with the U.S. Courts Design Guide, which states that an essential element of security design is the physical separation of the public, judges, and prisoners into three separate paths of circulation so that trial participants do not meet until they are in the courtroom during formal court proceedings. AOUSC officials told us that having three separate paths of circulation is important so that judges are protected from being influenced or threatened by parties to court proceedings, their families, or other members of the public when entering and circulating through a courthouse. They also told us that criminal defendants pose a security risk to co-defendants, witnesses, and the general public. Some of the CSP improvements to address these separate paths of circulation include:
Adding or enlarging sally ports: Some federal courthouses have no vehicle sally port (or an inadequate one) for the Marshals Service to load and unload prisoners. 
Building secure parking for judges: Some federal courthouses do not have a secure place for judges to park and enter the building.
Adding elevators for prisoners and/or judges: In some of the older courthouses, the structure of the building and location of elevators may not permit three separate paths of circulation.
Reconfiguring space to provide secure patterns of circulation: Some federal courthouses cannot accommodate the three separate paths of circulation without space reconfiguration.
While CSP projects may not address every security deficiency in a building, officials at locations that have been selected for a CSP project told us that the projects will provide (or have provided) significant improvements to security at those locations. For example, a local GSA official said that the security changes resulting from a completed CSP project have created a "night and day" difference in the overall security of the building, as the parking and circulation issues have been addressed. In addition, local Marshals Service officials said that when the CSP project at their location is completed, it will address their highest security priorities and improve security. AOUSC officials have re-evaluated their security scores for the two projects that have been completed, and the security scores have improved. At one location the security score increased from 46.1 (poor) to 80.2 (good), and at the other, the score increased from 58.9 (poor) to 68.2 (marginal to acceptable). The process used to select potential CSP project locations has continued to evolve since the program began in 2012, and as a result, transparency and collaboration related to potential CSP project location selections and program execution have improved. According to OMB's directive on open government, transparency promotes accountability, and collaboration improves the effectiveness of government by encouraging partnerships and cooperation. 
Similarly, our prior work has recognized that leading practices for capital planning include that an agency's project prioritization process be transparent about how project rankings are determined, among other things. Our prior work has also recognized that collaboration is key to ensuring the efficient use of limited resources to address issues that cut across more than one agency, and that collaboration ensures that federal efforts draw on the expertise of the involved agencies. At the time of our review, two rounds of CSP project location selections had been completed, and a third round was underway. With each round, the transparency of the selection process improved as more selection criteria were added and more people were involved in the process. More specifically:
During the first round of selections, for fiscal year 2012 only, according to AOUSC officials, AOUSC selected four project locations using professional judgment informed by the expertise of GSA and the Marshals Service in order to get started quickly, because a report and spending plan on program implementation had to be submitted within 90 days of the enactment of the Consolidated Appropriations Act, 2012. For example, one location was chosen because it had an existing concept study that could be used as a basis for the project, and another was prioritized, in part, because of a threat to a judge at that location, according to AOUSC officials.
For the second round of project location selections, for projects funded in fiscal years 2013 through 2017, criteria were developed for selection following a two-step process. First, the judiciary developed a preliminary list of project location candidates using a set of "Go/No Go" factors. For example, only courthouse facilities that were federally owned and had resident judges were eligible for selection. Second, AOUSC conducted what it referred to as a "deep dive analysis" that involved a number of factors. 
While this allowed greater insight into how locations were selected, from our review of AOUSC documents, we noted that there was still a lack of clarity about how some of these factors would be measured. For example, one factor was "type of caseload and proceedings," meaning that a "significant" number of criminal proceedings are conducted in the facility, but it was not clear how locations were evaluated on this factor. As a result, for the CSP projects selected in round two, we were unable to determine how the criteria were used to prioritize project locations against one another and why certain project locations were ultimately selected over others, because the decisions (such as why some were selected and others were eliminated from contention) were not documented and it was not clear how all criteria were defined and applied. The process for the third round of potential CSP project location selections (for projects in 2018 and beyond) contained additional improvements. AOUSC and federal stakeholders refined the existing two-step process and added steps after the "Go/No Go" factors and "deep dive analysis," including: (a) a series of internal judiciary review meetings that further narrowed the list of candidates based on first-hand knowledge and observations; (b) meetings between AOUSC, the Marshals Service, and GSA in March 2016 to narrow the remaining potential locations into three tiers based on security scores as well as other factors; (c) reviews of the feasibility of a project at the top eight locations; and (d) selection of four of those locations for consideration. From our review of AOUSC documents, we noted that this round of project location selections provided important transparency improvements. For example, during this round the CSP had a greater emphasis on buildings with poor security scores—quantitative information that can be objectively reviewed. 
In round three, only locations with poor security scores (below 60) were considered for the program, and in June 2016 the Judiciary's Space and Facilities Committee approved four locations with security scores less than 30 for a CSP study (only 4 of the 10 locations previously chosen for CSP projects had security scores less than 30; see Table 1). In addition to transparency improvements related to the selection process, federal stakeholders have enhanced their collaboration during CSP project execution. GSA officials said that they were not involved in developing the scopes of work for the original four projects in fiscal year 2012 and the corresponding cost estimates for them. As a result, the project concept studies did not consider GSA's mechanical, engineering, and plumbing standards, which are considered for concepts in other capital projects. GSA officials said that this led to inaccurate estimates and delays in the execution of some 2012 projects. For example, GSA officials said that during one project funded in 2012, they were not consulted on the concept and estimate, and that the estimate was about 30 percent too low, which they said is a significant deviation. After AOUSC conferred with stakeholders on needed improvements in the second round of CSP project selections (fiscal years 2013–2017), GSA officials said that they began reviewing the cost estimates and providing comments to AOUSC. Further, GSA officials said that AOUSC now seeks their expertise on assumptions developed in the concept studies before developing an estimate, which has minimized the amount of re-work required at design. Although federal stakeholders have taken these positive steps to improve the CSP, not all of the issues with transparency and collaboration have been addressed. In particular:
Key stakeholders were not clear on the eligibility of particular locations for a CSP project and how to suggest locations for consideration. 
Marshals Service headquarters officials told us that they have asked that certain court locations be considered for the CSP, but these requests have been denied. For example, headquarters Marshals Service officials told us that they requested that one particular federal courthouse be a part of the CSP, specifically to add certain features to improve circulation. This courthouse was not included as a potential CSP project location, and the Marshals Service moved forward with the design of the project. GSA officials told us that the judiciary developed a process for identifying CSP projects, that subsequent studies resulted in a priority list of locations, and that this courthouse was not put forth by the judiciary to be studied. During the third round of project selections, that same courthouse was one of four locations removed by the judiciary during the internal judiciary meetings due to having Marshals Service-funded projects, or joint projects, but no Marshals Service officials participated in this discussion. Specifically, documentation from this internal meeting showed that this courthouse was removed due to a Marshals Service project that was already funded. However, local judicial, Marshals Service, and GSA officials told us when we visited that a circulation project was not planned for the location, and Marshals Service officials provided a document showing that the project has not yet been funded (although project design has been funded). Although the judiciary removed this courthouse from consideration for a project, Marshals Service officials maintain that it could have been a CSP project.
There continues to be a lack of clarity about how key deep-dive analysis factors were applied during the most recent round of project selection. For example, one of these factors (as conceptualized in round three) was the number of criminal defendants the courthouse processed. 
But there is no description of what number of defendants would be too low for a court to be considered further for a CSP project, and the reasons that some locations were removed for a low number of criminal defendants, while others with the same number were put forward, were not clearly documented. For example, according to AOUSC documentation, a certain potential location was removed as a candidate during the latest round of CSP selections during internal judiciary meetings because it had zero criminal defendants. However, other locations that also had zero criminal defendants were put through to the next round.
Marshals Service officials also expressed a transparency concern regarding CSP costs. Specifically, the additional costs they incur from CSP projects are not considered during project selection. According to Marshals Service officials, when the judiciary selects a CSP project, the Marshals Service must find funding for any Marshals Service security equipment needed to support the CSP projects. Marshals Service officials said that in the first year of project selections (2012), the corresponding Marshals Service costs were covered, but that since then, GSA has told them that they would need to provide the funding.
Key stakeholders hold varying views about how collaborative the process to select CSP projects has been. AOUSC officials said they believe that the CSP selection process has been collaborative and that no project was or is approved for CSP funding without the concurrence of the Marshals Service and GSA. However, Marshals Service officials said that they have not found the process of selecting projects to be collaborative; rather, from their point of view, the CSP projects are selected by the judiciary based on its view of security concerns. 
GSA officials said that they were not involved in project selection during the first two rounds of CSP project selection, but that during the third round of project selections, the process was more collaborative.
Similarly, FPS has generally not been included in the planning or execution of CSP projects. FPS was included in CSP planning and implementation at only one of the six CSP project locations we visited, where the local Marshals Service sought FPS's expertise in the placement of security equipment. At these locations, CSP projects may alter the perimeter of the building and could affect FPS's equipment. For example, local FPS officials in one location we visited said that they did not know about the impending CSP project until we notified them of our visit to tour the project site. They said they would need new security equipment and that, if they were consulted on this CSP project, they could possibly add their expertise to other aspects of the project early in the process to avert unnecessary costs. AOUSC officials told us that moving forward, FPS will be included in the CSP. As our prior work has shown, the interests of multiple—and often competing—stakeholders may not align with the most efficient use of government resources and can complicate decision making. Better transparency about how projects are selected could help to ensure that the CSP is not subjected to competing stakeholder interests. Furthermore, as we have also reported, effective collaboration can help maximize performance and results, particularly for issues that cut across more than one agency, as is the case with courthouse security. CSP projects involve multiple stakeholders, and projects have multiple phases, so it can be difficult to ensure that all stakeholders fully understand all program procedures and are involved at the right time and to the right degree throughout the life of the project. 
An internal control for efficient and effective operations is to ensure that all transactions and other significant events are clearly documented in a manner that allows the documentation to be readily available for examination. With clearer documentation of the process shared with all stakeholders, transparency and collaboration could also be enhanced in the CSP. By developing approaches to provide stakeholders with information that clearly describes how all selection criteria are to be applied, how to put forth a location for consideration, what specific costs are eligible for funding within a project, how collaboration is to occur during project selection and execution, and when and how to include all relevant agencies in each phase of the project, stakeholders could be better assured that they all have the same understanding of how the program is supposed to work, that the program is addressing the most urgent needs, and that the expertise of all government stakeholders is being used to help ensure that the program is as efficient as possible. We have previously reported on coordination issues facing the security of courthouses. More specifically, in September 2011, we found that federal stakeholders faced issues related to, among other things, implementing their roles and responsibilities, gathering and sharing comprehensive security information, and participating in security committees. At that time, we recommended that FPS and the Marshals Service, in conjunction with the judiciary and GSA, jointly lead an effort to update a 1997 MOA that outlines stakeholders' roles and responsibilities. In our view, implementation of this recommendation was key to addressing these issues. Since then, FPS officials said that they took the initiative on updating the MOA, working with each party individually and sharing iterative updates based on comments. 
FPS officials said that they took the lead in this effort because they wanted to address the recommendation and no other agency was moving forward with it. However, despite these efforts, nearly 5 years after the recommendation, the updated MOA still has not been signed. FPS officials told us that the MOA had been set aside at different times since the recommendation was made, in part due to staff turnover at each agency, which in some instances resulted in major revisions to the draft that necessitated additional vetting. In addition, FPS officials said that lengthy reviews and issues coordinating schedules have contributed to the delays. During our visits to CSP project locations as a part of this review and during discussions with AOUSC and local court officials, and with headquarters and local Marshals Service, GSA, and FPS officials, we found that the issues we identified in September 2011 persist. More specifically, these issues include those outlined below.
We found that the Marshals Service's and FPS's roles and responsibilities have at times been fragmented. Some local FPS officials said that it can be difficult to determine which entity has responsibility for security equipment. For example, a local FPS official told us about a situation in one courthouse where some security equipment is monitored by FPS and some is monitored by the Marshals Service. We found the same type of situation in another location, and officials were unsure how that arrangement came to be. In addition, headquarters Marshals Service officials told us that overall there is fragmentation between FPS and the Marshals Service in ensuring that security equipment is operational. In one location, local FPS officials also said that duplicative security efforts—such as when both the Marshals Service and FPS have equipment in the same building or part of the building—can create confusion. 
We found that the level of coordination can be site-specific and personality-driven, which can make executing roles and responsibilities difficult. For example, one local FPS official told us that FPS has a very strong relationship with the local Marshals Service officials and judges and is always included in court security meetings. However, at another courthouse, an FPS inspector did not complete all sections in the 2014 facility security assessment, noting that an individual with the Marshals Service would not answer all FPS questions during the interview and that FPS could not be sure that all security equipment was working because that individual would not permit the FPS inspector to conduct testing. FPS and the Marshals Service both have new staff in those roles and officials see the relationship improving; however, another facility security assessment will not be completed until 2017. Some local Marshals Service officials said that in certain locations, it is difficult and time-consuming for FPS to execute its role of repairing equipment. For example, local Marshals Service officials at one location we visited told us about a recent problem they encountered regarding malfunctioning equipment at another location they serve. Initially, the Marshals Service security contractor assessed that the equipment needed a small, inexpensive part, but the contractor could not fix it because the equipment was owned by FPS. Marshals Service officials said that after 60 days and numerous calls and e-mails, FPS received the internal approvals to fix the equipment.
We found that there continue to be issues associated with stakeholders gathering comprehensive information on security concerns and sharing the information gathered. 
As discussed earlier, the Marshals Service and FPS have some information on security concerns for individual courthouses but, given the way information is currently gathered, cannot readily track the information across the portfolio or address the risks to courthouses that such analysis would identify. Further, information that agencies already collect is not readily shared with the other agencies. For example, AOUSC officials said that they have had difficulty getting facility security assessments from FPS and have been told by some FPS inspectors that AOUSC officials are not entitled to receive a copy of the assessment because they are not tenants in the building. Further, FPS officials said that if AOUSC and the Marshals Service shared the information they collect on security concerns with FPS, FPS could coordinate more with those agencies, but FPS does not routinely have access to information collected by these agencies. Without sharing existing information on security concerns, federal stakeholders do not have complete information to help them look for strategic ways to achieve efficiencies and to address the risks to federal courthouses more comprehensively. Further, if the agencies worked together to gather and understand all of the available security information, they could better understand what information is not collected at a portfolio level and work on a coordinated strategy to obtain needed information efficiently. GSA officials told us that if more comprehensive information were available and shared regarding deficiencies in courthouses, federal agencies could develop joint acquisition strategies to address widespread deficiencies more efficiently. 
For example, if FPS develops the capability to track the status of countermeasure recommendations across courthouses in an automated way, as discussed earlier, and the results show that a particular countermeasure is recommended often but rarely accepted because it is cost prohibitive, federal agencies could leverage the buying power of the federal government to drive down the cost of the countermeasure.
Of the eight locations we visited, three did not have an active facility security committee even though they have other federal tenants in the building. In such buildings, a facility security committee is called for by the ISC standard. Headquarters Marshals Service officials told us that in their experience, facility security committee meetings, in reality, often do not reflect the facility security committee provisions in the ISC standard and that although addressing security needs ultimately falls upon the lead tenant of each facility (the facility security committee chair), there are no accountability mechanisms for ensuring these needs are addressed. FPS officials also said that there is currently no compliance mechanism for the ISC standard. Without attending these meetings, stakeholders involved in courthouse security may be missing opportunities to share information and coordinate so that security risks are better understood and addressed. In 2013, we found that the ISC did not formally monitor agencies' compliance with ISC standards but was planning an effort to do so.
We found that GSA, AOUSC, the Marshals Service, and FPS did not routinely meet to address courthouse security problems at a national level where decision-making authority exists. For example, more than four and a half years passed before the four federal stakeholders met together in May 2016 to discuss the MOA updates at a national level, although FPS had been working on the update. A GSA official told us that when the four stakeholders did meet, the meeting was very productive. 
FPS officials said that though they considered assembling the larger group early on, they elected to elicit comments and revisions on a draft from "key representatives" in order to capture the most substantive changes needed. AOUSC officials told us that they did not know why they had not met as a group prior to May 2016. In fact, Marshals Service and AOUSC officials said that there was no working group or forum where the four agencies could discuss issues relevant to courthouse security at the national level where decision-making authority exists. In our previous work, we identified interagency working groups as one of the collaboration mechanisms used by agencies to coordinate activities. National-level coordination and cooperation in protecting critical infrastructure is a key policy emphasis of the federal government. The federal government has prioritized the protection of federal facilities through directives to address the changing nature of threats to federal facilities, including federal courthouses. Through these documents, the federal government has consistently presented a common vision for critical infrastructure protection: agencies involved in the security of federal facilities should work together cooperatively to provide security to our critical infrastructures in an efficient manner that maximizes the federal government's limited resources. For example:
The National Infrastructure Protection Plan notes the importance of obtaining a shared vision with stakeholders with similar missions, saying that "for the critical infrastructure community, leadership involvement, open communication, and trusted relationships are essential elements to partnership."
The National Strategy for the Physical Protection of Critical Infrastructures and Key Assets states that protecting our critical infrastructures and key assets calls for a transition to a national cooperative approach across federal agencies. 
However, the cooperation issues that continue to hinder effective courthouse security—in the areas of executing roles and responsibilities, collecting and sharing information on security concerns, and accountability for participating in coordination mechanisms like security committee meetings—illustrate that more could be done to align with the priorities that the federal government has established in these documents. Also, the delays in updating the MOA further illustrate that the cooperative approach described in the National Strategy has not been fully developed. Without a more cooperative approach to securing courthouses, such as through a working group or similar forum, challenges across the portfolio of federal courthouses will likely persist. The physical security of government assets is one of the most challenging aspects of real property management. In fact, one of the reasons we have designated managing federal property as a high-risk area is the challenge involved with protecting federal facilities. Under the Homeland Security Act of 2002, except for law enforcement and security-related functions transferred to DHS, GSA has the responsibility to protect buildings it holds or leases. In its role as a steward of federal courthouses under its custody and control, and as part of its related protection responsibilities, GSA is well positioned to establish a working group or other forum of federal stakeholders to improve cooperative efforts. Securing our nation's federal courthouses is complex and challenging, and four federal stakeholders have a significant role—the judiciary, through its administrative arm, AOUSC; GSA; the Marshals Service; and FPS. Addressing courthouse security concerns begins with good information regarding the risks to each courthouse, but the federal government does not have this comprehensive information. 
The only portfolio-wide information that the federal government has is the security scores collected by AOUSC; however, these scores are only part of the story because comprehensive information related to security concerns identified by the Marshals Service and FPS is not currently used portfolio-wide. While both agencies have plans to enhance their processes, it is unclear whether these improvements will lead to the ability to assess security concerns across the portfolio of courthouses. With better portfolio-wide information from the Marshals Service and FPS, decision makers can be better equipped to make risk-informed decisions. Addressing courthouse security concerns can be a costly undertaking, especially in older courthouses that were not designed for modern-day security threats, particularly with regard to meeting current standards that call for the separate circulation of judges, prisoners, and the public. The CSP was designed to be a less costly alternative to building new federal courthouses and provides a way to add key security features. Since 2012, the CSP has demonstrated the potential to address security problems for less than the cost of a new courthouse. Transparency and collaboration have improved, showing that the CSP is generally moving in the right direction, but some concerns remain. Stakeholders do not have the same understanding of how the CSP program works at key stages, including project selection, or of how collaboration will occur. While the CSP has the potential to address security concerns at courthouses that are selected for the program, issues related to cooperation and information sharing that we have found in the past persist. Creating greater cooperation—as the National Strategy suggests—to address courthouse security concerns can help GSA, AOUSC, the Marshals Service, and FPS to systematically identify risks, the resources needed to address those risks, and investment priorities when managing security at these facilities. 
This effort would involve all relevant stakeholders working together, having quality information to work with, and using it to manage risk and find efficiencies in their efforts. Without a coordinating mechanism at the national level, however, the four agencies are limited in their effectiveness in developing comprehensive approaches for addressing challenges that affect courthouse security. We recommend that the Attorney General instruct the Director of the Marshals Service to ensure that the improvements being made to the Marshals Service's information on the security concerns of individual buildings allow the Marshals Service to understand the concerns across the portfolio. We recommend that the Secretary of Homeland Security instruct the Director of FPS to ensure that the agency develops the capability to track the status of recommended countermeasures across the courthouse portfolio, either through FPS's planned software enhancement or another method. We recommend that the Administrator of GSA and the Director of AOUSC, on behalf of the Judicial Conference of the United States, in conjunction with the Marshals Service and FPS, improve CSP documentation in order to improve transparency and collaboration in the CSP program. We recommend that the Administrator of GSA—in conjunction with AOUSC, the Marshals Service, and FPS—establish a national-level working group or similar forum, consisting of leadership designees with decision-making authority, to meet regularly to address courthouse security issues. We provided a draft of the law enforcement sensitive/limited official use version of this report to DOJ, DHS, GSA, and AOUSC for review and comment. In addition, DOJ and DHS conducted sensitivity reviews of the law enforcement sensitive/limited official use version of this report. 
As a result of these reviews, this public version of the report omits sensitive information, including specific security concerns, the results of AOUSC's security scores, and the names and locations of courthouses we visited or whose information we analyzed. In response to our request for comments on the law enforcement sensitive/limited official use version of this report, we received an e-mail from DOJ's Audit Liaison Specialist stating that DOJ was not providing written comments but that DOJ agreed with our recommendation to ensure that the improvements being made to the Marshals Service's information on the security concerns of individual buildings allow the Marshals Service to understand the concerns across the portfolio. After the law enforcement sensitive/limited official use version was issued, the Marshals Service provided additional information stating that it had several initiatives under way in response to this recommendation, including an approach to real property management that incorporates security, construction, and budget concerns across the portfolio. In addition, DOJ stated that the Marshals Service will work with AOUSC, FPS, and GSA to improve CSP documentation and will support and participate in a national-level working group regarding courthouse security issues. We have not yet evaluated this information to determine if it will address our concerns and recommendation. We also received written comments from DHS, GSA, and AOUSC, which are reproduced in full in appendixes II, III, and IV, respectively. DHS agreed with our recommendation to ensure that FPS develops the capability to track the status of recommended countermeasures across the portfolio. DHS noted that it appreciates our acknowledgement that security at federal courthouses is complex and involves multiple federal stakeholders with different roles and responsibilities. 
After the law enforcement sensitive/limited official use version of this report was issued, DHS provided additional information stating that it has included cross-portfolio tracking of existing and recommended countermeasures as part of a mission needs statement, with an acquisition decision to be made in 2017. We have not yet evaluated this information to determine if it will address our concerns and recommendation. GSA agreed with our recommendations to improve CSP documentation to improve transparency and collaboration and to establish a national-level working group or similar forum to meet regularly to address courthouse security concerns. In the comments, GSA noted that it will develop a comprehensive plan to address the recommendations and is confident that this plan will satisfactorily remedy the concerns this report raises. After the law enforcement sensitive/limited official use version of this report was issued, GSA provided additional information stating that it plans to assist the judiciary in developing a statement of work for a CSP handbook and subsequently work with the judiciary, the Marshals Service, and FPS to develop the handbook. GSA also provided information stating that it plans to finalize the Courts Security Memorandum of Agreement between AOUSC, the Marshals Service, FPS, and GSA; and that it plans to develop a courthouse security working group charter. We have not yet evaluated this information to determine if it will address our concerns and recommendations. AOUSC agreed with our recommendation to improve CSP documentation in order to improve transparency and collaboration and discussed steps that AOUSC is already taking to address this recommendation. AOUSC stated that it has started to compile and document all relevant background, policy, and process information to provide a central resource for all stakeholders to use. 
Further, AOUSC stated that it plans to develop a handbook/guide for use by GSA, the Marshals Service, FPS, and other stakeholders detailing key aspects of the CSP selection process. AOUSC stated that this documentation will address our recommendation by making documentation readily available for examination by all stakeholders, including descriptions of all selection criteria to be applied, how projects are identified, specific costs eligible for funding, and how collaboration will occur during project selection and execution. After the law enforcement sensitive/limited official use version of this report was issued, AOUSC provided additional information about actions it has taken in response to this recommendation, including implementing a communications plan for all new CSP concept studies and ensuring that all stakeholders are included in CSP concept, design, and construction meetings. In addition, AOUSC stated that the judiciary is working with GSA to jointly develop a CSP handbook, which they plan to complete by the end of 2017. Further, AOUSC stated that all relevant stakeholders were invited to participate in a meeting GSA held to develop a courthouse security working group charter. We have not yet evaluated this information to determine if it will address our concerns and recommendation. All four agencies provided technical comments, which we incorporated as appropriate. We are sending copies to appropriate congressional committees, the Attorney General, the Secretary of Homeland Security, the Administrator of the General Services Administration, and the Director of the Administrative Office of the U.S. Courts. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. GAO staff who made major contributions to this report are listed in appendix V. 
This report focuses on physical security concerns in federal courthouses. This report addresses the following questions: To what extent have federal stakeholders identified security concerns at federal courthouses? How has the Judiciary Capital Security Program (CSP) addressed courthouse security concerns and how, if at all, can the program be improved? What actions, if any, could federal agencies take to improve courthouse security? This report is a public version of a previously issued report identified by the Department of Homeland Security and the Department of Justice as containing information designated as law enforcement sensitive/limited official use, which must be protected from public disclosure. Therefore, this report omits sensitive information, including specific security concerns, the results of the Administrative Office of the U.S. Courts' (AOUSC) security scores, and the names and locations of courthouses we visited or whose information we analyzed. The information provided in this report is more limited in scope, as it excludes such sensitive information, but it addresses the same questions that the law enforcement sensitive/limited official use report does, and the overall methodology used for both reports is the same. To determine the physical security concerns identified by AOUSC, the U.S. Marshals Service (Marshals Service), and the Federal Protective Service (FPS), we reviewed and analyzed documents from these federal stakeholders, including capital-planning documents, security assessments, information on physical security concerns, and other reports, and interviewed AOUSC, Marshals Service, and FPS officials to understand how they each identify security concerns and what data they collect. We limited our scope to information collected by these federal stakeholders, and we did not independently determine what constitutes a physical security concern. 
Rather, we relied on these stakeholders to determine physical security concerns as defined in their own standards and guidance. As part of our review of these data, we assessed federal stakeholders' documentation and written responses about data collection procedures and their views of the quality of the data. We analyzed AOUSC's March 2016 security scores but excluded the scores of non-resident courthouses and bankruptcy-only courthouses because differences in the security requirements of those court operations and facilities limit the comparability of their scores; this left 267 courthouses for our analysis. We believe that AOUSC's security scores, developed as part of the judiciary's long-range capital-planning process, are sufficiently reliable for our purposes based on answers that AOUSC provided to our questions on data reliability. We also reviewed incident and threat data collected by the Marshals Service and FPS, but based on our assessment, we do not believe these data were sufficiently reliable for describing physical security concerns across courthouses. We based this conclusion primarily on interviews with Marshals Service and FPS officials, who both stated that there were significant limitations in these data. We reviewed the methods these federal stakeholders use to collect information to determine whether the information was used to understand security concerns portfolio-wide, as defined in our risk management framework. To understand how the CSP has addressed or will address physical security concerns, we visited eight courthouses, which we selected to cover six CSP projects at various stages of implementation (completed, under construction, and pre-construction) as well as two courthouses that were considered but not selected. 
For each of the six site visit locations that have had or will have a CSP project, we (1) toured the facility to observe security concerns and how these concerns were (or will be) addressed in a CSP project; (2) reviewed documentation, including CSP concept plans, security assessments and scores, and other reports indicating security concerns; and (3) interviewed local officials from the General Services Administration (GSA), the Marshals Service, and FPS, as well as local judiciary officials, to obtain their views about physical security concerns prior to the projects and how these concerns have been or will be addressed by the CSP, and about courthouse security concerns in general. We relied on officials to bring security issues to our attention at the individual courthouses we visited. While we visited six of the ten courthouses selected for a CSP project as of fiscal year 2016, the information we obtained from these site visits cannot be generalized across all CSP locations. However, this information does provide useful examples from a majority of CSP projects selected to date. We also selected two courthouses that were considered but not chosen for the CSP and appeared on AOUSC's documentation of potential locations that was used to select projects from fiscal year 2013 to fiscal year 2016. We selected these particular locations because they could be combined with our CSP site visits or were accessible to our field staff. As with the CSP site visits, we toured the facility, reviewed documentation on security concerns, and interviewed federal stakeholders, as discussed above. 
To understand how federal stakeholders have selected CSP projects and collaborated in planning and implementation efforts, we reviewed and analyzed these stakeholders' documentation on the CSP, including project concepts and drawings, as well as AOUSC's summaries of selection criteria, summaries of an interagency summit to improve the CSP, an agenda for an interagency meeting, and spreadsheets used to select projects. We also interviewed relevant officials about methods used to select project locations and collaborate with other stakeholders. We incorporated written and testimonial information from all stakeholders, as appropriate and relevant to the issues raised in our report. We compared federal stakeholders' efforts to select projects and collaborate to criteria in our Standards for Internal Control in the Federal Government and the Office of Management and Budget's (OMB) directive on open government. To assess other actions federal stakeholders could take to address courthouse security challenges, we examined relevant statutes, memorandums of agreement, and federal stakeholders' policies and guidance pertaining to roles and responsibilities for physical security at federal courthouses, as well as our prior work regarding courthouse security, including GAO-11-857. Where available, we also reviewed the meeting agendas and minutes of the facility security committees for the courthouse locations we visited. We interviewed headquarters and local officials from GSA, the Marshals Service, and FPS, as well as AOUSC officials and local judiciary officials, to obtain their views about efforts to address courthouse security challenges. We compared federal stakeholders' efforts to directives and authorities that have established the federal government's vision for critical infrastructure protection, including the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets and the National Infrastructure Protection Plan. 
We conducted this performance audit from June 2015 to February 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Lori Rectanus, (202) 512-2834 or rectanusl@gao.gov. In addition to the contact named above, David Sausville (Assistant Director), Amy Higgins (Analyst in Charge), Geoffrey Hamilton, John Mingus, Kate Perl, Malika Rice, Amy Rosewarne, and Kelly Rubin made key contributions to this report.
The variety of civil and criminal cases tried in 400-plus federal courthouses can pose security risks. The CSP was started in 2012 and was designed to be a less costly alternative to building new federal courthouses by adding key security features to existing courthouses. Congress has provided $20 million in obligational authority for the program in each of the fiscal years that it has been funded. GAO was asked to review physical security at federal courthouses. This report discusses (1) the extent to which federal stakeholders have identified security concerns; (2) how the CSP addresses courthouse security concerns; and (3) what actions federal agencies could take, if any, to improve courthouse security. GAO reviewed agency documents and AOUSC security scores and interviewed officials from the Marshals Service, FPS, GSA, and AOUSC. GAO also visited eight courthouses, including six locations selected for CSP projects and two that were considered but not selected. Although these site visits cannot be generalized to all CSP project locations or all federal courthouses, they provide insight into federal agencies' practices to secure courthouses. Three federal agencies—the Administrative Office of the U.S. Courts (AOUSC), the U.S. Marshals Service (Marshals Service), and the Federal Protective Service (FPS)—collect information about security concerns at federal courthouses related to the agencies' respective missions. However, only AOUSC develops information that can be used to understand security concerns across the courthouse portfolio. In contrast, the Marshals Service and FPS collect information on security concerns on a building-by-building basis in varied ways, but the manner in which the information is collected prevents it from being used to understand portfolio-wide security concerns. This is inconsistent with GAO's risk management framework. 
Both agencies are taking steps to improve their information, but it is not clear whether these improvements will provide the portfolio-wide information stakeholders need to make risk-informed decisions. The General Services Administration (GSA) has initiated 11 projects at 10 courthouse locations nationwide as part of its Judiciary Capital Security Program (CSP); two projects have been completed. Local officials said that these projects have already improved or will improve security at the selected courthouses once completed. CSP improvements have been aimed at separating the paths of judges, prisoners, and the public, so that trial participants only meet in the courtroom. Transparency and collaboration issues have emerged among federal stakeholders as the program has been implemented. For example, not all key stakeholders GAO spoke to were clear on the eligibility of specific locations for CSP projects and varied in their views about how collaborative the process to select CSP projects has been. Although stakeholders have taken some steps to improve CSP transparency and collaboration as the program has evolved, some issues remain. Taking additional steps to improve documentation of decision-making and sharing this documentation with stakeholders could further enhance transparency and collaboration and better assure that all of the agencies and policy makers have the same understanding of how the program is supposed to work, that it is addressing the most urgent courthouse security needs, and that the expertise of all stakeholders is being used to ensure program efficiency. GAO found that agencies could take additional actions to enhance security at federal courthouses by addressing a related open GAO recommendation and establishing a formal mechanism, such as a working group or forum, to enhance coordination and information sharing. 
Specifically, in 2011, GAO recommended that the agencies update a 1997 memorandum of agreement to clarify their roles and responsibilities. This has not been done, although FPS has taken some steps to start the process. In addition, GAO found that GSA, AOUSC, the Marshals Service, and FPS had not routinely met to address courthouse security issues at a national level, where decision-making authority exists. This lack of a formal meeting mechanism inhibits their ability to communicate regularly about their roles and responsibilities and share information about security concerns. This is a public version of a law enforcement sensitive/limited official use report issued in October 2016. GAO recommends that (1) the Marshals Service and FPS improve the courthouse security information they collect; (2) GSA and AOUSC improve the CSP's transparency and collaboration through better documentation; and (3) GSA establish a working group or other forum to enhance coordination. The agencies concurred with GAO's recommendations.
The federal government owns onshore mineral resources, including oil and gas, under about 700 million acres of land. These resources are located below the surface land—known as the subsurface. While the federal government owns all or part of the mineral resources in the subsurface, it does not necessarily own the surface land. Of the 700 million acres of federal mineral resources, the surface and subsurface ownership on 57 million acres is "split" between private parties or state governments, which own the surface area, and the federal government, which owns the subsurface area—referred to as "split estate" land. BLM manages the federal mineral resources contained in the subsurface of about 700 million acres. It also manages 261 million acres of the surface areas of the 700 million acres for such purposes as grazing, recreation, and timber harvesting. BLM, headed by the BLM director, manages public lands under its jurisdiction through 12 state offices, headed by state directors, with each state office having several subsidiary field offices, headed by field office managers. The balance of the federal surface land is managed by other federal agencies such as the Forest Service. Figure 1 shows the subsurface mineral resources managed by BLM and the surface land that is managed by BLM, the Forest Service, or other federal agencies or owned by private parties or state governments. The Forest Service and BLM both have roles in managing oil and gas resources on national forest system land. Although BLM has the major role in issuing oil and gas leases and permits on national forest system land, the Forest Service is responsible for determining what land is available for leasing and under what conditions. Once leases are issued, the Forest Service regulates all surface-disturbing activities conducted under the lease. The Forest Service manages its programs through nine regional offices, 155 national forests, 20 grasslands, and over 600 ranger districts (each forest has several districts). 
The Forest Service Chief oversees the agency, whereas regional foresters oversee regional offices, forest supervisors oversee national forests, and district rangers oversee district offices. BLM assists BIA in fulfilling the trust responsibilities of the United States by assisting Indian tribes and individual Native Americans in managing about 56 million acres of Indian land for oil and gas development. Indian land principally consists of lands within Indian reservations, lands owned by Indian tribes, and Indian allotments. BIA administers its programs through the BIA director, 12 regional offices, headed by regional directors, and over 80 agency offices, headed by agency superintendents. MMS manages oil and gas development for offshore mineral resources on the outer continental shelf through three administrative regions: Gulf of Mexico, Alaska, and Pacific. The MMS director heads the agency and regional managers head the regions. District offices support the regional offices and are headed by district managers. The federal outer continental shelf is an area extending from 3 to 9 nautical miles, depending on the location, to about 200 nautical miles off the United States coast. Over 610 million acres of the outer continental shelf is closed to future oil and gas development due to legislative and Presidential moratoria. Figure 2 shows MMS administrative regions and the areas open or closed to oil and gas development. Several statutes, including the National Environmental Policy Act (NEPA), and regulations govern oil and gas development on federal and Indian land. NEPA requires BLM, Forest Service, BIA, and MMS, and all other federal agencies, to assess and report on the likely environmental impacts of any land management activities they propose or approve that significantly affect environmental quality. 
Specifically, if a proposed activity, such as oil and gas development, is expected to significantly impact the environment, the agency is required to prepare an environmental impact statement. When an agency is not sure whether an activity will have a significant impact on the environment, the agency prepares an intermediate-level analysis called an environmental assessment. If an environmental assessment determines that the activity will significantly affect the environment, the agency then prepares an environmental impact statement. Agencies also identify certain categories of actions that normally do not significantly impact the environment, and which are excluded from preparation of an environmental impact statement or environmental assessment—referred to as categorical exclusions. BLM, the Forest Service, BIA, and MMS each have similar processes for managing oil and gas activity on land within their jurisdiction. Generally, these processes center on four stages—planning, exploration, leasing, and operations. During the planning stage, agencies develop land-use plans, revisions, and amendments, delineating where and under what conditions oil and gas activities can take place on federal land managed by each agency. To develop land-use plans, agencies use a multistep process, which generally includes preparation of environmental analyses under NEPA. Once land-use plans allowing oil and gas activities are finalized, oil and gas development companies may perform exploration activities such as geophysical exploration. Geophysical exploration activities can occur before or after the leasing stage. Development companies must obtain approval from BLM for geophysical exploration on land managed by BLM and from the Forest Service on land managed by the Forest Service. BIA may approve permits and agreements between Indian tribes or individual Native Americans and oil and gas development companies for geophysical exploration on Indian land. 
MMS must approve exploration activity on the outer continental shelf. BLM and MMS have the primary role in the leasing stage of federal oil and gas resource development. After a land-use plan, revision, or amendment is completed, development companies nominate land they are interested in leasing. Onshore and offshore leases are competitively bid on at lease sales held by BLM state offices and MMS regional offices several times throughout the year, if lands are available. BLM is required to post a lease sale notice containing land parcels available for lease at least 45 days before it holds a competitive lease sale; MMS posts a notice at least 30 days before the offshore lease sale. BLM issues leases for onshore land, and MMS issues offshore leases. Indian tribes have the option to negotiate oil and gas leases individually or to hold competitive lease sales. BIA must approve oil and gas leases and negotiated agreements affecting Indian land. BLM and MMS have the primary role in managing drilling activity for federal oil and gas resources, and the Forest Service regulates surface activities on national forest system land. Once BLM and MMS issue oil and gas leases, development companies must obtain approval for drilling operations. For onshore activity, development companies submit development plans and applications for drilling permits to BLM for approval. On national forest system land, the Forest Service must approve a plan for all surface-disturbing activities—called a surface-use plan—before BLM approves applications for drilling permits. BLM also approves applications for drilling permits on Indian land after consulting with BIA. For offshore development activity, MMS approves development plans and applications for drilling permits. Decisions by BLM, the Forest Service, BIA, and MMS can be challenged during the four stages of oil and gas development—planning, exploration, leasing, and operations. However, each agency differs in how challenges can be made at the various stages. 
The public may pursue a number of avenues to challenge agency decisions, depending on the type and nature of the underlying decision. For example, BLM planning decisions can be protested to the BLM director prior to challenging the decision in federal court, while Forest Service planning decisions can be appealed to the next highest officer prior to any challenge of the decision that might be brought in federal court. Table 1 summarizes procedures for public challenges during each stage of oil and gas development. During each of the four stages of oil and gas development, the public can make one or more of the following types of challenges to BLM decisions: protests, requests for state director review, appeals, and litigation. Through protests and requests for state director review, challengers essentially ask BLM to reconsider a decision. An appeal is a request to the Interior Board of Land Appeals (IBLA)—a body of administrative judges within the Department of the Interior—to review a BLM decision. In this report, we use the term "litigation" to mean a challenge to an agency or departmental decision that is brought in federal court. At the planning stage, the public can challenge BLM decisions through protests and litigation. Protests to land-use plans or their amendments or revisions are submitted to the BLM director and must be filed within 30 days of the publication of a proposed land-use plan. The BLM director has no specific deadline to respond to protests but must "promptly" provide a written decision with a statement of supportive reasons. The director's decision cannot be appealed to IBLA, but it can be challenged in federal court. The duration of a court case depends on the facts and circumstances of each case. 
The public can challenge agency decisions to approve geophysical exploration activities before IBLA and in federal court. Once a BLM field office issues a decision approving geophysical exploration activities, the public can appeal the decision to IBLA within 30 days or challenge the decision in federal court. Following approval, a development company can commence geophysical exploration activities unless the challenger asks IBLA to halt or "stay" the activities, or asks a federal court to issue an injunction prohibiting the activity, and IBLA or the federal court grants the request. IBLA has 45 days following expiration of the 30-day appeal period to render a decision on a stay request. IBLA has no deadline to respond to appeals. IBLA decisions pertaining to geophysical exploration activities can be litigated in federal court. The duration of court cases and the length of any injunctions that may be issued depend on the facts and circumstances of each case. The public can challenge leasing decisions through protests, appeals to IBLA, and litigation. Challengers can protest the inclusion of individual land parcels in a lease sale; such protests must be filed with the relevant BLM state director during the 45-day notice period that precedes the lease sale. In some cases, the state director may not be able to decide the protest before the lease sale. However, if BLM receives a protest on any parcel included in the lease sale, the protest must be resolved before issuing a lease on the affected parcel. BLM is required to issue leases to the highest bidder within 60 days of receiving full payment for the lease and the first year's annual rent. According to agency officials, however, BLM sometimes fails to do so because it may not have resolved pending protests within the 60-day time period. The public can appeal BLM's decision to issue a lease to IBLA within 30 days or challenge the decision in federal court. 
A leaseholder can seek approval for development activities unless a challenger appeals the decision to issue the lease to IBLA and asks IBLA or a federal court to halt or "stay" the activities. IBLA has 45 days following expiration of the 30-day appeal period to render a decision on a stay request. At the operations stage, the public can challenge BLM decisions to approve oil and gas drilling through requests for state director review, appeals to IBLA, and litigation. The public may ask the state director to review a decision to approve oil and gas development projects or individual drilling permits within 20 business days of the decision, and the state director must render a decision on the request within 10 business days. The public can appeal the state director's decision to IBLA and can challenge the department's decision in federal court. Development companies can begin drilling activity once a state director approves a drilling permit following review. A challenger may attempt to halt drilling activity by requesting a stay from the state director or IBLA, or by seeking an injunction in federal court. The public can challenge Forest Service decisions either through appeals or litigation during each stage of oil and gas development. Through an appeal, the public asks the Forest Service to review a decision. During the planning stage, the public has either 45 or 90 days to appeal planning decisions approving, amending, or revising land-use plans, which may identify lands as available for leasing. Decisions are appealed to the next highest officer. For instance, a regional forester's decision to approve a land-use plan, amendment, or revision can be appealed to the Chief. A Forest Service official has 160 days to render a decision on an appeal. Following the conclusion of the appeals process, land-use plan decisions can sometimes be litigated in federal court. 
According to Forest Service officials, BLM normally participates in the process for developing those plans that include decisions to make areas available for oil and gas development. During the exploration and operations stages, the public may generally challenge Forest Service decisions approving or disapproving these actions under the agency's project appeals procedures. Specifically, these decisions include (1) the approval of geophysical exploration activity on national forest system lands and (2) the approval of surface use plans related to proposed drilling operations on national forest system lands. The Forest Service's appeals procedures generally apply to decisions for which the agency prepared an environmental impact statement or environmental assessment under NEPA. The public can appeal Forest Service decisions, other than planning decisions, to the next highest officer within 45 days of the decision. If an appeal is filed, the Forest Service has 45 days from the close of the appeal period to determine the outcome of the appeal. Following the conclusion of the appeal process, the agency decision can be litigated in federal court. Likewise, decisions that are not appealable can be litigated in federal court. Challengers can seek an injunction from federal court to halt activities while litigation is pending. If no appeal is filed, the Forest Service may implement the decision 5 business days after the appeal period closes. If an appeal is filed, implementation may occur 15 days following the appeal’s disposition. The public can challenge certain BIA decisions through appeals and litigation. Through an appeal, the public asks BIA to review decisions concerning oil and gas development on Indian land or asks the Interior Board of Indian Appeals (IBIA) to review a BIA appeal decision. The public can challenge IBIA decisions in federal court.
BIA is not required to prepare land-use plans for Indian land, but can assist tribes in developing such plans. Because BIA does not approve land-use plans, there are no challengeable decisions at the planning stage. At the exploration stage, however, the public can challenge BIA decisions to approve permits to conduct geological and geophysical operations to assess whether oil and gas resources are present. The public must appeal a BIA official’s decision to the regional director—typically the official above the deciding official—within 30 days of the decision. After a decision is made on the appeal, the public has 30 days to file a separate appeal with IBIA. Following the appeal period, the operator can commence exploration activities unless the challenger requests a stay from IBIA. IBIA has 45 days from the expiration of the appeal period to render a decision on a stay request. If IBIA denies a stay, the operator can proceed with planned activities. IBIA decisions may be litigated in federal court. The duration of court cases and the length of any injunctions that may be issued are dependent on the facts and circumstances of each case. Likewise, at the leasing stage, the public can challenge BIA decisions to approve leasing agreements and mineral agreements between Indian tribes and Indian landowners and oil and gas development companies. The appeal and litigation process is the same as for the exploration stage. At the operations stage, BLM has agreed to approve drilling permits for BIA. Consequently, there are no BIA decisions for the public to challenge at this stage. However, the public can challenge BLM permit decisions through the BLM process. The public can challenge MMS oil and gas development decisions through requests for informal reviews within MMS, appeals to IBLA, and in federal court. Through informal review requests, the public asks the next highest officer to review a decision made by the official at the field office. 
Through an appeal, the public can ask IBLA to overturn an MMS decision. At the planning and leasing stages, MMS decisions involving its 5-year plan and lease sales are not subject to informal reviews or appeals to IBLA, but can be litigated in federal court. During the exploration and operations stages, the public can challenge exploration plans and permits, development and production plans, and applications for oil and gas drilling through informal reviews within MMS, appeals to IBLA, and in federal court. The public can appeal exploration or operations decisions to IBLA within 60 days. Within that period, the public may ask for informal resolution with the issuing officer’s next highest supervisor. During the 60-day appeal period, the development company can commence exploration or operation activities unless the challenger requests a stay from IBLA and IBLA grants the request. IBLA has 45 days from the expiration of the appeal period to render a decision on a stay request. IBLA has no time frame to decide appeals. Decisions of IBLA pertaining to exploration plans and permits, development and production plans, and applications for oil and gas drilling can be litigated in federal court. BLM headquarters does not systematically gather and use nationwide information on public challenges to manage its oil and gas program. While there is an agencywide system that state offices use to collect data on public challenges during leasing, it is not used to collect public challenge data during the planning, exploration, or operations stages. Moreover, even at the leasing stage, the system is used inconsistently because BLM has not issued clear guidance on which data the state offices are required to enter into the system. Because the agencywide system does not track all the public challenge data necessary for managing workload, headquarters and state offices also use multiple, independent data collection systems for the various stages of oil and gas development.
These systems include paper files and electronic spreadsheets that are not integrated with one another or the agencywide system. BLM is in the process of developing a new national Lease Sale System that provides an opportunity to standardize collection of data on public challenges at the leasing stage. However, BLM has not decided whether the new system will track public challenge information. BLM’s nationwide system, Legacy Re-host 2000 (LR2000), has a component that state offices use to track limited public challenge information during the leasing stage but not during any of the other oil and gas development stages. State offices use the system inconsistently because BLM guidance on the use of the system to track oil and gas leasing data is unclear, leading to data gaps. According to BLM guidance, state offices have the option to begin recording data for a given parcel at any of three different points during the leasing stage: (1) prior to the posting of the competitive lease sale notice, (2) the day prior to the lease sale, or (3) after the lease sale. If state offices choose to start recording data at the third point—after the lease sale—the system will not capture public challenges on unsold parcels. For example, because the Wyoming State office begins recording data after the lease sale, the system does not capture public challenge data for unsold parcels in that state office. Wyoming State office officials believe that recording information into the agencywide system prior to the lease sale creates added work and see no merit in tracking public challenges on parcels that are not leased. However, officials from a state office that tracks challenges for unsold parcels noted that doing so provides useful information for managing workload. Because the states are not consistent in entering data into the system, the data cannot be used by headquarters to track public challenges and to assess impacts on the workload of its state offices.
According to officials at some state offices, the volume of public challenges at the leasing stage has increased over the past few years. However, BLM cannot readily provide nationwide data on the number of public challenges made. In addition, it cannot assess the extent to which such challenges affect the workload of its state offices, which is important to understanding what additional staffing and funding resources may be needed to process public challenges. BLM headquarters, field offices, and state offices use multiple, independent data collection systems to collect additional information that they need to track public challenge information at the various stages of oil and gas development. For example, during the planning stage, BLM headquarters tracks pending protests to land-use plans in a stand-alone spreadsheet and in case files. According to a BLM official, BLM headquarters tracks protest information so it can manage its workload in responding to protests. Once a challenge is resolved, information is deleted from the spreadsheet and the data are maintained only in case files and cannot be readily analyzed in aggregate. As a result, BLM cannot readily determine how many protests occurred year-to-year, who the protesters were, what the outcomes were, and the time frames for resolving the protests. Similarly, during the exploration stage, BLM field offices maintain case files on public challenges to geophysical exploration permits. According to a BLM official, the number of geophysical exploration permits issued is so low that it is unnecessary to aggregate information on public challenges to the permits. However, BLM did not have the data readily available for us to verify this condition. BLM state offices have developed their own systems for gathering public challenge data during the leasing and operations stages. During the leasing stage, BLM state offices use spreadsheets and paper files as well as LR2000 to track public challenges. 
The spreadsheets are not integrated with LR2000 or one another. BLM state offices use the information mostly to manage workload associated with protests, appeals, and litigation. Other uses include responding to information requests from protesters and potential leaseholders concerning the status of protests. During the operations stage, stand-alone spreadsheets and paper files are the primary methods state offices use to collect public challenge information. As in the leasing stage, this information is gathered to manage workload associated with responding to public challenges. It is also used to respond to information requests from challengers concerning the status of their challenges and from permit-holders on whether they can begin operations such as road construction and drilling. BLM headquarters does not have ready access to the public challenge data gathered by state offices in stand-alone electronic spreadsheets or paper files. As a result, similar to the planning stage, BLM headquarters cannot readily determine from year to year how many public challenges occurred, including protests, appeals, and litigation; who the challengers were; what the outcomes were; whether the challenges affected split estate land; and the time frames for resolving the challenges. To obtain such information, headquarters must make individual, resource-intensive data calls to state offices. In one instance in June 2004, BLM headquarters requested information from state offices on their backlogs of protest decisions and the affected acreage at the leasing stage. According to a BLM official, the state offices responded in a couple of weeks, and the data indicated that some state offices had a backlog in issuing protest decisions. BLM is developing a system called the national Lease Sale System that is being designed to automate its leasing process and standardize data entered into LR2000. The Lease Sale System will replace five separate state office systems.
This system is being developed because BLM recognizes that “there is a high degree of variability” in the extent to which the five systems can assist BLM state offices in managing the leasing process. In addition, according to BLM justification documentation for the Lease Sale System, “all of the processes and support systems currently in place involve multiple data entry along with intricate data manipulations and data handoffs that open the processes to errors and inefficiencies.” According to BLM headquarters officials, the Lease Sale System, along with LR2000, could be used to gather public challenge data at the leasing stage, and BLM officials are in the process of determining whether to include public challenge data in the Lease Sale System. According to BLM officials, some state offices are reluctant to abandon their current leasing systems and methods of gathering public challenge data, and a consensus has not yet been reached concerning what information, including public challenge data, should be included in the Lease Sale System. According to data provided by MMS officials, during fiscal years 1999 through 2003, MMS was challenged on only one of its 1,631 decisions approving offshore oil and gas development and production and only one of its 1,997 decisions approving oil and gas exploration. Both of the challenged MMS decisions concerned access to mineral resources on the outer continental shelf off the coast of Alaska. In September 1999, MMS’ Alaska regional office approved a development company’s plan to develop and produce oil off the northern coast of Alaska. Several Alaskans and an environmental interest group challenged the plan by filing a lawsuit in federal appeals court. MMS’ decision to approve the plan was challenged on the grounds that MMS did not comply with the requirements of NEPA and the Oil Pollution Act. In September 2001, the court ruled against the challengers.
In February 2002, MMS’ Alaska regional office approved an operator’s plan to conduct exploration activities off the coast of Alaska. A Native American tribe in Alaska and three tribal members challenged the regional office’s decision to the IBLA in May 2002 on the grounds that MMS did not comply with the requirements of NEPA and the Administrative Procedure Act. IBLA denied the challengers’ requests for a stay, and the operator commenced exploration activities while IBLA considered the appeal. Prior to IBLA’s appeal decision, the operator halted activities and, in July 2003, relinquished the lease. For the period we examined, MMS reported no lawsuits challenging its 5-year offshore management plan or the land parcels included in its 13 lease sales. MMS also reported that there were no challenges to the 2,850 drilling permits it issued. Table 2 shows the number of exploration and operations decisions approved by MMS between 1999 and 2003 and the number that were challenged by the public. Existing laws, regulations, and agency procedures allow multiple opportunities for the public to challenge decisions made by BLM, the Forest Service, BIA, and MMS during the four stages of the oil and gas development process. While BLM is the primary agency approving oil and gas activity on federal land, it cannot readily provide nationwide data on the number of public challenges made. Consequently, it cannot assess the extent to which such challenges affect the workload of its state offices, which is important to understanding what additional staffing and funding resources may be needed to process public challenges. Although each state office gathers its own data on public challenges to manage workload, the data are not kept in a standardized format and are not easily accessible. As a result, BLM headquarters must rely on resource-intensive data calls to determine whether its state offices are experiencing backlogs of protested decisions.
The new agencywide system that BLM is developing will provide an opportunity for the agency to maintain public challenge data in a standardized format, at least for the leasing stage, and provide it with more reliable data from which to make resource allocation decisions, but the agency has not yet determined whether it will include public challenge data in the system. We believe that including public challenge data in the new system should, at a minimum, allow BLM headquarters easier access to public challenge data and provide information that will help it better manage workload impacts on its state offices from public challenges. To standardize the collection of public challenge data at the leasing stage for onshore federal lands, we recommend the Secretary of the Interior direct BLM to take the following two actions: Include public challenge data in the new agencywide automated system for selling leases. Issue clear guidance on how public challenge data should be entered into the new system. We provided a draft of this report to the Secretaries of the Interior and Agriculture for review and comment. In commenting on our recommendation for BLM to include public challenge data in its new agencywide system for lease sales, Interior wanted to ensure that the recommendation applied only to the leasing stage and not to other stages of oil and gas development, such as land use planning, geophysical exploration, drilling, and reclamation. It further said the new national Lease Sale System will be designed to track public challenge data on oil and gas lease sales and that BLM is developing a timeline for developing and deploying the new system. Our recommendation is directed to collecting data at the leasing stage and is not intended for other stages of oil and gas development. Interior did not comment on our second recommendation that BLM issue clear guidance on entering public challenge data into the new system.
The Department of Agriculture stated that the report is complete and accurate and provides a good summary of the complex process that BLM and the Forest Service use to jointly manage and make decisions concerning the oil and gas programs and appeals related to agency decisions. Both Interior and Agriculture provided us with technical comments and editorial suggestions. We have made corrections to the report to reflect these comments, as appropriate. As arranged with your office, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to other interested congressional committees. We will also send copies of this report to the Secretaries of Agriculture and the Interior, the Chief of the Forest Service, the Director of BLM, the Director of BIA, and the Director of MMS. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix III. This appendix presents the scope and methodology we used to gather information on the stages when agency decisions about oil and gas development can be challenged by the public and the extent to which the Bureau of Land Management gathers and uses public challenge data to manage its onshore oil and gas program. It also addresses the number of Minerals Management Service offshore oil and gas development decisions that were challenged, who challenged them, and the grounds, time frames, and outcomes of the challenges for fiscal years 1999-2003.
To describe the stages when oil and gas development decisions can be challenged by the public, we analyzed pertinent laws, rules, and regulations pertaining to oil and gas development processes under the jurisdiction of the Bureau of Land Management (BLM), Bureau of Indian Affairs (BIA), and Minerals Management Service (MMS) in the Department of the Interior and the Forest Service in the Department of Agriculture, and we interviewed agency officials. This included a review of statutes including the Federal Land Policy and Management Act, Mineral Leasing Act, National Forest Management Act, Omnibus Indian Mineral Leasing Act, Allotted Mineral Leasing Act, Submerged Lands Act, Outer Continental Shelf Lands Act, National Environmental Policy Act, and associated amendments and regulations. From our analysis of these documents, we determined the administrative procedures the agencies use to manage oil and gas development on federal lands. We also identified the primary stages when the public can challenge oil and gas development decisions—planning, exploration, leasing, and operations—and the types of challenges that can occur (e.g., protests, appeals, and litigation) during each of these stages. We interviewed BLM, BIA, MMS, and Forest Service officials in their respective headquarters, regional, and field offices and in the Department of the Interior’s Solicitor’s Office to discuss the application of the laws and regulations and to enhance our understanding of them. To determine the extent to which BLM gathers and uses data on public challenges to manage its onshore oil and gas program, we identified through discussions with BLM headquarters and state office officials the various management information systems and databases the agency maintains for managing the oil and gas program. We collected and analyzed pertinent manuals, handbooks, memorandums, spreadsheets, and procedures to ascertain the extent to which BLM gathers and records public challenge data on the oil and gas program.
We interviewed BLM headquarters officials to determine what, if any, public challenge data they gathered on a national level for managing the oil and gas program. We also interviewed officials from BLM’s state offices in California, Colorado, Eastern States, New Mexico, Utah, and Wyoming to determine how public challenge data are gathered and used at the state office level and to ascertain how these offices used the agencywide systems for recording such data. We visited the Eastern States office, which has jurisdiction over the 31 states east of the Mississippi River, and the Colorado, New Mexico, Utah, and Wyoming state offices, which, according to BLM headquarters officials, are state offices with a higher volume of oil and gas development activity. To determine, for fiscal years 1999 through 2003, the number of offshore oil and gas development decisions by MMS that were challenged, who challenged them, and the grounds, time frames, and outcomes of the challenges, we performed the following steps. We interviewed MMS headquarters officials to determine the number of planning decisions and lease sales held during fiscal years 1999 through 2003. We also analyzed information in MMS’ Technical Information Management System (TIMS) to identify the number of exploration and operations plans and revisions to plans that MMS approved from fiscal years 1999 through 2003. We reviewed the procedures governing data entry into TIMS to test the reliability of the data provided. To determine the number of public challenges to MMS’ decisions, we interviewed officials at MMS headquarters and its three regional offices: the Gulf of Mexico, Pacific, and Alaska regional offices. Officials from the Alaska regional office indicated that they had two public challenges during this time period. Neither headquarters nor the other regions reported any other public challenges.
We collected and reviewed the case files for the two challenged decisions to identify who challenged the decisions, the basis for the challenge, when the challenges occurred, and their outcomes. We also analyzed records at the Interior Board of Land Appeals and legal briefs provided by MMS Alaska region on these two challenges. We conducted our work from November 2003 to October 2004 in accordance with generally accepted government auditing standards. In addition to those named above, Laura Helm, R. Denton Herring, Richard Johnson, Cynthia Norris, Matthew Reinhart, Patrick Sigl, and Walter Vance made key contributions to this report.
U.S. consumption of oil and natural gas increasingly outpaces domestic production, a gap that is expected to grow rapidly over the next 20 years. There has been increasing concern about U.S. reliance on foreign energy sources. One option being considered is to increase domestic production of resources on land under the jurisdiction of the Department of the Interior's Bureau of Land Management (BLM), Bureau of Indian Affairs (BIA), and Minerals Management Service (MMS) and the Department of Agriculture's Forest Service. GAO determined (1) the stages when agency decisions about oil and gas development can be challenged by the public, (2) the extent to which BLM gathers and uses public challenge data to manage its oil and gas program, and (3) for fiscal years 1999-2003, the number of MMS offshore development decisions that were challenged. At the four stages of developing oil and gas resources--planning, exploration, leasing, and operations--BLM, the Forest Service, BIA, and MMS allow for public challenges to agency decisions. However, the agencies have different procedures for processing challenges that occur within the stages. For example, BLM leasing decisions can be challenged to a BLM state director, further appealed to the Interior Board of Land Appeals (IBLA), and litigated in federal court. Forest Service leasing decisions, however, sometimes can be appealed through the Forest Service supervisory chain of command and litigated in federal court. The Forest Service has no separate appeals board within the Department of Agriculture, such as IBLA, to review decisions. In addition, unlike BLM, the Forest Service has specific time frames during which appeals must be decided. BIA procedures offer opportunities for public challenges at the exploration and leasing stages, which are the only stages at which BIA makes decisions related to oil and gas development.
MMS regulations do not provide for appeals at the planning or leasing stages, but do provide for appeals to IBLA during the exploration and operations stages. All MMS decisions could potentially be litigated in federal court. BLM does not systematically gather and use nationwide information on public challenges to manage its oil and gas program. BLM has a system that state offices use to collect data on public challenges during leasing, but the state offices use it inconsistently because they lack clear guidance from headquarters on which data to enter. As a result, the system does not provide consistent information that BLM headquarters can use to assess workload impacts on its state offices and to make staffing and funding resource allocation decisions. Because this system does not track all the public challenge data necessary for managing workload, headquarters and state offices also use multiple, independent data collection systems that are not integrated with one another or BLM's system. BLM is in the process of developing a new system that provides an opportunity to standardize collection of data on public challenges at the leasing stage. However, it has not decided whether the new system will be used to track public challenge information. Between fiscal years 1999 and 2003, MMS was challenged on only one of its 1,631 decisions approving offshore oil and gas development and production and only one of its 1,997 decisions approving offshore oil and gas exploration. Both decisions concerned land on the outer continental shelf off the coast of Alaska and were challenged by Alaskans, a Native American tribe, or an environmental interest group on the basis that the decisions violated the National Environmental Policy Act and other laws. One of the decisions was litigated in federal court, and the court decided against the challengers. The other decision was appealed to IBLA, but the company discontinued work before a decision was reached.
The 2005 Act made access to safe water and sanitation a U.S. foreign policy objective. Specifically, the act aimed to promote the provision of access to safe water and sanitation to countries, locales, and people with greatest need, including the very poor, women, and other vulnerable populations. Congress passed the 2014 Act to strengthen the 2005 Act by improving the capacity of the U.S. government to implement, monitor, and evaluate programs that increase access to safe water, sanitation, and hygiene. The 2014 Act requires, among other things, that USAID ensure that WASH projects are designed to promote maximum impact and long-term sustainability. The 2014 Act also calls for rigorous monitoring and evaluation to assess improvements in WASH. State’s Office of Conservation and Water within the Bureau of Oceans and International Environmental and Scientific Affairs is responsible for the development and implementation of U.S. foreign policy on international water and sanitation assistance. USAID’s Office of Water within the Bureau for Economic Growth, Education and Environment is responsible for coordinating, managing, and overseeing USAID’s response to water policy initiatives, including the 2014 Act. Annual appropriations acts funding foreign operations have established a spending requirement for WASH. These spending requirements have ranged from $315 million to $365 million since 2012. USAID and State have provided annual guidance to USAID missions regarding activities for which they may attribute funding to the annual spending requirement. The guidance notes that for attribution, proposed activities must be able to demonstrate an impact on water supply, sanitation, or hygiene through objectively verifiable indicators to measure progress. 
The 2005 Act required State to submit an annual report to Congress detailing the status of WASH efforts; these reports included information from USAID about funds allocated to WASH activities to meet the spending requirements for WASH. This reporting requirement was repealed by the 2014 Act. Improving health outcomes through the provision of sustainable WASH is the first strategic objective of USAID’s Water and Development Strategy, 2013-2018 (Water Strategy). To achieve this objective, the Water Strategy includes three key WASH-related goals: Increase first-time and improved access to sustainable water supply. First-time access refers to access to an improved water source that is gained by previously unserved populations. Improved access refers to enhancing existing access to, and the quality of, an already improved water supply. Increase first-time and improved access to sustainable sanitation. First-time access to improved sanitation generally refers to access to a pit latrine with a slab, septic system, or similar type of improved sanitary facility. Improved access to sanitation generally refers to improvement of an existing sanitation facility. Increase adoption of key hygiene behaviors. The Water Strategy recommends the promotion of three hygiene practices with the greatest demonstrated impact on health: (1) hand washing with soap at critical times; (2) safe disposal and management of excreta; and (3) improving household water storage, handling, and treatment. Figure 1 shows USAID’s definitions for improved and unimproved drinking water and sanitation. In March 2014, USAID issued the Water Strategy Implementation Field Guide (Field Guide) as a tool to help missions understand and implement the Water Strategy. The Field Guide requires that USAID missions track progress toward the three WASH-related goals using standard indicators and some custom indicators, as shown in table 1. 
State’s annual reports to Congress detailing the status of WASH projects have included country-level results for two of the standard WASH indicators: (1) number of people gaining access to an improved drinking water source and (2) number of people gaining access to an improved sanitation facility. This information has also been reported in USAID’s annual report on international water-related assistance, Safeguarding the World’s Water. The Water Strategy projected that during the next 5 years, at least 10 million persons would receive sustainable access to improved water supply and 6 million persons would receive sustainable access to improved sanitation. These projections included persons receiving first-time and improved access to water supply and sanitation. According to the Water Strategy and the Field Guide, to achieve the greatest impact, WASH projects should include the following elements: expanded access to “hardware” (e.g., water and sanitation infrastructure and hygiene commodities); required “software” activities to promote behavior changes for sustained improvements in water and sanitation access/service and hygiene practices; and an improved enabling policy and institutional environment, including strengthened financial frameworks and public-private partnerships for WASH. The Field Guide states that the level of effort in each area may vary depending on local context and other factors. In addition, the Water Strategy and the Field Guide highlight gender issues in the water sector as a key focus. They note that the burden of inadequate access to water and sanitation often falls heavily on women and girls and that WASH activities should promote gender equality and female empowerment to address the needs and opportunities of both men and women. The Water Strategy emphasizes the importance of sustainable WASH services.
USAID policy defines sustainable services as public services in which host country partners and beneficiaries take ownership of development processes and maintain project results beyond the life of a USAID project. The strategy notes that the pillars of sustainability for water projects include integrated water resource management, sound governance, and appropriate environmental design, among other factors. In particular, regarding projects to provide first-time and improved access to quality water supply services, the strategy states that methods for ensuring the sustainability of water quality in the long term should be incorporated into project design and that this may include developing monitoring systems to ensure that water quality and supply are sustained at acceptable levels. Further, regarding sanitation services, the strategy notes that it supports the development and testing of improved, low-cost sanitation and waste management technologies, as well as innovative management and financing approaches, to ensure sustainability and to facilitate more rapid expansion of basic sanitation solutions. In fiscal year 2014, USAID designated 22 countries as tier 1 or tier 2 priorities for WASH assistance on the basis of their need and the opportunity to achieve significant impact. The 6 tier 1 countries are those where USAID found an opportunity to have a transformative impact on national-level policies and to leverage host country resources for the development and implementation of assistance. The 16 tier 2 countries are those where USAID determined that relatively small investment levels were likely to generate a significant impact in at least one dimension of WASH. USAID reported attributing about $463.1 million to the annual spending requirements for WASH in fiscal years 2012 through 2014 for these priority countries. Figure 2 shows the locations of the tier 1 and 2 priority countries. 
In the nine countries we selected for our review, USAID missions reported $214.5 million in WASH funding for 74 activities across 6 key focus areas. Each mission’s total funding for WASH activities in these nine countries ranged from about $53.4 million, in Indonesia, to about $4.4 million, in Haiti. The number of WASH activities implemented per mission ranged from 19, in Kenya, to 3, in both Tanzania and Zambia. The most frequent key focus areas for these activities were capacity building and behavior-change communication, followed by infrastructure construction, technical assistance, policy and governance, and financing. Figure 3 provides additional detail about the activities implemented by each selected mission in fiscal years 2012 through 2014. Table 2 describes the 74 activities’ key focus areas and provides examples of the types of WASH activities in each focus area for the nine selected countries. Missions noted that some of the 74 reported activities included WASH as one of several other components, such as maternal-child health, nutrition, or natural resources management. For example, the Senegal mission’s Yaajeende Agriculture activity is an agricultural activity that aims to increase nutritional status by diversifying foods produced and eaten. The activity includes a WASH component to expand access to clean drinking water and improved sanitation and also includes water resources management efforts to promote effective small irrigation technologies. In another example, the DRC mission’s Integrated Health Project aims to improve the enabling environment and increase the availability of services, products, and practices for family planning; maternal, newborn, and child health; nutrition, malaria, and tuberculosis; and WASH in targeted health zones. Additionally, the nine missions reported implementing WASH activities targeting three types of geographic areas: rural, peri-urban (e.g., small towns), and urban.
For example, the Tanzania mission’s Integrated Water, Sanitation, and Hygiene activity targeted rural and peri-urban areas (e.g., small towns) to provide support including community piped water schemes, rehabilitated wells, training for community groups to operate and maintain water systems, and school latrines, among other efforts. In contrast, the Indonesia mission’s Indonesia Urban Water, Sanitation, and Hygiene activity aimed to increase access to clean water and improve sanitation facilities for people in urban settings, where about half of Indonesia’s population lives. According to the Indonesia mission, the activity focused on fostering demand for WASH, building capacity, and strengthening the policy and financing environment for WASH through its work with central and local governments, including entities responsible for delivering water and sanitation services. Table 3 includes information about the geographic focus of the 74 activities in the nine selected countries. In the nine countries we selected for our review, we found that USAID missions are taking steps to develop and implement WASH plans and are incorporating the Water Strategy’s principles and approach into recent and planned WASH activities. The missions are considering sustainability as part of WASH project planning, and USAID’s Office of Water is developing guidance for missions specific to WASH sustainability. We found that the missions in the nine selected countries have made varying degrees of progress in developing WASH plans. In addition, we found that these missions have generally taken steps to implement WASH plans and enhance the strategic approach in recent WASH projects. Our review of documents describing these missions’ strategic approach to WASH, as well as interviews with officials at the nine missions, found that five of the missions had completed WASH plans. 
Missions’ WASH plans may consist of one or more project appraisal documents (PAD), which document a project’s design and expected results. The remaining four missions were in the process of developing or finalizing such plans. Five of the nine missions—in Ethiopia, Indonesia, Kenya, Tanzania, and Zambia—had completed WASH plans and taken steps to implement these plans as of July 2015. The Indonesia and Tanzania missions initially developed WASH plans in fiscal year 2009. The Zambia, Kenya, and Ethiopia missions completed WASH plans more recently, in the period from fiscal year 2013 to fiscal year 2015. Three of the nine missions—in the DRC, Senegal, and Uganda—were in the process of finalizing WASH plans as of July 2015. One mission, in Haiti, had begun developing a plan for a more strategic approach to its WASH activities as of February 2015. According to mission officials, the mission had focused its recent water-related activities on agriculture, water resources management, and water productivity. In response to the Water Strategy, the mission has begun to develop a plan for future WASH activities that will consider findings from a fiscal year 2015 mission WASH sector assessment. Our review of missions’ WASH project documents, as well as semi-structured interviews with officials at the nine missions, also found that these missions have generally taken steps to implement WASH plans and incorporate the Water Strategy’s strategic principles and approach in recent WASH projects, such as by implementing activities to support expanded access to water and sanitation infrastructure along with activities to support capacity building and behavior change. Although four of the nine missions acknowledged that their prior WASH projects lacked strategic focus, all nine missions cited ongoing efforts to develop more focused, strategic, and impactful WASH projects in accordance with the Water Strategy.
government support and leadership for WASH were critical to sustainability and continued improvement in the sector. The Tanzania mission’s primary WASH activity, the Tanzania Integrated Water, Sanitation, and Hygiene Program, focused on increasing water supply through a multiple-use water services approach, which the mission’s WASH plan describes as considering domestic and productive water needs to maximize benefits and increase the cost-effectiveness of WASH investments (see sidebar). Missions that recently completed, or are in the process of finalizing, WASH plans have also taken steps to incorporate the Water Strategy’s principles and approach into recent activities, such as by implementing activities to support expanded access to “hardware” (e.g., water and sanitation infrastructure), as well as to support “software” activities (e.g., efforts to build capacity and promote behavior change). For example, the Zambia mission’s Schools Promoting Learning Achievement through Sanitation and Hygiene activity provided support for latrine construction, rehabilitated water points, and handwashing facilities, as well as hygiene education and capacity building related to operations and maintenance of WASH infrastructure. In addition, missions have supported community-led total sanitation efforts to create demand for improved sanitation. For instance, the Senegal mission’s Millennium Water and Sanitation activity included support for community-led total sanitation, among other components. Additionally, missions cited examples of steps they have planned to enhance the strategic focus and potential impact of WASH projects going forward, such as by increasing investments in sanitation, encouraging local government ownership of efforts, supporting public-private partnerships and WASH financing options, and enhancing activities’ focus on gender issues. 
For example, six of the nine missions—in Ethiopia, Indonesia, Senegal, Tanzania, Uganda, and Zambia—cited plans to increase the focus of their WASH activities on sanitation. The Uganda mission noted that although it had previously implemented several activities that addressed sanitation on a small scale, it plans to focus exclusively on sanitation in its upcoming primary WASH activity. According to the mission, the planned activity will seek to improve access to affordable and acceptable sanitation through public-private partnerships, support for affordable sanitation financing options, and subsidization schemes to reach the poor. Figure 4 summarizes our findings regarding the status of the nine selected missions’ WASH plans and their recent and planned WASH activities as of July 2015. Appendix II provides additional information about the selected missions’ WASH activities. Our review of mission documents and information showed that the nine selected missions are generally taking steps to address sustainability as part of their WASH project planning, as described below:

WASH PADs for all five of the missions that had completed these documents—in Indonesia, Kenya, Tanzania, Uganda, and Zambia—included sustainability analyses. For example, the Kenya mission’s WASH PAD included an annex with a two-page sustainability analysis that described the mission’s planned steps to address challenges to sustainable WASH, including challenges related to choice of technologies and technical approaches, weaknesses in governance, long-term financing needs, and the need for sustained behavior changes. The Uganda and Zambia missions’ PADs’ sustainability analyses did not address WASH specifically but described sustainability considerations in the context of broader project efforts (e.g., health or education).

Missions’ award documents for the 16 activities we reviewed in detail generally described sustainability considerations.
For example, the award agreement for a WASH activity in Senegal noted that decades of development experience have shown that merely building water and sanitation infrastructure is not sufficient to deliver adequate service or to ensure that such services are sustainable. The agreement added that WASH efforts would incorporate principles such as optimal balance between hardware and software activities, local ownership and decentralized management of WASH infrastructure and service delivery, and use of appropriate and affordable technologies.

Five of the nine missions conducted more in-depth WASH sustainability assessments to inform planning and design of WASH activities. The Ethiopia mission completed two studies between April 2012 and February 2013 to assess existing water supply schemes and potential strategies to improve sustainability for the mission’s primary ongoing WASH activity, which was implemented in several regions. In addition, in 2013 and 2014, USAID piloted a tool that it developed with Rotary International to provide an in-depth assessment of the sustainability of several WASH activities implemented by missions in three of the nine countries we selected for our review—Indonesia, Kenya, and Tanzania. The assessments included an overview of the WASH sector in each country; a description of steps taken to apply the sustainability tool; results of using the tool, including scores for institutional, management, financial, technical, and environmental sustainability; key findings; and priority areas for action and recommendations by intervention category (i.e., water supply, sanitation, or hygiene). A USAID official noted that for this type of tool to be effective, USAID would need to use it systematically and incorporate a feedback loop to ensure that results are incorporated into future activities.
The Indonesia mission incorporated the results of using the tool into its latest WASH plan, while the Kenya and Tanzania missions completed their latest WASH plans before the results of the tool were available. However, Kenya and Tanzania mission officials stated that they are considering results of the assessments in planning their upcoming activities.

Mission officials in the selected countries generally told us that they plan to enhance the focus of future activities on sustainability. For example, officials at the Senegal mission noted that they conducted a sustainability analysis in the process of developing the mission’s forthcoming PAD related to WASH. To enhance its focus on sustainability, the mission is planning, among other efforts, to implement private sector reform activities and more directly engage with the government of Senegal to better ensure ownership of water and sanitation projects. Mission officials in the DRC said that they are in the process of determining the cost of monitoring all of their WASH activities after implementation is completed to better assess sustainability. The officials noted that UNICEF, which is implementing one of the mission’s WASH activities, is using its own resources to monitor the activity to assess sustainability 1 year after completion and that the mission would like to standardize the process for other activities.

USAID’s Office of Water has begun to develop guidance to address WASH sustainability. The March 2014 Field Guide states that USAID’s Office of Water will develop guidance for missions to program effectively for sustainable WASH services, including sustainability indicators, monitoring options, and tools for assessing sustainability.
To inform future WASH sustainability guidance, in December 2014, USAID’s Office of Water developed a draft technical paper that included recommendations for designing projects to achieve sustainable WASH service and for confirming sustained results through longer-term monitoring. The draft paper described, among other things, the types of indicators that could be used to assess sustainability, including the functionality and reliability of WASH services. Additionally, the draft paper recommended that USAID’s Water Office (1) supplement its sustainability guidance with questions and resources for missions to consider in WASH project design, (2) adopt sustainability indicators to assess the functionality and reliability of WASH services, (3) develop an approach to post-project monitoring, and (4) conduct an evaluation of the sustainability of WASH services in a sampling of countries. According to USAID officials, USAID’s Office of Water intends to finalize its guidance for addressing the sustainability of WASH activities in 2015. The officials stated that the guidance will likely be broad and include a menu of options, tools, and resources that can be tailored to the context in which each mission implements WASH activities. Our detailed review of 16 WASH activities in the nine selected countries found limitations in the monitoring and reporting of some activities’ performance, although most evaluations we reviewed were sufficiently reliable and methodologically sound for their intended purposes. Monitoring plans for 6 activities did not consistently include annual targets for key WASH indicators in accordance with USAID requirements, limiting the missions’ ability to measure progress toward WASH goals. Also, while the annual reports submitted by the 16 activities’ implementers generally included performance data for the key WASH indicators, the reports for 6 activities did not present data disaggregated by gender as required by USAID policy.
Moreover, contrary to agency guidance, missions did not verify beneficiaries for at least 3 activities aimed at increasing access to improved drinking water sources and overstated beneficiaries for 6 activities aimed at increasing access to improved sanitation facilities, calling into question the accuracy of USAID’s annual reporting about progress toward these WASH goals. The reasons for the missions’ inconsistent adherence to agency guidance regarding annual targets, gender-disaggregated data, and verification of beneficiaries were generally unclear, while USAID officials provided differing reasons for the inaccurate reporting for sanitation activities. In contrast, 12 of the 14 performance evaluations that we reviewed were sufficiently reliable and methodologically sound for their intended purposes. For more than one-third of the 16 activities we reviewed, the monitoring plans did not consistently include annual targets for key WASH indicators, such as the number of people gaining access to an improved drinking water source. USAID requires that at the start of each activity, the implementer establish a monitoring plan with indicators and associated targets to assess progress on an activity. Additionally, State guidance to USAID missions requires that proposed activities demonstrate impact through objectively verifiable indicators to measure progress toward WASH goals, if funds are attributed to the annual congressional spending requirement for international water and sanitation assistance. We found that, of the 16 activities, 3 lacked annual targets for key WASH indicators for the entire duration of the activity and 1 lacked annual targets for key WASH indicators for the remaining 2 years of the activity’s 5-year duration. Additionally, 1 activity included annual targets for the drinking water and sanitation component but not for the hygiene component; for another activity, annual targets were clearly identified for only 2 years of its 5-year duration.
While annual reports for the 16 activities we reviewed generally included performance data for key WASH indicators, the reports for more than half of the activities with beneficiary indicators did not disaggregate data for these indicators by gender as USAID policy requires. Ten activities had indicators to measure numbers of beneficiaries gaining increased access to an improved water source or improved sanitation facility. However, for 6 of these activities, the missions did not disaggregate performance data by beneficiaries’ gender, making it difficult for USAID to assess these activities’ contributions to gender equality and female empowerment. Reasons for the inconsistency that we observed in the selected missions’ adherence to agency guidance regarding annual targets and gender-disaggregated data were unclear. USAID officials in Washington, D.C., noted the absence of consistent reasons among the nine missions for a lack of regular reporting on annual targets and of gender-disaggregated data. Officials at the DRC mission noted that staffing constraints had generally limited their ability to monitor their WASH activities and that they planned to hire additional staff to improve monitoring. Figure 5 summarizes our findings regarding the nine selected missions’ compliance with USAID requirements in documenting targets and reporting performance data for the 16 activities that we reviewed. For some of the 10 activities with a water component that we reviewed, the missions did not verify the numbers of beneficiaries as required, and for most of the 8 activities with a sanitation component, the missions overstated the numbers of beneficiaries. As a result, because activity performance data contribute to USAID’s annual public reporting of WASH results, the data that USAID uses to report the numbers of people gaining access to an improved drinking water source and to an improved sanitation facility annually may not be accurate.
The reasons for the lack of verification for water beneficiaries were generally unclear, and mission officials, as discussed later, provided varying reasons for the inaccurate reporting for sanitation activities. Figure 6 summarizes our findings regarding the nine selected USAID missions’ compliance with State and USAID requirements for verifying beneficiaries of water activities and reporting beneficiaries of sanitation facilities. For 7 of the 10 activities that we reviewed, the missions and implementers did not undertake efforts to verify reported beneficiaries or did not document such efforts, calling into question the accuracy of the reported results. State and USAID guidance for the standard indicator of access to an improved drinking water source requires that the implementer or evaluator verify these estimates by assessing factors such as the amount of time the user spent in collecting water and the quantity of water produced by the new or rehabilitated water source. The reasons for the lack of verification, or documentation of verification, of beneficiaries of the 7 activities were generally unclear. Documents for only 3 of the 10 activities—in Tanzania, Ethiopia, and Indonesia—provided evidence of verification efforts and, in one case, of corrective actions related to verifying results of activities to provide access to improved drinking water. For 4 other activities, it was unclear, on the basis of documents that the missions provided and mission officials’ responses to our queries, whether the reported results were verified. For example, although officials at the Senegal mission stated that they undertook regular site visits and data quality assessments to verify estimated beneficiaries, they did not provide documentation of such verification.
Moreover, the midterm evaluation for this activity noted that estimating the number of beneficiaries on the basis of the Senegalese government’s assumptions was feasible at the planning stage but not during implementation, when the number of beneficiaries could be verified. The evaluation also noted that, absent precise numbers of beneficiaries, it was difficult to determine the cost-effectiveness of the investment. For the remaining 3 activities—2 in the DRC and 1 in Zambia—mission officials informed us that they had not verified the reported results and cited varying reasons. In the DRC, annual reports for 2 of the WASH activities we reviewed indicated that more than 900,000 people gained access to an improved drinking water source between fiscal years 2012 and 2014. In USAID’s annual report for fiscal year 2013, the number of beneficiaries reported as gaining access to an improved drinking water source in the DRC constituted about 13 percent of worldwide beneficiaries (446,989 out of 3,509,090). Officials at the DRC mission informed us that they had been unable to conduct a data quality assessment planned for October 2014 because of challenges that included security concerns and difficulties of traveling in a country with limited roads. Mission officials also noted that estimating numbers of people who gained access to an improved drinking water source was complicated by the displacement of population in the areas where USAID was implementing its WASH activities. In Zambia, the mission reported in fiscal years 2012 and 2013 that 82,606 and 62,098 people, respectively, gained access to an improved drinking water source as a result of the mission’s two school WASH activities.
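The DRC’s share of worldwide beneficiaries can be checked with simple arithmetic; this quick sketch uses only the two beneficiary counts cited above from USAID’s fiscal year 2013 annual report:

```python
# Beneficiary counts from USAID's fiscal year 2013 annual report:
# people reported as gaining access to an improved drinking water source.
drc_beneficiaries = 446_989
worldwide_beneficiaries = 3_509_090

# The DRC's share of the worldwide total, expressed as a percentage.
share = drc_beneficiaries / worldwide_beneficiaries * 100
print(f"{share:.1f}%")  # → 12.7%
```

Rounded to the nearest whole percentage point, this is about 13 percent of the worldwide total.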
Officials at the Zambia mission informed us that although they undertook verification of data for the ongoing school WASH activity’s sanitation component, they did not verify all reported beneficiaries of the water component, which included the school population as well as people from the surrounding communities. The officials noted that while beneficiaries gaining access to improved drinking water in schools could be verified, it was not possible to verify beneficiaries from communities surrounding the school, who also have access to water points constructed or rehabilitated under this activity. For six of the eight activities aimed at increasing access to improved sanitation facilities that we reviewed, the activities’ implementers reported beneficiaries for facilities that did not meet USAID’s definition of an improved facility. As a result, the data reported to track progress toward the goal of increasing access to improved sanitation are likely overstated. USAID uses WHO and UNICEF definitions of an improved sanitation facility, which state that a pit latrine without a slab or platform is an unimproved sanitation facility and that only facilities that are not shared or are not public are considered improved. For six of the eight sanitation activities that we reviewed, implementers tracked and reported numbers of people gaining access to sanitation facilities that included unimproved latrines and shared facilities. Mission officials generally provided differing reasons for the inaccurate reporting for these activities, such as perceived agency emphasis on reporting beneficiaries and adherence to host-government policy or practice. USAID officials in Washington, D.C., noted that, to some extent, the missions’ inconsistency in accurately reporting on the sanitation indicator resulted from inadequate understanding of USAID’s definition of improved sanitation facilities among some mission staff overseeing these activities. 
In the DRC, more than 520,000 people gained access to an improved sanitation facility in fiscal years 2012 through 2014 as a result of the mission’s primary WASH activity, according to activity annual reports. However, according to mission officials, the activity’s reported beneficiaries included those who gained access to household-built latrines, including basic pit latrines. USAID officials in Washington, D.C., stated that community-led total sanitation, which reflects the DRC mission’s approach, involves changing people’s behavior regarding use of sanitation facilities by encouraging them to build latrines; however, the officials said that these latrines generally do not meet USAID’s definition of improved sanitation facilities. These officials noted that the results of community-led total sanitation efforts should be reported for USAID’s WASH indicator for a community becoming open-defecation free rather than for the WASH indicator for increased access to an improved sanitation facility. According to officials at the DRC mission, the mission reported results of community-led total sanitation efforts for the indicator for first-time access to an improved sanitation facility in part because of perceived USAID headquarters emphasis on reporting numbers of beneficiaries of WASH assistance. Mission officials noted that headquarters’ emphasis on numbers of beneficiaries had, to some extent, led the mission to focus on activities to increase direct access to water and sanitation rather than on efforts to improve institutions or governance. In Ethiopia, according to the final report for one of the activities we reviewed, 385,909 people gained access to improved sanitation in fiscal years 2009 through 2013 as a result of this activity. However, the latrines built through this activity included pit latrines without slabs (see fig. 
7 for an example), which do not meet USAID’s definition of improved sanitation facilities; consistent with WHO and UNICEF definitions, USAID categorizes a basic pit latrine without a slab as an unimproved sanitation facility. According to Ethiopia mission officials, the activity did not fund the construction of household latrines but instead encouraged households to build their latrines with locally available materials, consistent with Ethiopian government policy. Furthermore, mission officials noted that the reported results for this activity were based on data from the Ethiopian government’s Health Management Information System, which uses an official Ethiopian government definition of improved sanitation that is not consistent with USAID’s definition. In Indonesia, Kenya, Tanzania, and Zambia, the reported beneficiaries of activities to increase access to improved sanitation facilities included people who gained access to shared facilities, such as school toilets. For example, the Zambia mission reported that more than 133,000 people gained access to an improved sanitation facility in fiscal years 2012 through 2014 as a result of the activity focused on increasing WASH access in schools. However, according to USAID guidance, shared sanitation facilities, such as those in schools and hospitals, cannot be included in the results to track progress on the number of people gaining access to improved sanitation facilities, because this indicator is assessed at the household level. According to officials at the Zambia mission, the data they reported included the entire population of schools where sanitation facilities were built; moreover, the ratio of population to sanitation facilities exceeded the national standards. Mission officials noted that they focused on schools rather than households because the mission’s WASH efforts are aimed at improving the environment in schools to improve education outcomes.
While we found limitations in the monitoring of several of the 16 activities we reviewed, the WASH activity evaluations that we assessed were, in general, sufficiently reliable and methodologically sound for their intended purposes. Ten of the 14 evaluations in our review assessed 9 of the 16 activities that we selected to examine monitoring. As noted previously in this report, performance monitoring and evaluation are separate activities; evaluations entail collection of additional information from sources that may be different from sources of monitoring data. For example, evaluators of the Ethiopia mission’s primary WASH activity selected a random sample of 6 out of 41 activity sites and obtained information through focus groups with potential beneficiaries, interviews with local government officials, and personal observation of new or rehabilitated water sources. The evaluators used monitoring data where it was available for one indicator (final target and final performance data for beneficiaries gaining access to an improved water source) but independently collected data via surveys to assess the results of the hygiene and sanitation component, for which the implementer established a final target but did not report on annual targets or performance data. This example also illustrates that while evaluations are distinct from monitoring, they can fill some gaps in monitoring information. (See app. III for USAID’s evaluation findings related to monitoring, outcomes, and sustainability.) We assessed 14 evaluations of USAID’s WASH-related activities—2 baseline evaluations, 6 midterm evaluations, and 7 final evaluations—that were conducted in fiscal years 2012 through 2014 in the nine selected countries.
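The site-selection step described for the Ethiopia evaluation above (a simple random sample of 6 of 41 activity sites) can be sketched with the standard library; the site labels below are hypothetical placeholders, not the actual sites:

```python
import random

# Hypothetical labels standing in for the 41 activity sites.
sites = [f"site-{i:02d}" for i in range(1, 42)]

# Simple random sample of 6 sites, drawn without replacement;
# the fixed seed only makes this sketch reproducible.
rng = random.Random(2015)
selected = rng.sample(sites, k=6)
print(selected)
```

Because every site has an equal chance of selection, findings from such a sample can be generalized to all sites, unlike the convenience sampling used in the two evaluations with significant design limitations.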
We assessed the methodological quality of these evaluations on the basis of established evaluation principles, including the appropriateness of the evaluation design; clarity of population selection; clarity of data collection; and adequacy of support for findings, conclusions, and recommendations. Of the 14 evaluations, 7 had clearly supported findings, conclusions, and recommendations, while 5 had certain limitations in their support for some findings, conclusions, or recommendations. These limitations included insufficient details about data collection and unclear or inappropriate criteria for population selection, given research objectives. However, we determined that, while such limitations can lead to unsupported findings and limit the usefulness of findings, conclusions, and recommendations, the limitations in these 5 evaluations were either clearly stated or otherwise did not substantially detract from the evaluations’ overall purpose or utility. The 2 remaining evaluations had significant design limitations that suggested a lack of appropriate support for at least one finding, conclusion, or recommendation. For example, the sampling approaches for these evaluations were problematic. Both evaluations selected locations or participants on the basis of convenience, which is a nongeneralizable sampling method. This approach was not appropriate, because the evaluations’ research questions were aimed at generalizing to the entire population or establishing a baseline for the future. Figure 8 shows the results of our assessment of the 14 evaluations of USAID’s WASH activities. We found that in some cases, USAID incorporated evaluation results to improve activity monitoring. Our interviews with mission officials and reviews of monitoring documents indicated that the selected USAID missions modified ongoing or planned activities—for at least 6 of the 10 we reviewed that had relevant evaluations—on the basis of evaluation results. 
For example, the midterm evaluation of the primary WASH activity in Indonesia recommended reducing the activity's target of 40,000 households willing to pay for sanitation improvements, because of the challenges involved in meeting this goal. As a result, the Indonesia mission modified its agreement with the implementer and reduced the target to 15,000 households. Mission officials noted that the midterm evaluation also provided specific recommendations for future USAID projects that would be taken into consideration during the design process for follow-on WASH investments in Indonesia. Of the 7 final performance evaluations of WASH activities that we reviewed, 3 were conducted before the activity's completion, 3 were conducted within 1 month of the activity's completion, and 1 was conducted within 3 months of the activity's completion. As a result, these studies were not set up to allow for an assessment of the longer-term sustainability of these projects. To enable assessments of activities' impact and sustainability, USAID has identified plans to conduct evaluations for some WASH activities several years after project completion. Specifically, USAID's Water Strategy notes that the agency plans to conduct assessments of WASH sustainability beyond the typical USAID project cycle and to provide support for issues that arise subsequent to completion of WASH activities. Additionally, in an April 2014 document, USAID noted plans to increase its focus on sustaining development outcomes by, among other things, conducting evaluations 3 to 5 years after project conclusion. According to USAID, such long-term evaluations provide opportunities to explore the impact of interventions and may contribute to a deeper understanding of programmatic risk. Since 2005, USAID has reported that millions have gained access to improved drinking water and sanitation facilities as a result of its assistance. 
In 2013, USAID issued its first Water Strategy, articulating goals for the provision of sustainable WASH assistance. In response, USAID missions in a number of priority countries are developing a more strategic approach to WASH, and USAID’s Office of Water and missions have begun taking steps to address sustainability of WASH investments. Nevertheless, these efforts are still in the early stages. However, limitations in some missions’ monitoring and reporting for WASH activities that we reviewed call into question USAID’s ability to reliably assess and report progress toward its strategic WASH goals. Unless USAID identifies and addresses factors contributing to missions’ inconsistent adherence to guidance regarding establishing annual targets for key WASH indicators for all WASH activities and disaggregating activity data by gender, USAID cannot reliably measure these activities’ contributions to achieving WASH goals or toward gender equality and women’s empowerment. Moreover, unless USAID identifies and addresses factors contributing to missions’ inconsistent adherence to guidance for verifying beneficiaries of water activities and accurately reporting beneficiaries of sanitation activities, USAID cannot ensure the accuracy of its annual reports regarding progress in increasing access to safe water and sanitation. To effectively address limitations in missions’ monitoring and reporting of USAID’s WASH activities, we are making the following two recommendations to the USAID Administrator. Specifically, with respect to inconsistent adherence to agency guidance for establishing annual targets, for reporting gender disaggregated data, for verifying beneficiaries of water activities, and for accurately reporting beneficiaries of sanitation activities, USAID should identify factors contributing to missions’ inconsistent adherence to agency guidance and take steps to address these factors. We provided a draft of this report to State and USAID. 
USAID provided written comments, which appear in appendix IV, as well as technical comments that we incorporated as appropriate. State did not provide comments. In its written comments, USAID generally concurred with our recommendations and outlined steps it is taking to address our second recommendation. Following are highlights of USAID’s comments, with our evaluation: 1. USAID noted that our definition of “obligations” excludes bilateral obligations that have not yet been sub-obligated at the mission-level for WASH activities. As our report states, for the purposes of our review, we defined obligations as orders placed, contracts awarded, and similar transactions during a given period that will require payments during the same or a future period. USAID categorizes these as “sub-obligations,” because it considers these funds to have been obligated through a bilateral agreement with the host country. In reporting funding for nine selected missions’ WASH activities, we generally reported total allocations for WASH activities ongoing at each mission in fiscal years 2012 through 2014, including, when applicable, funding for years before fiscal year 2012 and through fiscal year 2014. (State and USAID define allocations as the distribution of resources to bureaus and operating units by foreign assistance account.) We included obligations when USAID data did not show WASH activity allocations. 2. USAID stated that it develops targets through multiple processes and that it had provided us with performance plans and reports, which include future-year targets for missions. On the basis of USAID’s technical comments on our draft report, we revised our assessment to acknowledge that the mission-wide performance management plans and reports for fiscal years 2012 and 2013 included annual targets for the two activities we had selected for Zambia (“Schools Promoting Learning Achievement through Sanitation and Hygiene” and “Partnership for Integrated Social Marketing”). 
Although we included this information in response to USAID's comments, it is important to note that neither the monitoring plans nor the annual reports for these two activities included annual targets. According to USAID policy (ADS ch. 203), activity-level monitoring plans feed into project-level monitoring plans and mission-wide performance management plans. Therefore, the absence of annual targets in activity-level monitoring plans calls into question the completeness of the Zambia mission-wide performance management plan. Additionally, given that mission-wide performance plans and reports provide only aggregate target and performance data, it may not be possible to identify activity-specific targets or performance data from these reports. 3. USAID stated that our report does not reference certain key documents that USAID had provided. Although USAID's letter does not specify which key documents it is referring to, in its technical comments, USAID refers to mission-wide performance plans and reports and to State's annual Water Key Issue Definition guidance. In our report, we refer to performance plans and reports as mission-level annual reporting. In appendix I of our report, we also note that to assess data reliability, we compared implementer reporting on an activity with mission-level annual reporting for the nine selected USAID missions. In addition, our report notes that we reviewed State's annual Water Key Issue Definition guidance and assessed the extent to which WASH activities in the nine missions generally adhered to this guidance's requirement regarding activities for which funds can be attributed to the congressional spending requirement for water and sanitation. We described the content of State's guidance in the background section of our report, where we added a footnote, in response to USAID comments, to more clearly identify the guidance documents we referred to. 4. 
USAID stated that the inconsistencies identified in our draft report pertaining to the inaccurate categorization of results were the byproduct of isolated incidences of reporting against incorrect indicators. We found inaccuracies in six of eight activities we reviewed that reported on the standard indicator "number of people gaining access to an improved sanitation facility." While USAID's comments confirmed these inaccuracies, the agency has not provided any documentation or other support for its statement that they represent isolated incidences. 5. USAID stated that missions are allowed to use custom indicators to track results against water-directive funded activities. We did not intend to imply that use of custom indicators was not in compliance with State and USAID requirements. While we assessed the selected missions' use of a number of WASH indicators to track progress for 16 selected activities, our review focused in particular on the standard indicators for access to improved water source and improved sanitation. As our report notes, State and USAID have reported overall progress on WASH using these two indicators, and USAID's Water and Development Strategy (2013-2018) uses these two indicators to project numbers of beneficiaries during the strategy's 5-year period (i.e., at least 10 million persons would receive sustainable access to improved water supply and 6 million persons would receive sustainable access to improved sanitation). Nevertheless, in our report's background section, we include a table of standard and custom indicators that USAID allows missions to use in reporting on results of WASH activities. Additionally, on the basis of a follow-up discussion with USAID, we understand that the agency was particularly concerned that our report might seem to imply that its Indonesia mission's use of a custom indicator for verifying households for the mission's Water Hibah activity was not in compliance with agency guidance. 
As a result, we have added a note to figure 6 to clarify that USAID guidance allows missions the flexibility to report using custom indicators. 6. USAID noted that it informed us that verification information could be found in data quality assessments and site visit reports from missions such as Senegal. During our audit work, after the mission informed us that it had conducted verification of beneficiaries through these assessments, we requested data quality assessments from the USAID mission in Senegal. However, the mission did not provide these assessments to us. In the absence of documentation of verification of beneficiaries, we maintain that it is unclear whether the mission verified the number of beneficiaries gaining access to an improved water source. We are sending copies of this report to the appropriate congressional committees, the USAID Administrator, and the Secretary of State. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. In this report, we (1) describe the types of water supply, sanitation, and hygiene (WASH) activities the U.S. Agency for International Development (USAID) has implemented in selected countries and the funding it has provided for these activities; (2) assess the extent to which USAID guidance has informed the agency’s efforts to plan and implement WASH activities in these countries; and (3) assess USAID’s monitoring and evaluation of selected WASH activities. 
We focused our review on WASH activities that USAID missions had implemented in 9 selected countries: the Democratic Republic of the Congo (DRC), Ethiopia, Haiti, Indonesia, Kenya, Tanzania, Senegal, Uganda, and Zambia. We selected these countries from the list of 22 countries that USAID designated as priority countries for WASH assistance in fiscal year 2014. We based our country selection primarily on the amounts of funding that the missions in these countries attributed to the congressional spending requirement for international water and sanitation assistance in fiscal years 2012 through 2013. During that period, WASH assistance in the 9 selected countries accounted for 53 percent—$155 million—of the funding attributed to the spending requirement by the missions in all tier 1 and tier 2 priority countries. To address our three reporting objectives, we interviewed USAID and Department of State (State) officials and conducted semi-structured telephone interviews with officials at the nine USAID missions regarding their efforts to plan, monitor, and evaluate WASH activities. Because we judgmentally selected the nine USAID missions for our review, our findings from these interviews cannot be generalized to all USAID missions. In addition, we conducted fieldwork in Tanzania and Ethiopia from January 26, 2015, to February 6, 2015. We selected these countries for fieldwork based on factors such as level of funding and type of WASH activities implemented in each country. For example, the Ethiopia mission attributed the most funding to the congressional spending requirement in fiscal years 2012 through 2013 for a range of WASH activities, and the Tanzania mission has implemented one long-standing (since 2009) primary activity focused on WASH. 
To describe the types of WASH activities that USAID has implemented and the funding it has provided for these activities, we developed a data collection instrument to obtain funding data and descriptive information from each mission regarding its WASH activities that were ongoing or planned during fiscal years 2012 through 2014. Each mission’s total funding for WASH activities generally represents allocations for WASH components of activities that were ongoing at the mission at any time in fiscal years 2012 through 2014, including, when applicable, WASH funding for years before fiscal year 2012 and through fiscal year 2014. We included obligations in cases for which USAID was not able to provide allocations for WASH activities, such as when a USAID mission obligated unplanned funding to an activity for WASH. To assess the reliability of the data and information we obtained, we reviewed documentation and interviewed agency officials to identify and correct any missing data and any errors. We determined that the data and information we gathered were sufficiently reliable to provide general information about the types of activities implemented and approximate funding provided for these activities. To assess the extent to which USAID guidance informed planning and implementation of WASH activities, we reviewed USAID guidance, mission-level WASH plans, and other documents. We reviewed USAID guidance including: (1) USAID’s Automated Directives System (ADS), chapter 201, which contains agency policies and procedures and includes guidance on strategic planning and project and activity design; (2) USAID’s Water and Development Strategy, 2013-2018 (Water Strategy); and (3) USAID’s Water and Development Strategy Implementation Field Guide (Field Guide). 
We assessed the status of the USAID missions’ efforts to develop WASH plans, steps that they have taken to implement these plans, and steps that they have taken to adopt the Water Strategy’s principles and approach for their recent or planned WASH activities. Missions’ WASH plans may consist of one or more project appraisal documents (PAD), which document a project’s design and expected results, as well as a WASH sector assessment or other related documents that describe the mission’s strategic approach to WASH. To assess steps that missions have taken to address sustainability, we reviewed documents that included PADs and sustainability assessments; we also reviewed award documents for selected activities. We generally selected two activities per country on the basis of factors such as the level of funding allocated and types of WASH activities implemented in fiscal years 2012 through 2014. We based our activity selection on funding data reported in State’s Foreign Assistance Coordination and Tracking System for fiscal years 2012 and 2013, data included in missions’ operational plans for fiscal year 2014, and discussions with mission officials about their WASH activities between fiscal years 2012 through 2014. The activities we selected included those to which the missions in the selected countries had allocated the largest amounts of WASH funding, with one exception: In Uganda, the selected activities did not include the Uganda mission’s activity that received the largest allocations of WASH funding in fiscal years 2012 through 2014, because the mission had allocated the majority of funding for its largest WASH activity before fiscal year 2012, did not initially inform us of this activity, and did not provide the prior-year funding data until after we had made our activity selection. We selected one activity in Tanzania, because the Tanzania mission implemented only one primary WASH activity during the period we reviewed. 
We initially selected three WASH activities in Haiti. However, the Haiti mission subsequently informed us that two of the selected activities did not have a WASH component and that the mission had incorrectly attributed WASH funds to these activities and planned to take corrective action to ensure use of attributed funds for WASH. As a result, we reviewed one WASH activity for Haiti. To assess monitoring of WASH activities in the nine countries, we obtained and analyzed documentation for selected WASH activities, including award agreements and modifications, performance management plans, monitoring and evaluation plans, quarterly and annual monitoring reports, and annual funding data. We compared monitoring plans with quarterly and annual reports to determine whether specific WASH indicators had been identified for the activity. Since several activities had multiple WASH indicators, we focused our analysis on standard indicators related to water, sanitation, or hygiene that were used to monitor the activity. When an activity’s monitoring plans included no standard WASH indicators, we identified at least one activity indicator related to access to water, access to sanitation, or hygiene improvement, as relevant. For the purposes of this report, we refer to the indicators that we identified for our review as key WASH indicators. We reviewed each activity’s monitoring plan and monitoring reports to determine whether (1) annual targets were identified for the key WASH indicators and (2) results or performance data for each key WASH indicator were reported on an annual basis, if applicable. Further, we reviewed monitoring reports to assess whether they included gender-disaggregated data for indicators on number of people gaining access to an improved drinking water source or number of people gaining access to an improved sanitation facility. 
We compared the reported performance data for three key WASH indicators—number of people gaining access to an improved drinking water source, number of people gaining access to an improved sanitation facility, and number of liters of drinking water disinfected with point-of-use treatment products as a result of U.S. government assistance—with USAID and State's guidance for these indicators to assess the extent to which the reported data conformed to the definitions in the guidance document. Although USAID and State guidance required verification of estimated beneficiaries for the indicator for access to an improved water source, the guidance did not require verification of estimated numbers of liters of water purified. In addition to assessing activity performance data against agency guidance, we conducted internal consistency checks to assess the reliability of reported data. For example, to the extent feasible, we compared implementer reporting on an activity with mission-level annual reporting and USAID-wide annual reporting on WASH indicators. Because we found data inconsistencies for several activities, as noted in the report, we did not use the performance data to report the extent to which activities met intended targets. To assess USAID evaluations for WASH activities in the nine countries, we selected all evaluations for WASH activities completed in fiscal years 2012 through 2014. We identified 14 completed evaluations, including evaluations for 10 of the 16 activities we selected to assess monitoring. To assess the soundness of the evaluations, we reviewed background information about programs and evaluation questions, assessed evaluation design and process, and considered the evaluation results and limitations of each study. Two GAO specialists conducted these assessments independently, using a tool that incorporated key elements of USAID's scope-of-work checklist for evaluations and considered various aspects of these issues. 
The two specialists compared the results of their independent assessments and came to agreement about all conclusions. We did not assess selected USAID missions' compliance with the requirement to conduct evaluations. To review background information about the evaluated programs, we considered whether the evaluations addressed evaluator independence, program objectives and mechanisms, and evaluation goals. We also assessed whether the relationship between the evaluation objectives and program design was clear and appropriate. To assess evaluation design and process, we considered whether the evaluations clearly described their design; whether appropriate methods were used to select the locations and people covered by the study, including whether the evaluations provided sufficient detail about sampling methods; whether measures used were clearly related to evaluation questions; and whether data collection and analysis were sufficient and appropriate. We also assessed whether selection methods, sample sizes, criteria, measures, data collection, analysis, and overall design were appropriate, given the evaluation objectives. To consider evaluation results and limitations, we determined whether findings, conclusions, recommendations, and lessons learned were clearly stated, whether stakeholders were given an opportunity to comment on the results, and whether evaluations provided information about how results should be used. We also assessed whether the evaluations clearly and sufficiently described assumptions and limitations of their design and results, including potential biases, confounding variables, unintended consequences, alternative explanations, and methodological limitations. We assessed whether any findings, conclusions, recommendations, or lessons learned were appropriately supported and caveated, given the evaluation design. 
To summarize evaluation results, we determined whether the sections of evaluations related to program monitoring, outcomes, or sustainability contained descriptive information or appropriately supported findings, conclusions, recommendations, or lessons learned. We conducted this performance audit from July 2014 to October 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Following are funding data and descriptive information about water, sanitation, and hygiene (WASH) activities implemented in fiscal years 2012 through 2014 in the nine countries that we selected for our review— the Democratic Republic of the Congo, Ethiopia, Haiti, Indonesia, Kenya, Tanzania, Senegal, Uganda, and Zambia. The U.S. Agency for International Development (USAID) mission in the Democratic Republic of the Congo implemented seven WASH activities in fiscal years 2012 to 2014, with WASH activity funding totaling $26,125,075 (see table 4). The mission reported two primary WASH activities—the Integrated Health Project and Sustainable WASH Interventions: Healthy Villages Program in Two Health Zones—focused on improving access to water, sanitation, and hygiene services in target locations. Allocations of WASH funding for these two activities totaled $22,518,470. The USAID mission in Ethiopia implemented 11 WASH activities in fiscal years 2012 through 2014, with WASH activity funding totaling $24,055,770 (see table 5). The mission’s largest WASH activity, called Water, Sanitation, and Hygiene Transformation for Enhanced Resilience, included $10,984,723 in funding for WASH and focused on providing water infrastructure in pastoralist areas. 
Figure 9 shows a drinking water reservoir and a water point constructed through the Ethiopia mission’s largest WASH activity. The USAID mission in Haiti implemented six WASH activities in fiscal years 2012 through 2014, with WASH activity funding totaling $4,436,481 (see table 6). The mission’s largest WASH activity, called Santé pour le Développement et la Stabilité d’Haiti, included $1,711,000 in funding for the WASH component of a broader activity that focused on improving the health status of Haitians through improved primary care, referral networks, and management practices at health facilities and in communities. The USAID mission in Indonesia implemented five WASH activities in fiscal years 2012 through 2014, with WASH activity funding totaling $53,401,700. The mission’s largest WASH activity, called Indonesia Urban Water, Sanitation, and Hygiene, included $38,696,403 in WASH funding and focused on providing access to water and sanitation facilities in urban areas. The USAID mission in Kenya implemented 19 WASH activities in fiscal years 2012 through 2014, with WASH activity funding totaling $18,245,655. The mission’s largest WASH activity, called AIDS, Population and Health Integrated Assistance Program (APHIA) Plus Northern Arid Lands Service Delivery, included $3,024,887 in funding for WASH. The activity focuses on HIV/AIDS, maternal and child health, WASH, and nutrition for orphaned and vulnerable children. The mission reported five similar activities targeting other regions of the country. The USAID mission in Senegal implemented five WASH activities in fiscal years 2012 through 2014, with WASH activity funding totaling $27,616,000. The mission’s largest WASH activity, called Senegal Millennium Water and Sanitation Program, included $20,866,000 in funding for WASH. 
The activity focused on governance and management, the creation of local business opportunities, increasing demand for clean water and sanitation, the construction of infrastructure, and hygiene. The USAID mission in Tanzania implemented three WASH activities in fiscal years 2012 through 2014, with WASH activity funding totaling $17,753,586. The mission's primary WASH activity, called Tanzania Integrated Water, Sanitation and Hygiene Program, included $17,443,586 in funding for WASH. The activity supported community piped and gravity-fed water schemes, rehabilitated wells with rope pumps, training for community groups responsible for operations and maintenance of water schemes, and school latrines. The activity also included a water resource management component. Figure 10 shows a demonstration rope pump, which the Tanzania Integrated Water, Sanitation and Hygiene Program supported as a cost-effective, easy-to-maintain technology, and a piped water point in the village of Mvumi, Wami-Ruvu River Basin, Tanzania. The USAID mission in Uganda implemented 15 WASH activities in fiscal years 2012 through 2014, with WASH activity funding totaling $15,284,415. The mission's largest WASH activity, called Northern Uganda Development of Enhanced Local Governance, Infrastructure, and Livelihoods, included $6,225,000 in funding to support local government efforts in northern Uganda to expand basic WASH services. Another activity, called WASHPlus: Supportive Environments for Healthy Communities, included $500,000 in funding for WASH and aimed to build the capacity of district government and USAID implementing partners for WASH efforts to support community-led total sanitation, promoting handwashing in villages, and integrating WASH with nutrition and HIV/AIDS services and programs. The USAID mission in Zambia implemented three WASH activities in fiscal years 2012 through 2014, with WASH activity funding totaling $27,600,000. 
The mission’s largest WASH activity, called Schools Promoting Learning Achievement through Sanitation and Hygiene, included $15,200,000 in funding for WASH and focused on providing WASH services in schools. Efforts included hygiene education, capacity building for operations and maintenance, and support to establish private-sector spare-parts supply. Our review of 14 U.S. Agency for International Development (USAID) evaluations of water, sanitation, and hygiene (WASH) activities in nine selected countries found that the evaluations assessed monitoring, outcomes, and sustainability to varying extents. Monitoring. Evaluations reported on various issues, including monitoring, WASH indicators, indicators’ limitations, and data related to gender. All 14 evaluations reported on monitoring of WASH activities. Ten evaluations reported on one or more of USAID’s indicators related to access to drinking water, sanitation, and hygiene, including whether WASH activities were on track to meet their targets. Five evaluations discussed limitations in the quality of one or more indicators used to monitor WASH activities. For example, an evaluation of Senegal’s Yaajeende activity noted limitations of the indicator for the number of individuals trained on improved hygiene behaviors. The evaluation stated that, according to USAID’s guidance, the success of training and other interventions related to human and organizational capacity building is to be measured by improvement in organizational output and performance, not simply by the number of individuals trained. Five evaluations presented and discussed data related to gender, thereby filling some gaps that we had identified related to a lack of disaggregated gender data in monitoring reports. For example, the evaluations for the primary WASH activities in Ethiopia and Senegal assessed women’s participation in community water management committees, which were generally responsible for operations and maintenance (see fig. 11). 
Outcomes. Six of the 14 evaluations that we reviewed provided insights into WASH activity outcomes. Specifically, these evaluations had findings related to outcomes such as disease incidence, health expenses, school attendance, time spent getting water, economic impacts, and beneficiary satisfaction. For example, the evaluation of Ethiopia’s Water Sanitation and Hygiene Transformation for Enhanced Resiliency found that the activity resulted in access to safe water at a much closer distance than before and also increased access to safe latrines and improved health practices (such as handwashing). In addition, the evaluation found that activity results included increased time for beneficiaries to participate in other productive and income-generating activities, including more time at school, as well as reduced health expenses. Sustainability. Twelve of the 14 evaluations that we reviewed addressed the sustainability of WASH activities. Specifically, these evaluations broadly discussed WASH sustainability issues, and 7 of the 12 had findings related to WASH sustainability challenges. These challenges included limitations related to capacity building, a lack of spare parts, and a lack of funding for operations and maintenance. For example, the evaluation of Tanzania’s Integrated Water, Sanitation, and Hygiene Program reported on factors that improved sustainability, such as the ease of repairing a rope pump. The Tanzania evaluation also noted challenges related to capacity building for community water committees. The evaluation assessed 10 of 26 committees as having “fair” usage and maintenance, where community fees were generally not collected or maintenance was spotty, and rated 6 of the 26 as “poor” for underperformance relative to the rest of the project. 
In addition to the contact named above, Emil Friberg (Assistant Director), Mona Sehgal (Assistant Director), Lisa Helmer, Mitchell Delaney, Jesse Elrod, Reid Lowe, Bethany Patten, Steven Putansu, and Monica Savoy made significant contributions to this report. Mark Dowling, Jon Melhus, and Ozzy Trevino provided technical assistance.
Millions of people in developing countries lack access to safe water and improved sanitation. Congress passed the Senator Paul Simon Water for the Poor Act of 2005 to improve access to safe water and sanitation for developing countries. In 2013, USAID released its first Water and Development Strategy, which includes the objective of improving health through sustainable WASH. GAO was asked to review USAID's WASH efforts. Focusing on WASH activities in 9 selected countries, this report (1) describes recent activities and funding, (2) assesses USAID missions' efforts to plan and implement activities, and (3) assesses USAID's monitoring of activities. GAO selected a nongeneralizable sample of 9 countries from USAID's list of 22 priority WASH countries. These 9 countries received about 53 percent of funding attributed to WASH for fiscal years 2012 and 2013. GAO also selected 16 activities for detailed review in the 9 countries, primarily on the basis of levels of funding. GAO analyzed USAID WASH funding data for fiscal years 2012 through 2014; reviewed agency documents; interviewed mission officials; and visited sites in 2 African countries. U.S. Agency for International Development (USAID) missions in the 9 countries GAO selected for its review reported implementing a variety of water supply, sanitation, and hygiene (WASH) activities in fiscal years 2012 through 2014. WASH activities included capacity building, behavior-change communication, infrastructure construction, technical assistance, policy and governance, and financing. The missions' funding for WASH activities in these countries ranged from $4.4 million to $53.4 million. Note: Funding shown generally represents allocations for activities through Sept. 2014. USAID missions in these 9 countries are taking steps to develop and implement plans for WASH activities, with some missions making more progress than others.
These missions are also generally taking steps to address long-term sustainability when planning WASH activities, as directed by USAID guidance, including the Water and Development Strategy. USAID is in the process of developing additional guidance to help all its missions address the sustainability of WASH activities. The completeness and accuracy of USAID's monitoring of WASH activities varied in the 9 selected countries. GAO found that, inconsistent with agency guidance, these missions did not (1) consistently set annual targets for 6 of 16 WASH activities, (2) disaggregate beneficiaries by gender for 6 of 10 water supply and sanitation activities, (3) verify the accuracy of beneficiary data for 3 of 10 water supply activities, and (4) report accurate numbers of beneficiaries for 6 of 8 sanitation activities. Mission officials cited a variety of reasons for inconsistent adherence with agency guidance in some instances; in others, the reasons for inconsistent adherence were not clear. These limitations in the completeness and accuracy of monitoring information for WASH activities may inhibit the effectiveness of USAID's oversight of such activities and affect its ability to accurately report on progress in increasing access to safe water and sanitation. GAO recommends that USAID take steps to improve monitoring and reporting of WASH activities by identifying and addressing reasons for missions' inconsistent adherence with agency guidance. USAID generally concurred with the recommendations and, in particular, outlined steps it is taking to address the report's second recommendation.
The Launching Our Communities’ Access to Local Television Act of 2000 created a guaranteed loan program to facilitate access to signals of local television stations for households located in nonserved and underserved areas of the United States. The Act established the LOCAL Television Loan Guarantee Board (Board) whose primary function is to approve loan guarantees to finance projects to provide local television access for communities in remote areas throughout the United States. The Board is authorized to approve loan guarantees up to 80 percent of the aggregate value of each loan. The Board may not approve loan guarantees after December 31, 2006, and the aggregate of all loans guaranteed may not be more than $1.25 billion. Each loan must be repaid within a term equal to the lesser of 25 years from the date of execution of the loan or the economically useful life of the primary assets to be used in the delivery of the signal involved. The Act set forth specific provisions and requirements for the Board to implement this new program. Specifically, the Act required the Board to: (1) direct the Administrator to prescribe regulations within 120 days after the Congress appropriated funds, (2) develop underwriting criteria in consultation with the Director, Office of Management and Budget (OMB) and an independent public accounting firm (IPA) within 120 days after the Congress appropriated funds, (3) establish and collect loan application and loan guarantee origination fees to offset the cost of administering the Program under the Act, including the costs of the Board and the Administrator, and (4) consider numerous other specialized technical and business requirements prior to approving a loan guarantee. In addition to developing the regulations, the Act directed RUS, an agency of the Department of Agriculture’s Rural Development, to issue and administer loan guarantees that have been approved by the Board.
This is consistent with RUS’s mission of administering loan and grant programs, including those to finance projects so rural areas can have, among other things, modern, affordable electricity, telecommunications, public water, and waste removal services. Based on authority granted in the Act, the Board established a Working Group, consisting of senior-level officials from the various departments and agencies that represent the Board, to assist it with activities to implement the Program. The costs incurred by the Working Group members to support the Board have been borne by the respective departments and agencies from within their existing budgetary resources (i.e., salaries and expense appropriations or accounts). Although the Act, which required the establishment of program regulations and underwriting criteria, was passed on December 21, 2000, initial funding for the Program was not provided until November 2001 through the Agriculture, Rural Development, Food and Drug Administration, and Related Agencies Appropriations Act, 2002. That Act provided $258 million in loan guarantee authority and $2 million for administrative expenses. Later in the fiscal year, two additional pieces of legislation resulted in USDA receiving a combined total of approximately $1.07 billion in loan guarantee authority available for providing access to local TV stations through direct broadcast satellite (DBS) or some other means. Figure 1 illustrates the relationships between the Congress, federal entities involved in implementing the LOCAL TV Program, and the public. To determine how the provisions of the Act were administered, we focused primarily on program activities and related obligations and administrative expenses that were incurred on behalf of the Program during fiscal year 2002. We analyzed the LOCAL TV Act to obtain an understanding of its provisions and reviewed legislation concerning the Program’s funding.
We obtained and evaluated information from the LOCAL TV Board including its internal operating regulations, minutes from Board meetings, the IPA’s technical and price proposals, the solicitation to obtain information related to the legal advisory services for the Board, and other budget and cost information to obtain an understanding of the activities that occurred to implement the Program during fiscal year 2002. We reviewed OMB circulars and federal accounting standards, as applicable. We did not independently verify or audit the cost data we obtained from the Board. We did not review the proposed regulations or draft underwriting criteria because they were not made available to us while OMB was completing its review. We conducted our work from February 2003 through August 2003 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Chairman of the Board and the Department of Agriculture. The Department of Agriculture chose to have the Board incorporate its views into the Board’s overall response. The Board’s comments are discussed in the Agency Comments and Our Evaluation section of this report and are reprinted in appendix I. The Board also provided technical comments on our draft report, which we incorporated as appropriate. Under the requirements of the authorizing legislation and the timing of the available appropriation, the Board was to have had the program regulations and underwriting criteria completed and ready to implement within 120 days after funding was available. Since funds were appropriated in November 2001, the target time frame was March 2002. However, as of the end of August 2003, neither of these key documents had been finalized. Since these documents provide the overall framework for the Program, including operating procedures and lending criteria, lending activities cannot proceed. 
Figure 2 provides a chronology of the key activities pertaining to the Act and its implementation as discussed in the following paragraphs. The Act established the Board for the primary purpose of approving loan guarantees. Further, the Act required, prior to the Board’s approving loan guarantees, that (1) the Board approve regulations prescribed by RUS that provide the overall operating procedures for the Program, and (2) the Board, in consultation with the Director, OMB, and an independent public accounting firm, develop underwriting criteria relating to the guarantees, including appropriate collateral and cash flow levels. Each of these key documents was to be completed 120 days after program funding was provided, which, given the timing of the appropriations, would have been over a year ago. According to Board and RUS officials, three factors contributed to program delays: (1) initial uncertainties over program funding, (2) inadequate dedicated staff resources for program activities, and (3) the decision to issue a proposed rule. Each of these reasons is discussed in the following paragraphs. In the fiscal year 2002 appropriation approved in November 2001, the Congress provided $258 million in initial loan guarantee authority for the Program and $2 million for administrative costs. RUS officials told us that they had deferred action on developing the Program at that time because the $258 million in loan guarantee authority was insufficient to fund the technology needed to implement the Program. In April 2002, RUS issued a Notice of Inquiry in the Federal Register to obtain information needed to assist in drafting the proposed regulations such as changes in technology or new developments in the industry. In the notice, RUS specifically requested comments on the proposed merger of two major DBS providers that, if approved, could have noticeably affected the Program and virtually fulfilled the Act’s purpose. 
However, any substantial movement on the Program was delayed until the Farm Bill was passed on May 13, 2002, when RUS believed that sufficient funding for the Program was available. The Board determined that it needed the $2 million in appropriated funds to procure the statutorily required IPA as well as other outside consultants and experts needed to implement and administer the Program. Therefore, the Working Group members have been supporting the Board as a collateral duty. Because the members have been unable to focus exclusively on Board activities, further program delays resulted. The Board held its first meeting on September 13, 2002, and on September 26, 2002, awarded a $677,000 contract to Ernst and Young, an independent public accounting firm, to assist in drafting the underwriting criteria. As of the end of fiscal year 2002, approximately $1.3 million of the $2 million remained available for contracting with outside consultants. The third contributing factor to the delay of the Program was the Board’s September 2002 decision to issue a proposed rule to provide the public an opportunity to comment on the proposed regulations to ensure that the Program’s objectives and mission were consistent with congressional intent. Although the Act did not explicitly require formal rulemaking procedures, the Board believed it necessary given the complex and precedential issues raised in the statute. On February 7, 2003, the Board submitted the underwriting criteria to OMB for consultation. The first draft of the proposed operating regulations was submitted to OMB on May 5, 2003. OMB approved the draft regulations on August 8, 2003, and the Board issued the proposed rule in the Federal Register on August 15, 2003, with a closing date of September 15, 2003. The Board will issue a final rule after considering and incorporating comments from the public and receiving OMB’s approval of any revisions to the proposed rule.
The Board plans to begin accepting loan guarantee applications once the final rule is issued. The Board stated that it believes this process will begin by February 2004. Total costs of administering the Program, including those incurred by the respective departments and agencies providing support to the Board, were not accumulated and charged to the Program. Statement of Federal Financial Accounting Standard No. 4, Managerial Cost Accounting Standards (SFFAS No. 4) requires federal agencies to capture the costs of federal programs to assist the Congress in authorizing, modifying, and discontinuing programs and to provide agencies with reliable cost data for making informed managerial decisions and evaluating performance. Also, if relevant costs of administering the Program are not accumulated, the Board will not be able to support the establishment of loan application and loan guarantee origination fees that are sufficient to recover, but not exceed, certain costs of administering the Program. According to SFFAS No. 4, costs of federal resources required by programs are an important factor in making policy decisions related to program authorization, modification, and discontinuation. SFFAS No. 4 also states that to fully account for the costs of the goods and services they produce, reporting entities should include the cost of goods and services received from other entities. Further, the standard states that, “Ideally, all inter-entity costs should be recognized. This is especially important when those costs constitute inputs to government goods or services provided to non-federal entities for a fee or user charge. The fees and user charges should recover the full costs of those goods and services.” During fiscal year 2002, the Board did not have a process in place to fully accumulate and report costs, including those of the IPA, the Board, and the Working Group in conformance with SFFAS No. 4.
As mentioned earlier, in fiscal year 2002, the Congress appropriated $2 million for costs to implement the Program, which the Board decided to use exclusively for an IPA and other consulting services. During fiscal year 2002, the Working Group participated in a number of organizational meetings, coordinated the Board’s initial meeting, and participated as technical evaluation staff on the procurement for the IPA. The Working Group also worked with Ernst and Young to develop the underwriting criteria and with the Board to assist in the development of the program regulations and other procurement activities. Because the Board did not request additional funding in fiscal year 2002 to support Working Group activities, the respective departments and agencies of the Working Group members absorbed these costs. We requested that the Board estimate the costs that the Working Group incurred during fiscal year 2002 in support of the Program’s administrative activities. The Board estimated that the Working Group incurred $78,000 in administrative expenses. Table 1 provides a summary of these cost estimates. Without accumulating and reporting the costs of administering the Program, the Board will not comply with SFFAS No. 4 or have the cost information needed to make informed decisions about the Program. The Board acknowledged that if the costs incurred by the Working Group were accumulated and reported, it would more accurately reflect the total cost of this program. More importantly, the Act directed the Board to charge and the Administrator to collect loan guarantee application and origination fees to cover, but not exceed, certain costs of administering the Program such as reviewing and approving applications. The Board has proposed in its draft regulations an application fee of $10,000 to $40,000, depending on the size of the loan, and a loan guarantee origination fee equal to the lesser of 2 percent of the loan amount or $500,000. 
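The proposed fee structure reduces to simple arithmetic. The sketch below is illustrative only: the origination-fee formula (the lesser of 2 percent of the loan amount or $500,000) is taken from the draft regulations as described above, but the application-fee tiers are hypothetical placeholders, since the draft regulations' actual size brackets are not given here.

```python
def origination_fee(loan_amount: float) -> float:
    """Proposed loan guarantee origination fee: the lesser of
    2 percent of the loan amount or $500,000."""
    return min(loan_amount * 0.02, 500_000.0)

def application_fee(loan_amount: float) -> float:
    """Proposed application fee of $10,000 to $40,000, "depending on
    the size of the loan." These brackets are invented for
    illustration; the draft regulations' actual tiers are not given."""
    if loan_amount < 50_000_000:
        return 10_000.0
    if loan_amount < 200_000_000:
        return 25_000.0
    return 40_000.0

# For a $20 million loan, 2 percent ($400,000) is below the cap;
# for a $100 million loan, the $500,000 cap applies.
small_loan_fee = origination_fee(20_000_000)   # 2 percent of the loan
large_loan_fee = origination_fee(100_000_000)  # capped at $500,000
```

Whether fees set this way recover program costs depends, as discussed above, on accumulating the full costs of administration; the fee formulas alone say nothing about cost recovery.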
Without knowing the costs of administering the Program, the Board cannot determine whether the aggregate amount of fees collected is sufficient to recover, but not exceed, certain costs of administering the Program. It is expected that the Board will approve a small number of loans; therefore, it has a limited opportunity to charge the appropriate fees. The LOCAL TV Program has not been implemented within the time frames specified in the LOCAL TV Act. Notwithstanding considerable delays already incurred, it is important that the Board begin to put the Program in operation in an expedient fashion. Further delays in completing regulations and underwriting criteria will postpone lending activities necessary to carry out the Program. Additionally, without instituting cost accounting practices in conformance with federal accounting standards, the Board will not have the information needed to manage and report on the Program or to support the full recovery of certain Program costs. If the Board does not set adequate fees, a government subsidy to program applicants may result. To help ensure future timely implementation of the Program, we recommend that the Board and the Administrator work collaboratively to issue the Program regulations and underwriting criteria in an expeditious manner. To help ensure better program management and that loan application and loan guarantee origination fees are sufficient to fully cover certain costs of administering the Program, we recommend that the Board and the Administrator develop a process to ensure that future costs of the Program are accumulated, documented, and reported in accordance with federal accounting standards and related guidance. In written comments on a draft of this report, the Board described its plans for implementing our recommendations. 
The Board stated that it continues to work with the Administrator and every effort is being made to ensure that the Program regulations and underwriting criteria are issued expeditiously. Further, the Board informed us that as it begins accepting applications, it will ensure that administrative expenses adhere to managerial cost accounting concepts in accordance with federal accounting standards and related guidance. The Board also provided technical comments on our draft report, which we incorporated as appropriate. We are sending copies of this report to the Secretaries of Agriculture, Commerce, and Treasury; the Chairman of the Board of Governors of the Federal Reserve System; members of the Local Television Loan Guarantee Board; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-6906 or by email at williamsm1@gao.gov or Alana Stanfield, Assistant Director, at (202) 512-3197 or stanfielda@gao.gov. Major contributors to this report are acknowledged in appendix II. In addition to those named above, the following individuals made important contributions to this report: Lisa Crye, Jeff Isaacs, Jeff Jacobson, Jason Kelly, Hannah Laufe, and Christina Quattrociocchi.
The LOCAL TV Act required that GAO perform an annual audit of the (1) administration of the provisions of the Act, and (2) financial position of each applicant who receives a loan guarantee under the Act, including the nature, amount, and purpose of investments made by the applicant. In fiscal year 2002, the LOCAL TV Program was funded; however, because it was not fully implemented in that year, there were no loan guarantee applicants for GAO to audit. Therefore, this report primarily addresses whether program administration during fiscal year 2002 satisfied the provisions of the Act. In December 2000, the Congress passed the Launching Our Communities' Access to Local Television Act of 2000 (LOCAL TV Act or Act). The Act created the Local Television Loan Guarantee Program (Program or LOCAL TV Program) and established the Local Television Loan Guarantee Board (Board) to approve guaranteed loans, totaling no more than $1.25 billion, to finance projects that will provide local television access to households with limited over-the-air television broadcast signals or cable service. The Board comprises the Secretary of the Treasury, the Chairman of the Board of Governors of the Federal Reserve System, the Secretary of Agriculture, and the Secretary of Commerce, or their designees. The Department of Agriculture (USDA) Rural Utilities Service serves as Program Administrator (Administrator). The LOCAL TV Program has not been established in an expeditious fashion as specified by the Act. Given that funds were appropriated in November 2001, thus starting the clock on the 120 days allowed for completing program regulations and underwriting criteria, the Program should have been ready for implementation by March 2002. According to the Board and Administrator, three factors contributed to program delays: (1) initial uncertainties over program funding, (2) inadequate dedicated staff resources for program activities, and (3) the decision to issue a proposed rule.
As of the end of August 2003, neither of these key documents, which provide the overall framework for the Program, was ready for implementation, thus delaying lending activities and, ultimately, realization of improved television reception in target areas throughout the United States. Further, the full costs of administering the Program, including those incurred by the respective agencies and departments providing support to the Board, were not accumulated and charged to the Program as called for by federal accounting standards. Statement of Federal Financial Accounting Standard No. 4, Managerial Cost Accounting Standards requires federal agencies to capture the costs of federal programs to assist the Congress in authorizing, modifying, and discontinuing programs and to provide agencies with reliable cost data for making informed managerial decisions and evaluating performance. Further, the capacity to capture these costs going forward is key to fully recovering certain costs of administering the Program through loan application and loan guarantee origination fees.
In our June 2015 and April 2016 reports examining CMS screening procedures, we found weaknesses in CMS’s verification of provider practice location, physician licensure status, providers listed as deceased or excluded from participating in federal programs or health care–related programs, and criminal-background histories. These weaknesses may have resulted in CMS improperly paying thousands of potentially ineligible providers and suppliers. We made recommendations to address these weaknesses, which CMS has indicated it has implemented or is taking steps to address. Additionally, as a result of our work, we referred 597 unique providers and suppliers to CMS. According to CMS officials, they have taken some actions to remove or recover overpayments from the potentially ineligible providers and suppliers we referred to them in April 2015 and April 2016, but CMS’s review and response to the referrals are ongoing. In our June 2015 report, we found thousands of questionable practice location addresses for providers and suppliers listed in PECOS, as of March 2013, and DMEPOS suppliers, listed as of April 2013. Under federal regulations, providers and suppliers must be “operational” to furnish Medicare covered items or services, meaning that they have a qualified physical practice location that is open to the public for the purpose of providing health care–related services. The location must be properly staffed, equipped, and stocked to furnish these items or services. Addresses that generally would not be considered a valid practice location include post office boxes and those associated with a certain type of commercial mail-receiving agency (CMRA), such as a United Parcel Service (UPS) store. We checked PECOS practice location addresses for all records that contained an address using the USPS address-management tool, a commercially available software package that standardizes addresses and provides specific flags on the address such as a CMRA, vacant, or invalid address.
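Conceptually, the screening step described above partitions enrollment records by the flags that an address-validation tool returns. The following Python sketch is purely illustrative: the record structure and flag names are invented stand-ins, not the actual output of the commercial USPS address-management tool.

```python
from dataclasses import dataclass

@dataclass
class AddressRecord:
    provider_id: str
    address: str
    flag: str  # hypothetical validation flag: "cmra", "vacant",
               # "invalid", or "deliverable"

# Flags that would mark an address as a potentially ineligible
# practice location under the criteria described above.
INELIGIBLE_FLAGS = {"cmra", "vacant", "invalid"}

def potentially_ineligible(records):
    """Return the records whose flag indicates the address is
    unlikely to be a qualified physical practice location."""
    return [r for r in records if r.flag in INELIGIBLE_FLAGS]

records = [
    AddressRecord("P1", "123 Main St Suite 4", "cmra"),  # mailbox store
    AddressRecord("P2", "55 Oak Ave", "deliverable"),
    AddressRecord("P3", "9 Elm St", "vacant"),
]
print([r.provider_id for r in potentially_ineligible(records)])  # ['P1', 'P3']
```

Flagged records would then require further review (such as a site visit), since a flag alone does not prove the location is ineligible.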
As illustrated in figure 1, on the basis of our analysis of a generalizable stratified random sample of 496 addresses, we estimate that about 23,400 (22 percent) of the 105,234 addresses we initially identified as a CMRA, vacant, or invalid address are potentially ineligible addresses. About 300 of the addresses were CMRAs, 3,200 were vacant properties, and 19,900 were invalid. Of the 23,400 potentially ineligible addresses submitted as practice locations, we estimate that, from 2005 to 2013, about 17,900 had no claims associated with the address, 2,900 were associated with providers that had claims that were less than $500,000 per address, and 2,600 were associated with providers that had claims that were $500,000 or more per address. Because some providers are associated with more than one address, it is possible that some of the claim amounts reported may be associated with a different, valid practice location. Due to how we compiled claims by the National Provider Identifier, we were unable to determine how much, if any, of the claim amount may be associated with a different, valid address. In our June 2015 report, we found limitations with CMS’s Finalist software used to validate practice location addresses. The Finalist software is one technique used by the Medicare Administrative Contractors (MAC) and the National Supplier Clearinghouse (NSC) to validate a practice location. According to CMS, Finalist is integrated into PECOS to standardize addresses and does so by comparing the address listed on the application to USPS records and correcting any misspellings in street and city names, standardizing directional markers (such as NE or West) and suffixes (such as Ave. or Lane), and correcting errors in the zip code. However, the Finalist software does not indicate whether the address is a CMRA, vacant, or invalid address—in other words, whether the location is potentially ineligible to qualify as a legitimate practice location. 
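The extrapolation from 496 sampled addresses to the estimate of about 23,400 potentially ineligible addresses follows the standard logic of a stratified random sample: scale each stratum's sample proportion up to its stratum population, then sum across strata. The stratum figures below are invented for illustration; only the totals of 496 sampled and 105,234 population addresses match the text, and GAO's actual strata, counts, and weighting are not given here.

```python
# (stratum population N, sampled n, confirmed potentially ineligible x).
# These counts are hypothetical; only the column totals (105,234 and 496)
# match the figures reported above.
strata = [
    (60_000, 250, 40),
    (30_000, 150, 45),
    (15_234, 96, 30),
]

assert sum(n for _, n, _ in strata) == 496       # total sampled
assert sum(N for N, _, _ in strata) == 105_234   # total population

# Stratified estimate of the number of potentially ineligible addresses:
# each stratum's sample rate (x/n) scaled up to its population (N).
estimate = sum(N * x / n for N, n, x in strata)
print(round(estimate))  # 23361 under these invented counts
```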
CMS does not have these flags in Finalist because the agency added coding in PECOS that prevents post office box addresses from being entered, and believed that this step would prevent these types of ineligible practice locations from being accepted. Further, some CMRA addresses are not listed as post office boxes. For example, in our June 2015 report we identified 46 out of the 496 sample addresses that were allowed to enroll in Medicare with a practice location that was inside a mailing store similar to a UPS store. These providers’ addresses did not appear in PECOS as a post office box, but instead were listed as a suite or other number, along with a street address. Figure 2 shows an example of one provider we identified through our search and site visits as using a mailbox-rental store as its practice location and where services are not actually rendered. This provider’s address appears as having a suite number in PECOS and remained in the system as of January 2015. According to our analysis of CMS records, this provider was paid approximately $592,000 by Medicare from the date it enrolled in PECOS with this address to December 2013, which was the latest date for which CMS had claims data at the time of our review. Our June 2015 report also found locations that were vacant or addresses that belonged to an unrelated establishment. For example, we visited a provider’s stated practice location in December 2014 and instead found a fast-food franchise there (see fig. 3—the name of the franchise has been blurred). In addition, we found a Google Maps image dated September 2011 that shows this specific location as vacant. Although the provider was not paid by Medicare from the date this practice location address was flagged as vacant, by remaining actively enrolled into PECOS, the provider may be eligible to bill Medicare in the future. 
In March 2014, CMS issued guidance to the MACs that revised the practice location verification methods by requiring MACs only to contact the person listed in the application to verify the practice location address and to use the Finalist software that is integrated in PECOS to standardize the practice location address. Additional verification, such as using 411.com and USPS.com, which was required under the previous guidance, is needed only if Finalist cannot standardize the actual address. In our June 2015 report, we noted that our findings suggest that the revised screening procedure of contacting the person listed in the application to verify all of the practice location addresses may not be sufficient to verify such practice locations. For example, two providers in our sample of 496 addresses that the USPS address-management tool flagged as CMRA, invalid, or vacant successfully underwent a MAC revalidation process in 2014. The MAC used the new procedure of calling the contact person to verify the practice location. Each of these two providers had a UPS or similar store as its practice location. To help further improve CMS’s enrollment-screening procedures to verify applicants’ practice locations, we made two recommendations to CMS in our June 2015 report. First, we recommended that CMS modify the CMS software integrated into PECOS to include specific flags to help identify potentially questionable practice location addresses, such as CMRA, vacant, and invalid addresses. The agency concurred with this recommendation. On May 16, 2016, CMS provided us with supporting documentation showing that the agency replaced its PECOS address verification software with one that includes Delivery Point Verification (DPV)—which is similar to the software we used when conducting the work for the June 2015 report—in addition to the existing functionality. According to CMS officials, this new DPV functionality flags addresses that may be CMRA, vacant, or invalid. 
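The kind of flagging a DPV-style check performs can be illustrated with a minimal sketch. Everything below is hypothetical: the lookup table, flag names, and addresses are invented for illustration, and real DPV software queries live USPS address data rather than a static table.

```python
# Hypothetical USPS-style records: standardized address -> attributes a
# delivery-point-verification tool might return. Invented for illustration.
USPS_RECORDS = {
    "123 MAIN ST STE 4, SPRINGFIELD, IL 62701": {"cmra": True, "vacant": False},
    "800 OAK AVE, SPRINGFIELD, IL 62701": {"cmra": False, "vacant": True},
    "55 ELM ST, SPRINGFIELD, IL 62701": {"cmra": False, "vacant": False},
}

def flag_address(standardized_address):
    """Return why an address may be ineligible ('cmra', 'vacant', or
    'invalid'), or None if no flag applies."""
    record = USPS_RECORDS.get(standardized_address)
    if record is None:
        return "invalid"   # no deliverable USPS record exists for this address
    if record["cmra"]:
        return "cmra"      # commercial mail receiving agency (e.g., a mailbox store)
    if record["vacant"]:
        return "vacant"    # deliverable but unoccupied location
    return None            # no flag: address may qualify as a practice location
```

Note that a CMRA address can carry a suite number and a deliverable street address, which is why a filter that blocks only post office boxes does not catch it.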
By updating the address verification software, CMS can better ensure that providers with ineligible practice locations are not listed in PECOS. Second, we recommended in our June 2015 report that CMS revise its guidance for verifying practice locations to include, at a minimum, the requirements contained in the guidance in place prior to March 2014. Such a revision would require that MACs conduct additional research, beyond phone calls to applicants, on the practice location addresses that are flagged as a CMRA, vacant, or invalid address to better ensure that the address meets CMS’s practice location criteria. The agency did not concur with this recommendation, stating that the March 2014 guidance was sufficient to verify practice locations. However, our audit work shows that additional checks on addresses flagged by the address-matching software as a CMRA, vacant, or invalid can help verify whether the addresses are ineligible. As our report highlighted, we identified providers with potentially ineligible addresses that were approved by MACs using the process outlined in the existing guidance. Therefore, we continue to believe that the agency should update its guidance for verifying potentially ineligible practice locations. In February 2016, CMS officials told us that, as part of configuring the PECOS address verification software to include the DPV functionality and flag CMRAs, vacancies, invalid addresses, and other potentially questionable practice locations, the agency plans to validate the DPV results through site visits and follow its current process to take administrative action if the results are confirmed. CMS officials told us that they believe the second recommendation will be addressed by implementing the first recommendation (incorporating software flags) and, if necessary, revising the agency’s guidance for verifying potentially ineligible practice locations. 
As of May 17, 2016, CMS had not provided us with details and supporting documentation of how it will revise its guidance. Accordingly, it is too early for us to determine whether the agency’s actions would fully address the intent of the recommendation. We plan to continue to monitor the agency’s efforts in this area. CMS has taken some actions to remove or recover overpayments from potentially ineligible providers and suppliers that we referred to it, based on our June 2015 report. On April 29, 2015, we referred 286 unique providers to CMS for further review and action as a result of our identification of providers with potentially ineligible practice location addresses. From August 2015 to May 2016, CMS has provided updates on these referrals. On the basis of our analysis of CMS’s updates, CMS has taken administrative action to remove the provider or collect funds for 29 of the providers; corrected the invalid addresses for 70; determined that the questionable location was actually valid for 84; and determined that the provider had already been removed from the program for 102. However, CMS did not take action on 1 provider because it was unable to find the practice location for this provider. In our June 2015 report, we found 147 out of about 1.3 million physicians with active PECOS profiles had received a final adverse action from a state medical board, as of March 2013. Adverse actions include crimes against persons, financial crimes, and other types of health care–related felonies. These individuals were either not revoked from the Medicare program until months after the adverse action or never removed (see fig. 4). All physicians applying to participate in the Medicare program must hold an active license in the state in which they plan to practice and must also self-report final adverse actions, which include a license suspension or revocation by any state licensing authority. 
CMS requires MACs to verify final adverse actions that the applicant self-reported on the application directly with state medical board websites. We found that, because physicians are required to self-report adverse actions, the MACs did not always identify unreported actions when enrolling, revalidating, or performing monthly reviews of the provider. As a result, 47 of the 147 physicians we identified as having adverse actions were paid approximately $2.6 million by the Medicare program during the time, between March 29, 2003, and March 29, 2013, in which CMS could potentially have barred them from the program. Some of the adverse actions that were unreported by physicians occurred within the state where the provider enrolled in PECOS, while others occurred in different states. For example, we identified a physician who initially enrolled into Medicare in 1985 and was suspended for about 5 months in 2009 by the Rhode Island medical board. In 2011, his information was revalidated by the MAC. This provider did not self-report the adverse action, and the MAC did not identify it during its monthly reviews or when revalidating the provider’s information. CMS bars for 1 year providers already enrolled in Medicare who do not self-report adverse actions. This individual billed Medicare for about $348,000 during the period in which he should have been deemed ineligible. CMS officials highlighted that delays in removing physicians from Medicare may occur due to MAC backlogs, delays in receipt of data from primary sources, or delays in the data-verification process. In March 2014, CMS began efforts to improve the oversight of physician license reviews by providing the MACs with a License Continuous Monitoring report, which was a good first step. However, the report only provides MACs with the current status of the license that the provider used to enroll in the Medicare program. 
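The gap this creates can be sketched in miniature: a check that looks across every license a provider holds, in any state, rather than only the license used to enroll. The data shapes, identifiers, and entries below are hypothetical and stand in for the licensing and disciplinary databases described in this testimony.

```python
# Hypothetical data: National Provider Identifier -> (state, license number)
# pairs for every license a provider holds. Invented for illustration.
LICENSES_BY_NPI = {
    "1234567890": [("RI", "MD12345"), ("MA", "MD99999")],
}

# Hypothetical disciplinary records: (state, license number) -> action.
ADVERSE_ACTIONS = {
    ("RI", "MD12345"): "license suspension",
}

def adverse_actions_for(npi):
    """Return every disciplinary action found across all of a provider's
    licenses, not just the license used to enroll."""
    hits = []
    for state, license_no in LICENSES_BY_NPI.get(npi, []):
        action = ADVERSE_ACTIONS.get((state, license_no))
        if action is not None:
            hits.append((state, license_no, action))
    return hits
```

A monitoring report that tracked only one enrollment license (say, the MA license above) would miss the Rhode Island suspension that this cross-state check surfaces.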
Without collecting license information on all medical licenses, regardless of the state the provider enrolled in, we concluded that CMS may be missing an opportunity to identify potentially ineligible providers who have license revocations or suspensions in other states, which can put Medicare beneficiaries at risk. To help improve the Medicare provider enrollment-screening procedures, in our June 2015 report we recommended that CMS require applicants to report all license information, including that obtained from other states; expand the License Continuous Monitoring report to include all licenses; and, at least annually, review databases, such as that of the Federation of State Medical Boards (FSMB), to check for disciplinary actions. The agency concurred with the recommendation, but stated that it does not have the authority to require providers to report licenses for states in which they are not enrolled. While providers are not currently required to list out-of-state license information in the enrollment application, CMS can independently collect this information by using other resources. Therefore, we clarified our recommendation to state that CMS should collect information on all licenses held by providers that enroll into PECOS by using data sources that contain this information, which is similar to the steps that we took in our own analyses. In February 2016, CMS officials told us that CMS will take steps to ensure that all applicants’ licensure information is evaluated as part of the screening process by MACs and the License Continuous Monitoring report, as appropriate, and will also regularly review other databases for disciplinary actions against enrolled providers and suppliers. In May 2016, CMS officials stated that CMS has established a process to annually review databases and has incorporated the FSMB database into its screening process. On May 19, 2016, CMS officials provided us with supporting documentation that shows that the FSMB database was incorporated into its automatic screening process. 
By incorporating the FSMB database into its automatic screening process, CMS will be able to regularly check this database for licensure updates and disciplinary actions against enrolled providers and suppliers, as well as to collect all license information held by providers that apply to enroll in PECOS. On April 29, 2015, we referred the 147 unique providers to CMS for further review and action as a result of our identification of revoked licenses. On the basis of our analysis of CMS’s updates as of May 2016, CMS has taken administrative action to remove the provider or collect funds for 21 providers; determined that the provider had already been removed from the program; determined that the adverse actions were disclosed or partially disclosed for 71; and has ongoing reviews of 6. CMS did not take action on 1 provider because it was unable to find the adverse action for this provider. In our June 2015 report, we found that about 460 (0.03 percent) of the 1.7 million unique providers and suppliers in PECOS as of March 2013 and durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS) suppliers as of April 2013 were identified as deceased at the time of the data we reviewed. The MAC or CMS identified 409 of the 460 providers and suppliers as deceased from March 2013 to February 2015. Additionally, 38 of the 460 providers and suppliers we found to be deceased were paid a total of about $80,700 by Medicare for services performed after their date of death until December 2013, which was the most recent date for which CMS had Medicare claims data available at the time of our review. Not identifying a provider or supplier as deceased in a timely manner exposes the Medicare program to potential fraud. It is unclear what caused the delay or omission by CMS and the MACs in identifying these individuals as deceased or how many overpayments they are in the process of recouping. 
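The overpayment amounts cited throughout this testimony reflect claims paid during a window in which the provider was potentially ineligible: between a disqualifying event (a date of death, a conviction, or an exclusion) and the provider's removal from the program, if any. A minimal sketch of that calculation, using hypothetical claim data:

```python
from datetime import date

def potential_overpayment(claims, ineligible_from, removed_on=None):
    """Sum claim amounts paid on or after the disqualifying date and, if the
    provider was eventually removed, before the removal date.

    claims: iterable of (payment date, amount) pairs -- hypothetical data."""
    return sum(
        amount
        for paid_on, amount in claims
        if paid_on >= ineligible_from and (removed_on is None or paid_on < removed_on)
    )

# Hypothetical claims for one provider: (payment date, amount paid).
claims = [
    (date(2012, 1, 15), 400.0),
    (date(2012, 9, 3), 250.0),
    (date(2013, 6, 1), 100.0),
]
# Ineligible from Jan 1, 2012; removed Jan 1, 2013: only the first two
# claims fall inside the window.
total = potential_overpayment(claims, date(2012, 1, 1), date(2013, 1, 1))
```

When a provider is never removed (removed_on is None), every claim after the disqualifying date counts, which mirrors the cases above where providers remained enrolled after an adverse event.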
On April 29, 2015, we referred 82 unique providers to CMS for further review and action as a result of our identification of providers whose status was deceased. From August 2015 to May 2016, CMS has provided updates on these referrals. On the basis of our analysis of CMS’s updates, CMS has taken administrative action to remove the provider for 4 of the providers; determined that the provider had already been removed from the program; determined that the provider had already been removed from the program but updated the provider’s PECOS profile to reflect the date of death for 22; and started but not completed the review on 31 providers that were reported to be deceased and had submitted claims for payments. We found in our June 2015 report that about 40 (0.002 percent) of the 1.7 million unique providers and suppliers enrolled in PECOS were listed in the HHS Office of Inspector General’s (OIG) List of Excluded Individuals/Entities (LEIE), as of March 2013. These individuals were excluded from participating in health care–related programs. Of those 40 excluded providers and suppliers, 16 were paid approximately $8.5 million by Medicare for services rendered after their exclusion date until the MAC or the NSC found them to be excluded. When we followed up with the MACs in September and October 2014, we found that the MACs had removed 38 of the 40 providers and suppliers from PECOS from March 2013 to October 2014. However, for two matches that we identified, the MACs had not taken any action. Given the small number of cases identified (40) and the MACs’ removal of 38 of these 40 providers during our review, we did not make a recommendation to CMS. On April 29, 2015, we referred the two providers that the MACs did not remove, as well as the 16 providers that were paid $8.5 million by Medicare for services rendered after their exclusion date, to CMS for further review and action. From August 2015 to May 2016, CMS has provided updates on these referrals. 
On the basis of our analysis of CMS’s updates, CMS did not take action on 2 providers because it deemed the providers eligible. Further, CMS has not completed the review on 14 providers that were reported to be excluded and had submitted claims for payments. As part of CMS’s enrollment-screening process, CMS has controls in place to verify criminal-background information for providers and suppliers in PECOS. CMS may deny or revoke a provider’s or supplier’s enrollment in the Medicare program if, within the 10 years before enrollment or revalidation of enrollment, the provider, supplier, or any owner or managing employee of the provider or supplier was convicted of a federal or state felony offense, including certain felony crimes against persons, that CMS has determined to be detrimental to the best interests of the program and its beneficiaries. In our April 2016 report, we found that 16 of the 66 potentially ineligible providers we identified with criminal backgrounds received $1.3 million in potential overpayments. These providers were convicted of drug- and controlled-substance offenses, health-care fraud, mail and wire fraud, or sex-related offenses and were enrolled in Medicare before CMS had implemented more-extensive background check processes in April 2014. Before CMS revised procedures for reviewing the criminal backgrounds of existing and prospective Medicare providers and suppliers in April 2014, the agency relied on verifying applicants’ self-reported adverse legal actions by checking whether providers and suppliers had previously lost their licenses because of a conviction, such as for a crime against a person. CMS also checked whether the HHS OIG had excluded providers and suppliers from participating in federal health-care programs. According to CMS, it also relied on Zone Program Integrity Contractors (ZPICs) to identify providers and suppliers with a conviction history. 
However, CMS did not always have access to federal or state offense information that identified the cause of a provider’s or supplier’s license suspension or exclusion from participating in federal health-care programs, which could have led to an earlier ineligibility date. In our April 2016 report, we found 52 providers whose offenses occurred before the removal effective date that was provided to us by the MACs and 14 additional providers that CMS did not remove. As mentioned earlier, of these 66 providers, 16 were paid about $1.3 million by Medicare through the fee-for-service program. Specifically, 10 providers were paid about $1.1 million between the time they were initially convicted of a crime and the time that they were officially removed from the program, and six other providers that were not removed were paid about $195,000 during the year after their conviction. We referred all 66 cases to CMS for further review and requested an initial status update on these providers by June 20, 2016. On May 16, 2016, CMS stated that it determined that 52 of the providers had already been deactivated or revoked; however, our report indicated that, although these providers were deactivated or revoked, the effective removal dates needed review. Further, CMS indicated that it will continue to review these providers to determine whether additional updates or actions are needed, since we found that these providers had offenses that occurred before the removal effective date that was provided by the MACs. CMS also informed us that it will continue to review the remaining 14 providers. Additionally, in April 2016, we reported that in April 2014 CMS implemented steps that provide more information on the criminal backgrounds of existing and prospective Medicare providers and suppliers than it obtained previously. Specifically, CMS supplemented its criminal-background controls by screening provider and supplier criminal backgrounds through an automated screening process. 
Under this revised process, MACs are to review an applicant’s self-reported license information and whether the applicant has been excluded from participating in federal health-care programs. In addition, CMS receives information from ZPICs, which provide a conviction history on providers and suppliers they investigate. The automated-screening contractor is to supplement these controls by conducting criminal-background checks on providers, suppliers, and organization principals (i.e., individuals with 5 percent or more ownership in the business). The contractor uses third-party vendor applications available to the public to conduct the criminal-background checks. As a result, CMS and its contractors obtain greater access to data about federal and state offenses and the ability to conduct a more-comprehensive review of provider and supplier criminal backgrounds than in the past. Chairman Murphy, Ranking Member DeGette, and Members of the subcommittee, this concludes my prepared remarks. I look forward to answering any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-6722 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony were Latesha Love, Assistant Director; Gloria Proa; Ariel Vega; and Georgette Hagans. Additionally, Marcus Corbin, Colin Fallon, and Maria McMullen provided technical support; Shana Wallace, Jim Ashley, and Melinda Cordero provided methodological guidance; and Brynn Rovito and Barbara Lewis provided legal counsel. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In fiscal year 2015, Medicare paid $568.9 billion for health care and related services. CMS estimates that $59.6 billion (about 10.5 percent) of that total was paid improperly. To establish and maintain Medicare billing privileges, providers and suppliers must be enrolled in a CMS database known as PECOS. About 1.9 million providers and suppliers were in PECOS as of December 2015, according to CMS. GAO published reports in June 2015 and April 2016 that examined Medicare's provider and supplier enrollment-screening procedures to determine whether PECOS was vulnerable to fraud. This testimony discusses the extent to which selected enrollment-screening procedures are designed and implemented to prevent and detect the enrollment of ineligible or potentially fraudulent Medicare providers and suppliers into PECOS. In its reports, GAO matched providers and suppliers in PECOS, as of March 2013, to several databases to identify potentially ineligible providers and suppliers, and used Medicare claims data to verify whether they were paid during this period. GAO also examined relevant documentation, interviewed CMS officials, and obtained information from the CMS contractors that evaluate provider applications. From August 2015 through May 2016, GAO obtained updated information from CMS staff and reviewed documents related to these actions. CMS has taken or has plans to take some actions to address all of GAO's recommendations and referrals of potentially ineligible providers and suppliers. In June 2015 and April 2016, GAO reported on CMS's implementation of enrollment-screening procedures that the Centers for Medicare & Medicaid Services (CMS) uses to prevent and detect ineligible or potentially fraudulent providers and suppliers from enrolling into its Provider Enrollment, Chain and Ownership System (PECOS). GAO identified weaknesses in CMS's verification of provider practice location, physician licensure status, and criminal-background histories. 
These weaknesses may have resulted in CMS improperly paying thousands of potentially ineligible providers and suppliers. Specifically, in June 2015, GAO's examination of 2013 data found that about 23,400 (22 percent) of the 105,234 practice location addresses it reviewed were potentially ineligible. The computer software CMS used as a method to validate applicants' addresses did not flag potentially ineligible addresses, such as those of a Commercial Mail Receiving Agency (such as a UPS store mailbox) or those that are vacant or invalid. GAO recommended that CMS incorporate flags into its software to help identify potentially questionable addresses, among other things. CMS concurred with this recommendation and has replaced the PECOS address verification software. Also, in June 2015, GAO found that, as of March 2013, 147 out of about 1.3 million physicians listed in PECOS had received a final adverse action against their medical license from a state medical board for various felonies that may have made them ineligible to bill Medicare. However, they were either not revoked from the Medicare program until months after the adverse action or never removed because CMS collected information only on the medical license numbers providers used to enroll into the Medicare program. CMS also did not collect adverse-action history or information on other medical licenses a provider may have held in states that were not used to enroll into Medicare. GAO recommended that CMS collect and review additional license information. CMS has incorporated a new database to obtain additional license history. In April 2016, GAO reported on CMS's process to conduct criminal-background checks on Medicare providers and suppliers and found that opportunities exist for CMS to recover about $1.3 million in potential overpayments made to 16 of 66 potentially ineligible providers with criminal backgrounds. 
In April 2014, CMS implemented procedures that give it greater access to data for verifying the criminal backgrounds of existing and prospective Medicare providers and suppliers than it had previously; however, the results of GAO's review of the 2013 data identified an opportunity for CMS to recover potential overpayments that were made before the revised procedures were put in place. In addition to its actions in response to GAO's recommendations, CMS has taken some actions to remove or recover overpayments from potentially ineligible providers and suppliers that GAO referred to it in April 2015 and April 2016, but its review and response to the referrals are ongoing.
For many years we have advocated the use of a risk management approach that entails managing risk through actions, including setting strategic goals and objectives, assessing risk, allocating resources based on risk, evaluating alternatives, selecting initiatives to undertake, and implementing and monitoring those initiatives. Risk assessment, an important element of a risk management approach, helps decision makers identify and evaluate potential risks so that countermeasures can be designed and implemented to prevent or mitigate the effects of the risks. FPS meets its mission to protect GSA’s federal facilities by assessing the risks that face those facilities and identifying the appropriate countermeasures to mitigate those risks. Despite the importance of this mission, FPS has not implemented an effective risk management program. In August 2010, we reported that FPS does not use a comprehensive risk management approach that links threats and vulnerabilities to resource requirements. Instead, FPS uses a facility-by-facility approach to risk management: we reported in 2010 that FPS assumes that all facilities with the same security level have the same risk regardless of their location. For example, a level IV facility in a metropolitan area is generally treated the same as one in a rural area. This building-by-building approach prevents FPS from comprehensively identifying risk across the entire portfolio of GSA’s facilities and allocating resources based on risk. Both our and DHS’s risk management frameworks include processes for assessing comprehensive risk across assets in order to prioritize countermeasures based on the overall needs of the system. In response to our recommendations in this area, FPS began developing a new system, the Risk Assessment and Management Program (RAMP). 
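The portfolio-wide prioritization that both frameworks call for can be sketched in a few lines. This is an illustrative model only, not FPS's actual methodology: the facilities, scores, and the common risk-as-product formula are all hypothetical stand-ins for a real threat, vulnerability, and consequence assessment.

```python
# Illustrative sketch: score risk per facility from assessed threat,
# vulnerability, and consequence (each on a hypothetical 0-1 scale), then
# rank the whole portfolio so countermeasure resources can be allocated by
# risk rather than facility by facility.

def risk_score(threat, vulnerability, consequence):
    """A common simple model: risk as the product of the three factors."""
    return threat * vulnerability * consequence

# Hypothetical portfolio: facility -> (threat, vulnerability, consequence).
# Note that two facilities with the same security level (level IV) can
# carry different risk once location-specific threat is assessed.
facilities = {
    "metro level IV": (0.8, 0.6, 0.9),
    "rural level IV": (0.3, 0.6, 0.9),
    "metro level II": (0.5, 0.4, 0.3),
}

# Highest-risk facilities first -- the ordering a portfolio-wide
# allocation would follow.
ranked = sorted(facilities, key=lambda f: risk_score(*facilities[f]), reverse=True)
```

Under a facility-by-facility approach, the two level IV facilities above would be treated identically; the portfolio ranking instead places the metropolitan facility first.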
According to FPS, RAMP will support all components of the risk assessment process, including gathering and reviewing building information; conducting and recording interviews with GSA and tenant agencies; assessing threats, vulnerabilities, and consequences to develop a detailed risk profile; recommending appropriate countermeasures; and producing facility security assessment (FSA) reports. FPS also plans to use RAMP to track and analyze workforce data, contract guard program data, and other performance data, such as the types and definitions of incidents and incident response times. We are finalizing our ongoing review of FPS’s efforts to develop and implement RAMP as well as FPS’s transition to DHS’s National Protection and Programs Directorate (NPPD) and expect to report on these issues soon. Over the last 3 years we have reported on the challenges FPS has faced in the human capital area since moving to DHS from GSA in 2003. As mandated by Congress, in 2009 FPS increased the size of its workforce to 1,200 full-time employees. However, FPS continues to operate without a strategic human capital plan. We recommended in 2009 that FPS develop a human capital plan to guide its current and future workforce planning efforts. We have identified human capital management as a high-risk issue throughout the federal government, including within DHS. A human capital plan is important to both align FPS’s human capital program with current and emerging mission and programmatic goals, and develop effective processes for training, retention, and staff development. In 2009, we reported that the absence of such a plan has contributed to inconsistent human capital activities among FPS regions and headquarters, as several regions told us they have implemented their own processes for performance feedback, training, and mentoring. 
In addition, we found that FPS’s workforce planning is limited because FPS headquarters does not collect data on its workforce’s knowledge, skills, and abilities. Without such information, FPS is not able to determine what its optimal staffing levels should be or identify gaps in its workforce needs and determine how to modify its workforce planning strategies to fill these gaps. FPS concurred with our recommendation and drafted a workforce analysis plan in June 2010. According to FPS, the plan must be reviewed by the Office of Management and Budget (OMB) before it is subject to approval by the Secretary of Homeland Security. FPS also has yet to fully ensure that its recent move to an inspector-based workforce does not hinder its ability to protect federal facilities. In 2007, FPS essentially eliminated its police officer position and moved to an all inspector-based workforce. FPS also decided to place more emphasis on physical security activities, such as completing FSAs, and less emphasis on law enforcement activities, such as proactive patrol. We reported in 2008 that these changes may have contributed to diminished security and increases in inspectors’ workload. Specifically, we found that when FPS is not providing proactive patrol at some federal facilities, there is an increased potential for illegal entry and other criminal activity. Moreover, under its inspector-based workforce approach, FPS is relying more on local police departments to handle crime and protection issues at federal facilities; however, we previously reported that at approximately 400 federal facilities across the United States, local police may not have the authority to respond to incidents inside those facilities. We recommended in 2008 that FPS clarify roles and responsibilities of local law enforcement agencies in responding to incidents at GSA facilities. 
While FPS agreed with this recommendation, FPS has decided not to pursue agreements with local law enforcement officials, in part because of local law enforcement officials’ reluctance to sign such agreements. In addition, FPS believes that the agreements are not necessary because 96 percent of the properties in its inventory are listed as concurrent jurisdiction facilities where both federal and state governments have jurisdiction over the property. Nevertheless, we continue to believe that these agreements would, among other things, clarify roles and responsibilities of local law enforcement agencies when responding to crime or other incidents. We are currently reviewing to what extent FPS is coordinating with state and local police departments to ensure adequate protection of federal facilities and will issue a report next year. FPS’s contract guard program is the most visible component of the agency’s operations, and the agency relies on its guards to be its “eyes and ears” while performing their duties. Guards are responsible for controlling access to federal facilities by checking the identification of government employees and the public who enter federal facilities, and operating security equipment to screen for prohibited items. Since 2009, we have identified weaknesses in FPS’s contract guard program that hamper its ability to protect federal facilities. For example, we reported in 2009 and in 2010 that FPS does not have a reliable system to ensure that its 13,000 guards have the training and certifications required to stand post at federal facilities or comply with post orders once they are deployed. In 2009, we also identified substantial security vulnerabilities related to FPS’s guard program. 
In April and May 2009, GAO investigators conducted covert tests and were able to successfully pass components of an improvised explosive device (IED) concealed on their persons through security checkpoints monitored by FPS guards at 10 Level IV facilities in 4 major metropolitan areas. In addition, FPS’s penetration testing—similar to our covert testing—shows that guards continue to have problems with detecting prohibited items. For example, in March 2011, FPS contract guards allowed components for an active bomb to remain in a Level IV federal building in Detroit, Michigan, for 3 weeks before a bomb squad was called to remove them. We also found in 2010 that although some guard contractors did not comply with the terms of their contracts, FPS did not take any enforcement action against them. According to FPS guard contracts, a contractor has not complied with the terms of the contract if, for example, the contractor has a guard working without valid certifications or background suitability investigations, or falsifies a guard’s training records. If FPS determines that a contractor does not comply with these contract requirements, it can—among other things—assess a financial deduction for nonperformed work, elect not to exercise a contract option, or terminate the contract for default or cause. We reviewed the official contract files for the 7 contractors who, as we testified in July 2009, had guards performing on contracts with expired certification and training records to determine what action, if any, FPS had taken against these contractors for contract noncompliance. According to the documentation in the contract files, FPS did not take any enforcement action against the contractors for not complying with the terms of the contract. Instead, FPS exercised the option to extend the contracts for these 7 contractors. 
Additionally, although FPS requires an annual performance evaluation of each guard contractor, as well as an evaluation at the conclusion of any contract exceeding $100,000, FPS did not always evaluate the performance of its contractors as required, and some evaluations were incomplete and inconsistent with contractors' performance. In response to our recommendations, FPS has taken several steps to improve the oversight of its contract guard program. Since July 2009, FPS has increased its penetration tests in some regions and the number of guard inspections it conducts at federal facilities in some metropolitan areas. Additionally, FPS began the process of providing additional x-ray and magnetometer training for its workforce. Under the new requirement, inspectors must receive 30 hours of x-ray and magnetometer training and guards are required to take 16 hours; previously, guards were required to receive only 8 hours of such training. Finally, FPS expects to use RAMP, once it is developed, to determine whether its 13,000 guards have met its training and certification requirements and to conduct guard inspections. As stated earlier, we are finalizing our review of FPS's RAMP. We reported in May 2011 that FPS increased its basic security fee 4 times in 6 years to try to cover costs (an increase of over 100 percent). However, FPS has not reviewed its fees to develop an informed, deliberate fee design. We found that timely, substantive fee reviews are especially critical for fee-funded agencies to ensure that fee collections and operating costs remain aligned. FPS has broad authority to design its security fees, but the current fee structure has consistently resulted in total collection amounts less than agency costs, is not well understood or accepted by tenant agencies, and continues to be a topic of congressional interest and inquiry.
In 2008, we recommended that FPS evaluate whether its use of a fee-based system or an alternative funding mechanism is the most appropriate manner to fund the agency. Although FPS agreed with this recommendation, it has not begun such an analysis. Based on our updated work in 2011, we recommended that such an analysis include the examination of both alternative fee structures and a combination of fees and appropriations, as well as the options and trade-offs discussed in our 2011 report. FPS agreed with this recommendation. We have reported that FPS is limited in its ability to assess the effectiveness of its efforts to protect federal facilities. To determine how well it is accomplishing its mission to protect federal facilities, FPS has identified some output measures. These measures include whether security countermeasures have been deployed and are fully operational, the amount of time it takes to respond to an incident, and the percentage of FSAs completed on time. As we reported in 2010, while output measures are helpful in assessing performance, outcome measures can provide FPS with broader information on program results, such as the extent to which its decision to move to an inspector-based workforce will enhance security at federal facilities. Outcome measures could also help identify the security gaps that remain at federal facilities and determine what action may be needed to address them. In addition, we reported in 2010 that FPS does not have a reliable data management system that will allow it to accurately track these measures or other important measures, such as the number of crimes and other incidents occurring at GSA facilities. Without such a system, it is difficult for FPS to evaluate and improve the effectiveness of its efforts to protect federal employees and facilities, allocate its limited resources, or make informed risk management decisions.
For example, weaknesses in one of FPS's countermeasure tracking systems make it difficult to accurately track the implementation status of recommended countermeasures such as security cameras and x-ray machines. Without this ability, FPS has difficulty determining whether it has mitigated the risk to federal facilities of crime or a terrorist attack. FPS concurred with our recommendations and stated that its efforts to address them will be completed in 2012, when its automated information systems are fully implemented. FPS has begun several initiatives that, once fully implemented, should enhance its ability to protect the more than 1 million federal employees and members of the public who visit federal facilities each year. Since 2008, we have made 28 recommendations to help FPS address its challenges with risk management, strategic human capital planning, oversight of its contract guard workforce, and its fee-based funding structure. DHS and FPS have generally agreed with these recommendations. As of July 2011, as shown in Table 1, FPS was in the process of addressing 21 of them, although none had been fully implemented. Of the remaining 7, 5 were recommendations from our May 2011 report, and we would not necessarily expect them to be fully implemented yet. According to FPS officials, the agency has faced difficulty in implementing many of our recommendations because of changes in its leadership, organization, funding, and staffing levels. In addition, FPS officials stated that the agency's progress in implementing our recommendations has been affected by delays in developing several new management systems, such as RAMP. Chairmen Lungren and Bilirakis, Ranking Members Clarke and Richardson, and members of the Subcommittees, this completes my prepared statement. I would be happy to respond to any questions you or other members of the Subcommittees may have at this time.
For further information on this testimony, please contact me at (202) 512-2834 or by e-mail at goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Tammy Conquest, Assistant Director; Colin Fallon; Chelsa Gurkin; Alicia Loucks; Jackie Nowicki, Assistant Director; Justin Reed; and Susan Michal-Smith.

Budget Issues: Better Fee Design Would Improve Federal Protective Service's and Federal Agencies' Planning and Budgeting for Security. GAO-11-492. Washington, D.C.: May 20, 2011.

Homeland Security: Preliminary Observations on the Federal Protective Service's Workforce Analysis and Planning Efforts. GAO-10-802R. Washington, D.C.: June 14, 2010.

Homeland Security: Federal Protective Service's Use of Contract Guards Requires Reassessment and More Oversight. GAO-10-614T. Washington, D.C.: April 14, 2010.

Homeland Security: Federal Protective Service's Contract Guard Program Requires More Oversight and Reassessment of Use of Contract Guards. GAO-10-341. Washington, D.C.: April 13, 2010.

Homeland Security: Ongoing Challenges Impact the Federal Protective Service's Ability to Protect Federal Facilities. GAO-10-506T. Washington, D.C.: March 16, 2010.

Homeland Security: Greater Attention to Key Practices Would Improve the Federal Protective Service's Approach to Facility Protection. GAO-10-142. Washington, D.C.: October 23, 2009.

Homeland Security: Federal Protective Service Has Taken Some Initial Steps to Address Its Challenges, but Vulnerabilities Still Exist. GAO-09-1047T. Washington, D.C.: September 23, 2009.

Homeland Security: Federal Protective Service Should Improve Human Capital Planning and Better Communicate with Tenants. GAO-09-749. Washington, D.C.: July 30, 2009.
Homeland Security: Preliminary Results Show Federal Protective Service's Ability to Protect Federal Facilities Is Hampered By Weaknesses in Its Contract Security Guard Program. GAO-09-859T. Washington, D.C.: July 8, 2009.

Homeland Security: The Federal Protective Service Faces Several Challenges That Raise Concerns About Protection of Federal Facilities. GAO-08-897T. Washington, D.C.: June 19, 2008.

Homeland Security: The Federal Protective Service Faces Several Challenges That Raise Concerns About Protection of Federal Facilities. GAO-08-914T. Washington, D.C.: June 18, 2008.

Homeland Security: The Federal Protective Service Faces Several Challenges That Hamper Its Ability to Protect Federal Facilities. GAO-08-683. Washington, D.C.: June 11, 2008.

Homeland Security: Preliminary Observations on the Federal Protective Service's Efforts to Protect Federal Property. GAO-08-476T. Washington, D.C.: February 8, 2008.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As part of the Department of Homeland Security (DHS), the Federal Protective Service (FPS) is responsible for protecting federal employees and visitors in approximately 9,000 federal facilities owned or leased by the General Services Administration (GSA). FPS has a budget of approximately $1 billion and maintains approximately 1,200 full-time employees and about 13,000 contract security guards that help accomplish the agency's facility protection mission. This testimony is based on past reports and testimonies and discusses challenges FPS faces in carrying out its mission with regard to (1) risk management, (2) strategic human capital planning, (3) oversight of its contract guard program, and (4) ensuring that its fee-based funding structure is the appropriate mechanism for funding the agency. GAO also addresses the extent to which FPS has made progress in responding to these challenges. To perform this work, GAO used its key facility protection practices as criteria, visited FPS regions and selected GSA buildings, reviewed training and certification data for FPS's contract guards, and interviewed officials from DHS, GSA, guard contractors, and guards. FPS continues to face challenges in carrying out its mission. Specifically: (1) The absence of a risk management program hampers FPS's ability to protect federal facilities. For many years, GAO has advocated the importance of a risk management approach. GAO reported in August 2010 that FPS does not use a comprehensive risk management approach that links threats and vulnerabilities to resource requirements. Instead, FPS uses a facility-by-facility approach which assumes that facilities with the same security level have the same risk regardless of their location. 
Without a risk management approach that identifies threats and vulnerabilities and the resources required to achieve FPS's security goals, as GAO has recommended, there is limited assurance that programs will be prioritized and resources will be allocated to address existing and potential security threats in an efficient and effective manner. (2) FPS has not fully addressed several key human capital issues. FPS continues to operate without a strategic human capital plan to guide its current and future workforce planning efforts, as GAO recommended in 2009. Further, FPS is not able to determine what its optimal staffing levels should be because FPS headquarters does not collect data on its workforce's knowledge, skills, and abilities. FPS has yet to fully ensure that its recent move to an inspector-based workforce does not hinder its ability to protect federal facilities. (3) FPS faces longstanding challenges in managing its contract guard workforce. Weaknesses in FPS's contract guard program hamper its ability to protect federal facilities. GAO reported in 2009 and 2010 that FPS cannot ensure that its contract guards have required training and certifications. FPS is in the process of addressing GAO recommendations. For example, FPS revised its x-ray and magnetometer training for its inspectors and guards. (4) FPS has not reviewed its fee design or determined an appropriate funding mechanism. FPS increased its basic security fee four times in 6 years to try to cover costs, but has not reviewed its fees to develop an informed, deliberate design. FPS's current fee structure has consistently resulted in total collection amounts less than agency costs and continues to be a topic of congressional interest and inquiry. FPS has yet to evaluate whether its fee-based structure or an alternative funding mechanism is most appropriate for funding the agency, as GAO recommended in 2008 and 2011. FPS has made some progress in improving its ability to protect federal facilities. 
For example, in response to GAO recommendations, FPS is developing the Risk Assessment and Management Program (RAMP), which could enhance its ability to comprehensively assess risk at federal facilities and improve oversight of its contract guard program. DHS and FPS have initiatives in process to address 21 of the 28 recommendations GAO has made related to the challenges above, although none are yet fully implemented. According to FPS officials, this is in part because of changes in the agency's leadership, organization, funding, staffing levels, and delays in developing several new management systems, such as RAMP. DHS and FPS have generally concurred with GAO's past recommendations. DHS and FPS have initiatives in process, for example, to address risk management, strategic human capital planning, and oversight of its contract guard program.
Money laundering is the conversion of money gained from illegal activity, such as drug smuggling, into money that appears legitimate and whose source cannot be traced to the illegal activity. Law enforcement officials have estimated that between $100 billion and $300 billion in U.S. currency is laundered each year. BSA and its implementing regulations require financial institutions to maintain records and to file currency transaction reports with IRS for certain transactions exceeding $10,000. These reports create a "paper trail" of records that is useful in regulatory, tax, and criminal investigations, such as money laundering cases. In 1985, BSA regulations were amended to include certain casinos with gross annual gaming revenues (GAGR) over $1 million under the definition of a financial institution. Prior to BSA's application to casinos, money laundering could occur in casinos in a variety of ways without a mechanism in place to deter and detect it. For example, an individual could purchase gaming chips with large amounts of cash, do little or no gaming, and then redeem the chips for a casino check without any record of the transactions. Under BSA regulations, casinos are required to maintain records and file reports for currency transactions by, through, or to them that exceed $10,000. However, according to Treasury and IRS officials, there is no such requirement for transactions under $10,000. In congressional hearings, Treasury officials have recognized and testified that casinos are primarily cash-based businesses that perform many of the same services as banks for their customers, such as cashing checks and placing money on deposit, and these officials expressed concern about the potential use of casinos as an avenue for moving funds generated by illegal activity.
IRS’ Examination Division is responsible for monitoring and enforcing compliance with BSA reporting and recordkeeping requirements for all financial institutions under its jurisdiction, commonly referred to as "nonbank financial institutions." This monitoring includes conducting periodic compliance reviews at over 100,000 nonbank financial institutions, including casinos. Treasury's Office of Regulatory Policy and Enforcement, formerly the Office of Financial Enforcement, is responsible for promulgating and providing interpretive guidance on BSA regulations, reviewing violations found by IRS, and recommending assessment of civil penalties, if warranted, against noncomplying institutions. BSA provides the Secretary of the Treasury with authority to prescribe an appropriate exemption from its requirements. Treasury's exemption regulation allows an exemption for casinos in any state whose regulatory system substantially meets BSA's reporting and recordkeeping requirements. In 1985, Treasury granted such an exemption from certain BSA requirements to casinos in Nevada. The Memorandum of Agreement between Treasury and Nevada permitted the state to assume regulatory responsibility for currency transaction reporting by its casinos and required the state to enact certain laws and establish certain procedures to implement its regulatory system. As a result of the agreement, Nevada revised the Nevada Gaming Control Act and adopted Nevada Gaming Commission Regulation 6A (hereafter referred to as Regulation 6A), which contains the requirements for currency transaction reporting by Nevada casinos. The agreement also stipulated that, for the exemption to stay in effect, changes to such state regulations require Treasury's approval and, similarly, that changes in BSA or its regulations must be reflected in the state's regulations if required by Treasury.
IGRA was enacted to provide a statutory basis for the operation of gaming by Indian tribes, as well as a means for the regulation of such activity. IGRA classifies the different forms of Indian gaming—ranging from bingo to more common casino games such as roulette, craps, slot machines, and blackjack—into three classes. (App. I describes the three classes.) Generally, under IGRA, Indian tribes may establish Class III gaming, such as roulette, craps, slot machines, and blackjack, on Indian lands as long as the proposed gaming is not prohibited in the state. IGRA requires that tribes sign written agreements, or compacts, with the states if the proposed gaming meets the definition of Class III gaming operations (hereafter referred to as tribal casinos). The compacts describe the scope of Indian gaming permitted and define state and tribal authority related to gaming operations. Under IGRA, tribal casinos are subject to the currency reporting requirements of IRC section 6050I. The IRS Examination Division is responsible for ensuring that tribal casinos comply with these requirements. BSA casinos and tribal casinos are to file currency transaction reports with IRS’ Detroit Computing Center (DCC) for inclusion in a national database, the Currency and Banking Retrieval System (CBRS). Nevada casinos file currency transaction reports with the Nevada Gaming Control Board (NGCB), which subsequently sends the reports on to DCC. IRS and other law enforcement agencies are to use the BSA portion of the database for civil and criminal enforcement and tax purposes. Currency transaction reports filed by BSA casinos and Nevada casinos, as well as the reports filed by tribal casinos under section 6050I, are included in the database. Certain information from the reports filed by BSA casinos and Nevada casinos is accessible to all 50 states for law enforcement purposes and to all federal law enforcement agencies through the Financial Crimes Enforcement Network (FinCEN).
Transaction information filed by tribal casinos is generally not accessible to law enforcement because it is reported on forms that record income tax information and thus are currently subject to disclosure restrictions. Our initial objectives were to determine (1) the extent of legalized gaming in the United States, (2) the currency transaction reporting requirements for casinos, (3) the currency transaction reporting requirements for tribal casinos, and (4) the level of enforcement efforts to ensure that casinos are complying with currency transaction reporting requirements. Because changes in reporting requirements were being planned during the time of our review, we added an objective to provide information on the changes in federal regulations and legislation. To determine the extent of legalized gaming in the United States, we reviewed testimony, reports, and articles concerning the gaming industry, including its extent and growth. To determine the currency transaction reporting requirements for casinos, including tribal casinos, we reviewed BSA, the BSA implementing regulations under 31 C.F.R. part 103, IGRA, Nevada's Regulation 6A, and section 6050I of IRC. We also interviewed officials from NGCB and from IRS’ Criminal Investigation and Examination Divisions in Washington, D.C. To determine what efforts have been made to ensure that casinos are complying with currency transaction reporting requirements, we interviewed officials at Treasury's FinCEN and Office of Regulatory Policy and Enforcement and at IRS’ Criminal Investigation and Examination Divisions. In addition, we interviewed NGCB officials and IRS officials in Nevada, New Jersey, Louisiana, Mississippi, and Connecticut. We also interviewed casino officials in those states. We reviewed and analyzed IRS management reports and currency transaction reporting data from the CBRS at IRS’ DCC.
To determine recent changes in federal regulations and legislation, we reviewed the Money Laundering Suppression Act of 1994 and recent amendments to BSA regulations; in addition, we confirmed that Treasury and Nevada officials continue to discuss the differences between BSA and Nevada's regulations. To familiarize ourselves with how casinos comply with reporting requirements and how the requirements are enforced, we selected areas to visit with large concentrations of casinos. We selected Las Vegas, Nevada, and Atlantic City, New Jersey, and—for a variety of types of casinos—riverboat casinos in Louisiana and Mississippi, as well as a tribal casino. For the latter, we chose Foxwoods Resort Casino in Ledyard, Connecticut, the largest tribal casino in the country. In appendix II, we list all of the casinos that we visited for this review. As agreed with the Subcommittee, our focus was on casinos with GAGRs over $1 million. We did not verify the accuracy and completeness of the data we obtained from IRS. We did our work in Washington, D.C., and the locations visited between March 1994 and August 1995 in accordance with generally accepted government auditing standards. We obtained oral comments on a draft of this report from Treasury and IRS. Their comments are discussed in the agency comments section of this report. We received written comments from FinCEN. They are reproduced, along with our responses, in appendix VI. Casino gaming is expanding at a rapid pace, and new casinos continue to open across the country. Although Nevada and New Jersey casinos still generate the most revenue from casino gaming, riverboat casinos and tribal casinos have increased their share of total casino gross annual gaming revenue (GAGR). The expansion of casinos has also increased the amount of money changing hands, or wagered.
According to International Gaming and Wagering Business (various issues 1988 through 1995), wagering at all types of casinos totaled about $407 billion in 1994, up from about $117 billion in 1984. In constant dollars, this represents an increase of 152 percent over this period. As the amount of money wagered annually has increased, casinos may have become more vulnerable to individuals who attempt to launder their illegal profits in the fast-paced environment of casino gaming. Although 13 states and Puerto Rico permit games of chance, such as roulette, craps, slot machines, and blackjack, at nontribal casinos, Nevada and New Jersey generate the largest casino revenues. In 1994, Nevada and New Jersey reported combined casino GAGRs of about $10.2 billion; this represented approximately 56 percent of the total nationally reported casino GAGRs—$18.4 billion—for that year. Nevada has had legalized gaming since 1931 and, as of June 1994, had over 400 casinos, of which about 220 generated GAGRs of over $1 million each. Although casinos operate in other Nevada cities, including Reno, Lake Tahoe, and Laughlin, approximately 120 of these 220 casinos are located in Las Vegas. Reported GAGRs for all Nevada casinos (excluding tribal casinos) were approximately $6.8 billion in 1994. Appendix III indicates the prevalence of legalized gaming throughout the country and in Puerto Rico. Gaming has been legal in New Jersey since 1976. Twelve large casinos, the only casinos in New Jersey, operate along the boardwalk and in the marina area of Atlantic City. All 12 generated GAGRs in excess of $1 million; their total reported GAGRs for 1994 were about $3.4 billion. Appendix IV illustrates total GAGRs by gaming activities and for casinos in 1994. The growth of riverboat casino gaming has been dramatic. Prior to 1991, there were no riverboat casinos operating in the United States.
Since then, close to 60 riverboat casinos have opened, but several have relocated due to a high level of competition in some areas. Initially, riverboat casinos were located primarily along the Mississippi River in Iowa and Illinois, but they have also expanded to other locations, such as Tunica, Mississippi (near Memphis, Tennessee), and New Orleans. Figure 1 shows a riverboat casino. As of September 1994, 57 riverboats operated in five states: Illinois, Iowa, Mississippi, Missouri, and Louisiana. Indiana has passed legislation allowing riverboat casinos, but none were operating at the time of our review. Several other state legislatures have considered legislative initiatives to legalize riverboat gaming as a means of bringing new revenue into their states. Between 1992 and 1994, reported riverboat casino GAGRs increased from $0.4 billion to about $3.3 billion, thereby capturing about 18 percent of the total casino revenue. The growth of Indian gaming, which includes casino and bingo operations, has also been rapid. Ten years ago, Indian gaming was practically nonexistent. However, as of March 1995, we estimated that there were 237 Indian gaming operations, including 119 tribal casinos, in 29 states. Between 1992 and 1994, reported tribal casino GAGRs grew from about $1.2 billion to about $3.0 billion, thereby capturing about 16 percent of the total casino revenue. As figure 2 illustrates, Indian gaming operations, including tribal casinos, are currently located throughout the United States. Indian gaming may generate large amounts of revenue for some of the tribes that own these operations. For example, according to a report by the California attorney general’s office, in 1993 three tribal casinos near San Diego generated a total of over $200 million in revenues. Foxwoods Resort Casino in Ledyard, Connecticut—owned by the Mashantucket Pequot Tribe—reported revenue in excess of $40 million per month in 1994. 
Indian gaming operations may also generate additional income for the states in which they are located. For example, in 1994, the Pequot tribe paid the State of Connecticut about $136 million under a compact governing the operation of the casino in the state. Figure 3 shows the largest tribal casino in the United States. The amounts of money wagered in all forms of legalized gaming have increased substantially along with the expansion of legalized gaming. According to International Gaming and Wagering Business (various issues 1988 through 1995), between 1984 and 1994, the total annual amount wagered in all forms of legalized gaming jumped from approximately $147 billion to approximately $482 billion. In constant dollars, this represents an increase of 137 percent over this period of time. Casino gaming and Indian gaming operations together account for the largest amounts of money wagered in legalized gaming activities. About $368 billion, or 76 percent of the $482 billion wagered in 1994, was wagered in nontribal casinos; Indian gaming operations, including tribal casinos, accounted for about $41 billion, or 9 percent of the total. Figure 4 illustrates the total dollar amounts wagered, by gaming activity, in 1994. The shaded areas show casino activity. According to International Gaming and Wagering Business (various issues 1988 through 1995), wagering in nontribal casinos increased from about $117 billion in 1984 to about $368 billion in 1994. Indian gaming increased from virtually none to about $41 billion during the same period. Figure 5 illustrates the increase in the total dollar amounts wagered in casino gaming and Indian gaming between 1984 and 1994. According to IRS’ Criminal Investigation Division, casinos are particularly vulnerable to the initial stage of money laundering, called the “placement” stage, in which money from illegal activities is introduced into the financial system through banks or cash-intensive businesses. 
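The constant-dollar comparisons in this section (the 152 percent and 137 percent increases) can be checked with simple arithmetic. A minimal sketch of the conversion follows; the price-level factor of about 1.383 between 1984 and 1994 is inferred from the report's own figures and is not stated in the text:

```python
def real_increase_pct(nominal_start, nominal_end, price_factor):
    """Percentage increase after deflating the end-year figure to start-year dollars.

    price_factor is the assumed price-level ratio between the end and start
    years (roughly 1.383 for 1984-1994, inferred from the report's figures).
    """
    real_end = nominal_end / price_factor  # end-year amount in start-year dollars
    return (real_end / nominal_start - 1) * 100

# Casino wagering: about $117 billion (1984) to about $407 billion (1994)
print(round(real_increase_pct(117, 407, 1.383)))  # 152, matching the report
# All legalized gaming: about $147 billion (1984) to about $482 billion (1994)
print(round(real_increase_pct(147, 482, 1.383)))  # 137, matching the report
```

The nominal totals more than tripled over the decade; deflating the 1994 figures by the assumed price-level factor reproduces the constant-dollar growth rates cited in the text.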
Casinos are also vulnerable to money launderers because of the fast-paced nature of the games and because casinos can provide their customers with many financial services nearly identical to those generally provided by banks. Figure 6 illustrates the dollar amounts wagered in casinos in 1994. Currency transaction regulations and reporting requirements provide the primary deterrent to, and means of detection of, money laundering in casinos. However, not all casinos are subject to the same regulations and reporting requirements. Because the regulations and reporting requirements for tribal casinos and Nevada casinos differ from BSA requirements, information reported to IRS differs. These differences may cause problems for law enforcement officers looking for a consistent paper trail of records with which to trace all gaming activity of customers engaged in large cash transactions, as well as to help identify potential money laundering activities. Generally, BSA currency transaction reporting requirements have applied to all casinos with GAGRs over $1 million, except those in Nevada and tribal casinos. Nevada casinos operate under State Regulation 6A, and tribal casinos under IGRA have been subject to section 6050I of the Internal Revenue Code (IRC) for cash-intensive businesses. Table 1 provides a comparison of the three sets of requirements and the corresponding reports that must be filed with IRS. Information reported to IRS on the nature of the cash transaction and the identity of the customer varies according to the type of casino involved. For example, Nevada casinos are not required to report any information on customers who win over $10,000 if a casino employee verifies that the winnings are the result of gaming at the casino. On the other hand, under BSA regulations, casinos are required to report all cash transactions over $10,000, including gaming winnings. 
Tribal casinos currently are required to report only those cash transactions involving cash receipts by the casino exceeding $10,000. Table 2 summarizes certain reporting requirements under BSA, Nevada’s Regulation 6A, and IRC. Table 2 also includes certain cash transactions that are prohibited by Nevada’s Regulation 6A because they could facilitate money laundering. BSA reporting requirements apply to all currency transactions over $10,000 that take place in casinos, except those taking place in Nevada and tribal casinos. These requirements include reporting all cash coming into the casino, such as chip purchases and money placed on deposit for safekeeping, and all cash going out of the casino, such as chip redemptions and cash payouts for slot machine winnings. IRS officials in the New Orleans district told us that the BSA reporting and recordkeeping system is a deterrent to money laundering because concealment of transactions would require the involvement of more than one casino employee. Employees in different areas of the casino, including those in the cage areas and on the gaming floor, track customer gaming activity and maintain logs and records needed to prepare currency transaction reports. According to New Orleans IRS officials, the BSA system makes it more difficult for a customer to circumvent currency transaction reporting requirements without the cooperation of several casino employees. Treasury and IRS headquarters officials told us that BSA is also a deterrent because customers know that currency transactions will be reported to IRS. BSA reporting regulations require that certain casinos with GAGRs over $1 million report all currency transactions over $10,000 to IRS. BSA reporting regulations also require that multiple currency transactions be reported to IRS as a single transaction if the casino has knowledge that the transactions (1) were conducted by, or on behalf of, the same individual and (2) total over $10,000 in a gaming day. 
Such currency transactions are to be reported on Currency Transaction Report by Casinos (CTRC) Form 8362. CTRCs include specific information about the type of transaction as well as identifying information on individuals conducting the transactions, such as their Social Security numbers. (App. V includes an example of a CTRC.) In December 1994, certain changes to BSA reporting and recordkeeping requirements for casinos became effective. Among other things, the regulations now require that every casino subject to BSA establish a BSA compliance program that includes developing internal controls to ensure BSA compliance, conducting independent testing (auditing) for BSA compliance, training casino personnel in BSA compliance, designating personnel responsible for day-to-day compliance with BSA currency transaction reporting requirements, and using existing automated data processing systems to aid in ensuring compliance. Casinos must also obtain and verify additional identifying information about customers who wish to deposit funds, open an account, or establish a line of credit. This will provide casinos with information on regular customers and is in line with Treasury’s intention to require financial institutions to establish “know-your-customer” programs, which encourage casinos to become familiar with the practices of their regular customers and to report out-of-the-ordinary, or suspicious, transactions to IRS. It will also encourage casinos to take a more active role in ensuring their own compliance with BSA requirements. Nevada casinos are required to report cash coming into the casino and cash going out, except verified winnings, on a state Currency Transaction Report (CTR). Winnings are reported on Currency Transaction Incidence Reports (CTIR), which do not include customer identification for cash payouts greater than $10,000 on wagers or redemption of chips that exceed $10,000, if the chips are from verified winnings.
For both of these transactions, a casino employee must verify that customer winnings are the result of gaming at the casino. Casino officials believe this employee verification is important because CTIRs distinguish casino payouts in the form of winnings—a legitimate gaming activity—from all other currency transactions conducted in the casino that could be avenues for money laundering. Both CTRs and CTIRs from Nevada casinos are forwarded by the Nevada Gaming Control Board (NGCB) to IRS’ Detroit Computing Center (DCC). According to an official at DCC, information from Nevada’s CTRs is entered into the Currency and Banking Retrieval System (CBRS), but information from CTIRs is not included in the database; CTIRs are filed separately. IRS and Financial Crimes Enforcement Network (FinCEN) officials reported that the CTIR information is “useless” to IRS because the forms, which do not include customer names or any customer identification, provide an incomplete picture of a currency transaction. (App. V contains examples of Nevada’s CTR and CTIR forms.) Nevada regulations generally do not require reporting aggregation related to gaming in different areas of the casino. Instead, Nevada casinos are required to aggregate transactions that take place in the same gaming area of the casino—for example, multiple cash purchases at blackjack tables—but are not required to aggregate transactions occurring in different gaming areas of the casino—for example, chip purchases on blackjack and roulette tables by the same player. IRS and FinCEN officials believe that, to the extent the casino has systems in place with which to track a customer’s multiple transactions, or is otherwise aware of a customer’s currency activity, it should report transactions over $10,000. This would provide a complete record of all reportable gaming activity by casino patrons. 
Since 1993, Treasury officials have had ongoing discussions with Nevada casino officials and regulators about possible changes to Nevada’s Regulation 6A aimed at making it more closely parallel BSA recordkeeping and reporting requirements. Although Treasury officials have had a continuing dialogue with Nevada officials, no details were available to us as of September 1995. Nevada regulations prohibit certain cash transactions that may lend themselves to money laundering. BSA provisions have no such prohibitions. Specifically, Nevada prohibits casinos from exchanging cash for cash in an amount greater than $2,500; issuing a negotiable instrument, such as a casino check, in exchange for cash in an amount greater than $2,500; and effecting any transfer of funds, such as a wire transfer, in exchange for cash in an amount greater than $2,500. Consequently, Nevada regulations prohibit casino patrons from simply exchanging their cash for cash of a different (e.g., larger) denomination, or for another monetary instrument. For example, small denomination bills from illicit drug sales cannot be converted to large bills in transactions exceeding $2,500. Officials at the NGCB and casino officials we interviewed told us that they strongly believe that the prohibited transactions specifically prevent and act as a deterrent to money laundering, even though they have no evidence to measure the effectiveness of the prohibitions. According to testimony by an IRS official in 1993, money laundering has occurred in casinos in a variety of ways, including the exchanging of large amounts of cash for casino checks and small denomination bills for larger bills. These types of transactions involving amounts over $2,500 are prohibited in Nevada under Regulation 6A. IRS officials in the districts we visited had different opinions about prohibited transactions. 
IRS officials from the Criminal Investigation and Examination Divisions in the New Orleans District said that prohibiting certain transactions, as Nevada does, would be a deterrent to money launderers. The IRS gaming industry specialist in Nevada told us that prohibiting certain transactions, as Regulation 6A does, is a strong deterrent to money laundering. Further, an IRS oversight review of Nevada casinos by the Las Vegas District in February 1992 noted that prohibiting certain transactions is one of the strengths of the Nevada system. Conversely, officials from the IRS Criminal Investigation and Examination Divisions in Newark said that prohibiting certain transactions does not provide any information on customers attempting these transactions, nor does it provide a paper trail of records for law enforcement to follow. Further, they said that prohibiting certain transactions from occurring in BSA casinos would require undercover efforts on the part of IRS to ensure that casinos complied with the regulations. Under IGRA, tribal casinos have been subject to limited reporting requirements under section 6050I of IRC that apply only to cash receipts and include no recordkeeping requirements. Tribal casinos report such cash receipts over $10,000 on a Report of Cash Payments Over $10,000 Received in a Trade or Business, IRS Form 8300. In addition, IRC regulations for section 6050I provide that initial payments not exceeding $10,000 must be aggregated with subsequent payments made within 1 year of the initial payment until the aggregate amount exceeds $10,000. Form 8300 information is included in the CBRS database. However, because it contains income tax information, this form is generally unavailable to law enforcement agencies conducting money laundering or other criminal investigations. (App. V contains an example of IRS Form 8300.) 
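The section 6050I aggregation rule described above is essentially a running-total check over a one-year window anchored at the initial payment. The sketch below is illustrative only (the function name and sample dates are hypothetical, not from the report or from IRC regulations) and assumes, for simplicity, chronologically ordered payments from a single payer:

```python
from datetime import date, timedelta

THRESHOLD = 10_000            # dollars
WINDOW = timedelta(days=365)  # "within 1 year of the initial payment"

def form_8300_trigger(payments):
    """Given one payer's (date, amount) payments in chronological order,
    return the date on which aggregated payments within one year of the
    initial payment first exceed $10,000 (the point at which a Form 8300
    filing would be triggered), or None if the threshold is never crossed
    inside the window."""
    if not payments:
        return None
    initial_date = payments[0][0]
    total = 0
    for day, amount in payments:
        if day - initial_date > WINDOW:
            break  # beyond the aggregation window of the initial payment
        total += amount
        if total > THRESHOLD:
            return day
    return None

# Two $6,000 payments five months apart aggregate past $10,000 on the
# second payment's date.
print(form_8300_trigger([(date(1994, 1, 10), 6_000),
                         (date(1994, 6, 10), 6_000)]))  # 1994-06-10
```

The same two payments spaced more than a year apart would never aggregate past the threshold, which is the gap between 6050I's narrow receipts-only rule and the broader BSA gaming-day reporting discussed earlier.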
In comparison, BSA mandates comprehensive currency transaction reporting for all transactions over $10,000 and requires a detailed recordkeeping system. The Money Laundering Suppression Act of 1994 expanded the definition of a “financial institution” subject to BSA reporting requirements to include certain tribal casinos. More specifically, under section 409 of the act, entitled “Uniform Federal Regulation of Casinos,” the term “financial institution” was expanded to include both those casinos currently subject to BSA reporting requirements and Indian gaming operations, such as tribal casinos, with GAGRs over $1 million. IRS Examination Division officials told us that this change was meant to provide more consistent reporting by tribal casinos, as well as a more complete record of customer transactions. According to FinCEN officials, Treasury’s Office of Regulatory Policy and Enforcement is responsible for drafting, implementing, and providing interpretative guidance on BSA regulations. This involves publishing the regulations in the Federal Register and considering comments before the new regulations become effective. On August 3, 1995, Treasury published proposed amendments to BSA implementing regulations that would subject certain tribal casinos to BSA reporting and recordkeeping requirements. This change is intended, in part, to clarify the currency reporting obligations of tribal casinos and to bring certain tribal casinos under Treasury’s anti-money-laundering controls. Until these proposed amendments become effective, tribal casinos will remain subject to the more limited reporting requirements under section 6050I of IRC. The proposed regulation permits written comments on or before November 1, 1995, with the effective date being 90 days after publication of the final rule. IRS’ Examination Division is responsible for ensuring that casinos comply with BSA reporting and recordkeeping requirements. 
IRS is also responsible for ensuring that tribal casinos comply with the section 6050I reporting requirements. The NGCB Audit Division is responsible for ensuring that Nevada casinos comply with Regulation 6A. Regulatory efforts to determine compliance with currency transaction reporting requirements have varied for different types of casino. IRS has performed some compliance reviews at BSA casinos, as has NGCB at Nevada casinos. Some transaction reporting violations were found by both IRS and NGCB, and fines have been assessed at Atlantic City and Nevada casinos. IRS has also made efforts to inform and educate the management of newer casinos, particularly riverboat and tribal casinos, about transaction reporting requirements. However, IRS compliance reviews at riverboat casinos had only recently begun at the locations we visited, and consequently results were not available at the time of our review. Moreover, as of August 1995, IRS had not completed any compliance reviews of tribal casinos. Casino compliance reviews are complex. According to the IRS’ 1994 BSA Compliance Check Handbook, compliance reviews of BSA casinos consist of interviews with casino management and employees, reviews of the casino’s reporting and recordkeeping systems—which may be computerized—and analyses and matches of casino transaction records with casino filings in the CBRS database in Detroit. IRS’ Examination Division personnel who conduct compliance reviews require specialized training and knowledge of casino operations and recordkeeping systems. Due to the rapid growth of the casino industry, IRS has been training Examination Division personnel, including revenue agents, tax auditors and compliance officers, to perform casino compliance reviews at casinos subject to BSA requirements. IRS policy is to use computer auditing techniques whenever possible. 
IRS has also conducted several training seminars, including a seminar in November 1994 on conducting compliance reviews at riverboat casinos. In addition to casino compliance reviews, Examination personnel are responsible, as previously mentioned, for BSA compliance reviews of more than 100,000 nonbank financial institutions, as well as for both individual and business tax compliance audits. They are also responsible for section 6050I compliance reviews on all trades and businesses. The IRS Examination Division, like much of the federal government, is faced with declining resources. Over the past 6 years, Examination resources have declined from the 1989 level of 31,315, to 28,788 in 1995—a decrease of over 2,500 during that period. We recognize that, as resources decline, there are fewer and fewer Examination personnel to conduct IRS compliance reviews, including BSA casino reviews. IRS Examination Division officials told us that each of its current 63 districts has a coordinator responsible for (1) identifying nonbank financial institutions, (2) selecting/targeting for review institutions with a high potential for noncompliance, and (3) scheduling compliance reviews. According to the IRS BSA Compliance Check Handbook, when selecting institutions for a compliance review, the coordinator is to consider achieving a balanced coverage of the different types of nonbank financial institutions, including casinos. In addition, the focus should be on institutions with a high volume of cash transactions or with abnormal cash activity. The IRS Examination Division prepares a comprehensive currency and banking quarterly report that includes the total number (or inventory) of casinos subject to BSA requirements and the number of compliance reviews completed. In 1990, a Senate Appropriations Committee report required that IRS submit this information to the Committee so that it could track IRS compliance efforts at nonbank financial institutions, including casinos. 
In December 1991, IRS reported an inventory of 146 BSA casinos with GAGRs in excess of $1 million each. In December 1994, the number of BSA casinos reported had increased to 337. Meanwhile, between October 1991 and December 1994, IRS had completed 24 BSA compliance reviews at casinos. Between 1986 and 1990, IRS completed compliance reviews at 10 of the 12 Atlantic City casinos, identifying in the process numerous currency transaction reporting and recordkeeping violations. As a result, in 1993 Treasury assessed civil penalties of about $2.5 million against the 10 casinos. Among other violations, IRS found that every casino examined had failed to file some required reports on currency transactions and, in addition, had not expended sufficient resources and conducted enough training to comply fully with the BSA requirements. According to IRS, compliance reviews in Atlantic City were accomplished through interviews, on-site inspections of casino records, and computer matching of casino records with CTRCs filed at DCC. In addition, according to IRS, casino records that did not match were traced to original casino documents to determine whether transactions over $10,000 were reported to DCC and whether they were correctly reported. Officials from IRS’ Examination Division told us that the Newark District recently began compliance reviews at the two Atlantic City casinos that were not reviewed earlier. Newark District officials reported that they plan to follow a 3-year cycle for compliance reviews at the 12 Atlantic City casinos—that is, complete approximately 4 per year. IRS examiners had just begun to perform compliance reviews at riverboat casinos at the time of our review. At the time of our visit, six compliance reviews were in progress in Mississippi. In December 1994, the Jackson District reported that it planned to conduct compliance reviews at casinos in the order that the casinos opened. 
Casinos under review in December 1994 opened for business in 1992; the 1995 plan calls for review of those casinos opened in 1993. As of December 1994, there were 34 casinos operating in Mississippi. IRS officials in the New Orleans District said that they had not conducted any compliance reviews at Louisiana riverboat casinos. IRS was working with the Louisiana State Police to ensure that casino personnel were informed about BSA reporting requirements. Agents from the New Orleans District said that they had also performed some educational visits to ensure that casino personnel understood BSA reporting requirements. IRS Examination Division and Criminal Investigation Division officials in New Orleans stated that they work together to ensure that casinos comply with BSA requirements, and that any potential money laundering would be investigated. In addition to compliance efforts at the locations we visited—in Atlantic City, Louisiana, and Mississippi—IRS has also taken steps both to educate casino officials in other states about BSA reporting requirements and to ensure that the officials understand their responsibilities under BSA. Officials from IRS’ Money Laundering Team said that the IRS strategy for compliance for all nonbank financial institutions is “three Es”—educate, enhance, and enforce. IRS has also held training conferences for computer audit specialists, agents, examiners, and compliance officers to teach them the complexities of conducting compliance reviews at casinos. Further, IRS has detailed agents to work with state casino gaming commissions in Illinois, Indiana, and Missouri to assist in conducting casino background investigations and to help ensure that casinos are complying with BSA. As of March 1995, the IRS Examination Division had not completed any reviews of tribal casinos, although under IGRA they have been subject to the currency transaction reporting requirements of IRC section 6050I since 1988. 
Officials from the Money Laundering Team in the Examination Division said that, before undertaking reviews of tribal casinos, they believed it was necessary to establish procedures and appropriate protocol for conducting reviews on Indian lands. In January 1993, IRS announced a delay in planned reviews of tribal casinos. This temporary delay ended in January 1994. The IRS national office directive specified that the reason for the delay was that IRS “did not have a consistent and systematic compliance strategy” for conducting reviews on Indian lands. According to an IRS official, a strategy could not be developed until a resolution was reached concerning an “inconsistency” in the act. FinCEN noted that this situation creates a reporting ambiguity that may have confused some Indian gaming operators about their obligations to report such large currency transactions. While no clear strategy for conducting compliance reviews on Indian lands has been developed, on August 3, 1995, Treasury published a proposed regulation to bring certain tribal casinos under Treasury’s anti-money-laundering controls. According to FinCEN, this change is intended, in part, to clarify the currency reporting obligations of tribal casinos. Officials from IRS’ Money Laundering Team at the national office said they had been developing an Indian Assistance Handbook that should “help to create consistency in IRS district office procedures for conducting compliance reviews” and foster cooperative relationships with the tribes involved in gaming activities. According to the team manager, several agencies, including the Bureau of Indian Affairs and representatives from the National Indian Gaming Commission, had worked with IRS to develop the handbook. The handbook is to include protocol for contacting tribal officials, as well as clarify issues involving access to casino records for tax and compliance reviews. 
However, as of August 1995, the team manager did not know when the handbook would be published. The NGCB Audit Division conducts three different types of compliance reviews: interim audits, to be conducted annually, are to include the testing of all currency transaction reporting procedures and a limited document review; full audits, to be performed every 2-3 years, are to include extensive document review to test all procedures for compliance with reporting requirements; and covert checks, to be conducted periodically, are similar to undercover operations and are to be used to test casino compliance with Nevada’s currency transaction requirements. According to a 1992 IRS oversight review, one of the strengths of the Nevada system is that NGCB is to conduct either an interim audit or a full audit at all casinos under Regulation 6A at least once a year. The covert checks are conducted on an unscheduled basis without advance warning to the casinos. Typically, an NGCB agent enters a casino and attempts to test compliance with currency transaction reporting requirements, or tries to conduct a prohibited transaction. For example: an agent might try to purchase chips in an amount over $10,000 as a test to determine whether casino employees properly record the currency transaction on a CTR; or an agent might try to exchange $5,000 in cash for another $5,000 in cash from the casino, or $5,000 in cash for a casino check. (Both are prohibited transactions in amounts over $2,500.) As of November 1994, NGCB had found currency transaction reporting and recordkeeping violations at 24 casinos and had fined 22 casinos about $1.8 million. Legalized gaming is expanding rapidly across the United States. Casino gaming is among the fastest-growing forms of legalized gaming, and new casinos continue to open around the country. Two areas of notable growth are riverboat casino gaming and Indian gaming.
Along with this growth has come a large increase in the amount of cash wagered at all casinos, which totaled about $407 billion in 1994. With this much cash changing hands, casinos may be particularly vulnerable to money laundering in the form of money from illegal activities being placed into legal gaming transactions. Recent BSA regulations requiring casinos to establish anti-money-laundering compliance programs, together with the implementation of certain provisions in the Money Laundering Suppression Act of 1994 with respect to certain tribal casinos, should help to deter or detect potential money laundering. These actions, coupled with possible changes to Nevada’s regulations, should bring greater consistency and uniformity to transaction reporting for casinos and improve the information available to law enforcement. Measures that deter money laundering before it happens, such as Nevada’s prohibited transactions, may also help to combat money laundering in casinos. Although there are no data with which to measure effectiveness, Nevada gaming officials strongly believe in the preventive aspects of prohibiting transactions that may lend themselves to money laundering. The current federal strategy for deterring and detecting money laundering in casinos involves Treasury, which promulgates BSA reporting and recordkeeping regulations, and IRS, which performs compliance reviews. IRS has detailed agents to several state gaming commissions and taken steps to ensure that casinos are complying with BSA currency transaction reporting requirements. IRS’ Examination Division has done some monitoring of Nevada’s compliance program and has established a cycle for reviewing casino compliance in Atlantic City. Until recently, almost all of the casinos in the country were in these two locations. In addition, FinCEN is working to develop a partnership with the gaming industry, with the intention of encouraging casinos to know their customers and to identify suspicious transactions.
With its very limited resources, IRS’ Examination Division is responsible for compliance reviews at BSA casinos and over 100,000 other nonbank financial institutions, in addition to the massive job of ensuring compliance with our federal tax laws through the audit of individual and business tax returns. It seems likely, given the competing demands on resources, that IRS compliance review coverage for casinos will be limited. The new BSA regulations that require casinos to take a more active role in ensuring their own compliance with BSA, as well as other money laundering prevention strategies such as Nevada’s prohibited transactions, could be positive steps toward compliance given the limited IRS resources for compliance reviews. We recommend that the Secretary of the Treasury consider the costs and benefits of an amendment to BSA to allow for the prohibition, as Nevada does, of certain cash transactions in casinos that may lend themselves to money laundering. On September 22, 1995, we obtained oral comments separately from Treasury and IRS officials on a draft of this report. At Treasury, we met with FinCEN representatives, including the Associate Director of the Office of Regulatory Policy and Enforcement. We also met with IRS officials, including the National Director for Compliance Specialization. In addition, FinCEN sent us written comments, which are reproduced in appendix VI. Both FinCEN and IRS provided clarifications and technical corrections that we have incorporated where appropriate. In its written comments, FinCEN said that, in general, the report is an informative and accurate account of the growth of casino gaming in America and of the potential threat this expansion poses for increased money laundering. 
However, FinCEN disagreed for several reasons with our recommendation that Treasury consider the costs and benefits of an amendment to BSA to allow for the prohibition, as Nevada does, of certain cash transactions in casinos that may lend themselves to money laundering. FinCEN’s reasons for disagreeing included concern that our work had not demonstrated that the prohibition of certain cash transactions would in fact deter money laundering and that additional prohibitions would increase the reporting burden on casinos. Both FinCEN and IRS noted that, even if certain cash transactions were prohibited in all BSA casinos, patrons who wished to launder money at a casino could circumvent the prohibitions by finding other ways to launder money there. Our objectives for this review were to provide descriptive information on the extent of casino gaming, related reporting requirements, and enforcement efforts. As we did our work, we became aware of the issue of IRS’ declining resources versus the growth in casino gaming and the related potential for money laundering. While our work was not designed to develop specific solutions, we saw the need for Treasury to explore the feasibility of less resource intensive ways to deter money laundering. That concept embodies the intent of our recommendation. It seemed reasonable to us that due consideration should be given to trying to identify some less resource intensive options, including the possibility, by prohibiting certain transactions, of making the laundering process more difficult and enhancing casinos’ ability to self-regulate the issue, while simultaneously relieving some of the pressure on IRS resources. Given the federal downsizing environment, accompanied by the growth in casino gaming, we continue to believe that the identification of additional, less resource intensive ways to deter money laundering would be an appropriate step for Treasury. 
As agreed with the Subcommittee, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties and make copies available to others upon request. Appendix VII lists the major contributors to this report. If you need additional information on the contents of this report, please contact me on (202) 512-8787.

Class I gaming:
1. Social games played solely for prizes of minimal value.
2. Traditional forms of Indian gaming played in connection with tribal ceremonies or celebrations.

Class II gaming:
1. Bingo or lotto (regardless of whether electronic, computer, or other technological aids are used) played for prizes.
2. Pull-tabs, punch boards, tip jars, instant bingo (if played in same location as bingo), and other games similar to bingo.
3. Nonhouse-banking card games that state law authorizes or does not prohibit and that are played legally anywhere in the state.

Class III gaming:
1. All forms of gaming that are not Class I or II gaming and any house-banking games.
2. Card games such as baccarat, blackjack (21), Pai Gow, etc.
3. Casino games such as roulette, craps, keno, etc.
4. Slot machines and electronic or electro-mechanical facsimiles of any game of chance, such as video poker, video blackjack, etc.
5. Sports betting and pari-mutuel wagering, including horse racing, dog racing, Jai Alai, etc.
6. Lotteries.
[Table: legalized gaming by state, listing Alabama through Wyoming plus the District of Columbia and Puerto Rico; the table's data columns were not preserved. Indian gaming includes tribal casino and bingo operations (data as of March 1995). Other includes legal bookmaking (sports and horse) and charitable games.]

1994 gaming revenues, by segment (N = $39.9 billion):
Casino gaming: $15.4 billion (38.5%)
Lotteries: $14.1 billion (35.4%)
Pari-mutuels: $3.7 billion (9.1%)
Indian gaming: $3.4 billion (8.6%)
Other: $1.6 billion (3.9%)
Charitable bingo: $1.0 billion (2.6%)
Card room gaming: $0.7 billion (1.8%)
Other includes legal bookmaking and charitable games. Casino gaming includes riverboats.

1994 casino gaming revenues, by location or type (N = $18.3 billion):
Nevada: $6.8 billion (37.0%)
New Jersey: $3.4 billion (18.6%)
Riverboat: $3.3 billion (17.7%)
Tribal: $3.0 billion (16.2%)
Other: $1.9 billion (10.5%)
Tribal casinos do not include bingo operations.

The following are GAO’s comments on the Financial Crimes Enforcement Network’s letter dated October 24, 1995.

1. FinCEN stated that our recommendation is vague and ambiguous as to what specific transactions should be considered. Further, they said that the report does not provide any basis for determining that prohibiting certain transactions does in fact deter money laundering to any appreciable extent. Our descriptive work relating to the growth of casinos, currency transaction reporting requirements, and related enforcement efforts was not intended to delineate the costs and benefits of specific prohibitions against money laundering.
However, in analyzing our descriptive information, especially in relation to the level of enforcement of anti-money-laundering provisions of BSA and the likelihood that IRS enforcement resources will remain limited, the need for less resource intensive means to deter money laundering seemed evident. We believe that a study to identify means to deter money laundering in casinos, such as by prohibiting certain transactions, would be an appropriate step for Treasury to take as part of an effort to control money laundering while expending fewer federal resources. 2. Treasury notes that, in addition to the prohibition of certain cash transactions in Nevada that we cited, other states also have prohibited transactions, and that we do not cite which of these Treasury should evaluate. Our work did not include an enumeration of all transactions that are prohibited by all states, nor are we suggesting that Treasury undertake such an effort. However, if Treasury is aware of other prohibited transactions that seem likely to inhibit money laundering, they too should be considered. 3. FinCEN said that it is Treasury’s position that the most effective means of combating money laundering is to determine which cash transactions should be recorded or reported, and then to work with the industry to ensure that suspicious activity is detected and reported. We agree that these are very useful strategies. However, this should not rule out adding an additional weapon to the arsenal against money laundering. To the extent that transactions that lend themselves to money laundering could be prohibited, the regulation process could be even more effective. Given the limited examination resources of IRS, follow-up on all recorded transactions seems unlikely. Accordingly, we believe that an action that could augment current enforcement efforts is worthy of consideration. 4. 
FinCEN stated that Treasury also believes that, given the diversity of gaming operations among states, it should be up to each state to determine the appropriateness of prohibiting certain transactions. We did not suggest that states could not or should not have their own regulations or prohibitions. However, if Nevada or any other state has a regulation or prohibition that could potentially aid the wider fight against money laundering, we believe that it should be considered for wider application.

Geoffrey R. Hamilton, Senior Attorney

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO examined: (1) the extent of legalized gaming in the United States; (2) currency transaction reporting requirements for casinos; (3) whether the same transaction reporting requirements apply to tribal casinos; and (4) the Internal Revenue Service's (IRS) efforts to ensure that casinos are complying with currency transaction reporting requirements. GAO found that: (1) 48 states permit some form of legalized gaming, including riverboat casino gaming and Indian gaming; (2) the amount of cash wagered annually in casinos has grown from $117 billion in 1984 to $407 billion in 1994; (3) the Bank Secrecy Act (BSA) requires casinos to report currency transactions over $10,000, obtain additional identifying information about customers opening a line of credit, and develop BSA compliance programs that meet certain requirements; (4) although Nevada casinos report customers that purchase chips in cash amounts over $10,000, they do not report customer identification information on verified winnings over $10,000 or cash exchanges involving small denomination bills over $2,500; (5) tribal casinos are not subject to BSA, but they must report currency transactions in accordance with the Internal Revenue Code (IRC) provision regarding cash received in a trade or business; (6) IRS has made efforts to educate tribal casino officials on IRC reporting requirements to ensure that they are complying with federal regulations; (7) IRS needs to use its enforcement resources to complete compliance reviews of other nonbank financial institutions and to ensure that individuals and businesses are complying with tax laws; and (8) new BSA regulations will relieve some of the pressure on IRS by requiring casinos to take a more active role in ensuring their own compliance with BSA.
Plum Island is a federally owned 840-acre island off the northeastern tip of Long Island, New York. Scientists working at the facility are responsible for protecting U.S. livestock against foreign animal diseases that could be accidentally or deliberately introduced into the United States. Animal health officials define an exotic or foreign animal disease as an important transmissible livestock or poultry disease believed to be absent from the United States and its territories that has the potential to create a significant health or economic impact. Plum Island’s scientists identify the pathogens that cause foreign animal diseases and work to develop vaccines to protect U.S. livestock. The primary research and diagnostic focus at Plum Island is foreign or exotic diseases that could affect livestock, including cattle, swine, and sheep. In addition to FMD and classical swine fever, other types of livestock diseases that have been studied at Plum Island include African swine fever, rinderpest, and various pox viruses, such as sheep and goat pox. Appendix III provides more extensive information on animal diseases of concern mentioned in this report. Some of the pathogens maintained at Plum Island are highly contagious; therefore, research on these pathogens is conducted in a biocontainment area that has special safety features designed to contain the pathogens. If accidentally released, these pathogens could cause catastrophic economic losses in the agricultural sector. The biocontainment area includes 40 rooms for livestock and is the only place in the United States that is equipped to permit the study of certain contagious foreign animal diseases in large mammalian animals. USDA uses this biocontainment area for basic research, diagnostic work, and for clinical training of veterinarians in the recognition of foreign animal diseases. These veterinarians would serve as animal health first responders in the event of an emergency. 
The North American Foot-and-Mouth Disease Vaccine Bank is also located on Plum Island. USDA had owned and operated Plum Island for nearly 50 years when, in June 2003, the island and its assets and liabilities were transferred to DHS. Plum Island is now part of a broader joint strategy developed by DHS and USDA to protect against the intentional or accidental introduction of foreign animal diseases. Under the direction of DHS's Science and Technology Directorate (S&T), the strategy for protecting livestock also includes work at two of DHS's Centers of Excellence, known as the National Center for Food Protection and Defense and the National Center for Foreign Animal and Zoonotic Disease Defense, as well as other centers within the DHS homeland security biodefense complex. These include the National Biodefense Analysis and Countermeasures Center and the Lawrence Livermore National Laboratory. The strategy calls for building on the strengths of each agency's assets to develop comprehensive preparedness and response capabilities. (See fig. 1.) According to the strategy, DHS and USDA now work together to address national biodefense issues and carry out the mission of the Plum Island Animal Disease Center as follows: DHS is responsible for coordinating the overall national effort to enhance the protection of agriculture, which the President has defined as a critical infrastructure sector. At Plum Island, DHS's Science and Technology Directorate is working to advance the development of vaccines and disease prophylactics based on ARS's basic research. Also, DHS has established a bioforensics laboratory at Plum Island and is working to conduct forensic analysis of evidence from suspected biocrimes and terrorism involving a foreign animal disease attack.
USDA/ARS scientists at Plum Island are responsible for basic research on foreign livestock diseases and for early discovery of countermeasures, such as evaluating countermeasures for rapid induction of immunity in livestock. USDA/APHIS scientists are responsible for diagnosing livestock diseases. Also, APHIS conducts diagnostic training sessions several times a year to give veterinary health professionals the opportunity to study the clinical signs of animal diseases found in other countries, such as FMD. Currently, in addition to visiting scientists and fellows, there are approximately 70 federal research scientists, veterinarians, microbiologists, laboratory technicians, and support staff working at Plum Island. DHS's and USDA's combined annual operating funds at Plum Island, based on fiscal year 2005 allocations and other funds, total about $60 million—USDA's funding is about $8 million, and DHS's is about $51 million (see fig. 2). Prior to the transfer of Plum Island to DHS, ARS and APHIS shared responsibility for operating costs, although ARS had primary responsibility for the facility. According to agency officials, both agencies received appropriations to execute their research and diagnostic missions, out of which operations and maintenance costs had to be funded. Neither ARS nor APHIS received a specific appropriation for operations and maintenance activities. Now, DHS is responsible for operations and maintenance costs as well as programmatic costs that DHS incurs directly. ARS and APHIS continue to receive funding from USDA to support their own programmatic activities at the island. DHS's and USDA's efforts to coordinate research and diagnostic programs at Plum Island have been largely successful because of the agencies' early efforts to work together to bring structure to their interaction at the island. For example, the agencies developed a joint strategy that outlines how they will pursue their shared mission at Plum Island.
They also developed formal mechanisms for coordination, and they rely on frequent informal communication among scientists at Plum Island. The scientists also attribute effective coordination and resolution of transition difficulties to skilled management at Plum Island. Our review shows a largely positive experience thus far in the coordination of DHS and USDA activities at Plum Island. The success of the agencies resulted from their early efforts to work together to bring structure to their interactions at the island. The agencies developed a framework for coordination in several stages. First, in accordance with provisions of the Homeland Security Act of 2002, DHS, ARS, and APHIS worked together before the transfer to establish an interagency agreement. The purpose of the agreement is to establish written guidelines that identify each agency’s role and to coordinate immediate operations and maintenance needs, such as fiscal responsibilities and the use of shared equipment. Effective on the day of the transfer, this agreement remained in place while the agencies completed a more detailed strategic plan. Second, a working group, composed of DHS, ARS, and APHIS officials, as well as representatives from nongovernmental producer groups, convened about one month after the transfer to review the island’s mission and priorities and to develop a strategy for coordination. According to a USDA official, DHS recognized that, as a newly established agency, it needed to seek technical expertise through this interagency group. The group began by discussing foreign animal diseases from a broad perspective to inform the new DHS staff about key issues. Subsequent meetings became more focused as stakeholders evaluated the capabilities of the island and its programs, and identified shortfalls and a common priority for the agencies—FMD. The group finalized a joint strategy to address this priority in August 2004. 
The Joint DHS and USDA Strategy for Foreign Animal Disease Research and Diagnostic Programs (Joint Strategy) serves as the basis for the agencies to prioritize and coordinate work on Plum Island’s two critical functions—conducting research on foreign animal diseases and providing diagnostic services to identify such diseases. The Joint Strategy describes the role of each agency at Plum Island; identifies the agencies’ common goal to address the threat of foreign animal disease introduction; and outlines the activities that DHS, ARS, and APHIS are to perform to fulfill that goal. In particular, the Joint Strategy identifies gaps in the federal government’s effort to address foreign animal diseases and specifies how DHS programs will fill those gaps. For example, DHS will use its resources and expertise to support efficacy testing and advanced development—an identified gap—of improved vaccines for FMD that showed promising results in the early research stages—i.e., basic research—performed by ARS scientists. Under the terms of the Joint Strategy, ARS and DHS will conduct research to develop products, such as vaccines, antivirals, and diagnostic tools, that could be used by APHIS, sold on the market, or both. ARS will continue to focus on the early stages of the work and conduct basic research, which explores generally untested ideas. Examples of recent ARS basic research include obtaining new knowledge about diseases and their causative agents and studying the immune responses of livestock infected with FMD. DHS will augment the ARS work by performing targeted applied research, which is intended to lead to the practical use of the most promising basic research results. Among other things, DHS scientists will work with the results from ARS experiments toward developing those concepts into tangible products that will enhance the nation’s ability to respond to a bioterrorism attack. 
For example, ARS scientists could prove a vaccine concept in laboratory experiments, while DHS could conduct the efficacy testing of this vaccine, which would lead to securing licenses required for full-scale manufacture of a vaccine product. Finally, the Joint Strategy confirms the role of APHIS to conduct confirmatory diagnostic work, develop and validate diagnostic test methods, support the federal and state network of laboratories intended to quickly respond to disease outbreaks, and train veterinarians to recognize and diagnose foreign animal diseases. The Joint Strategy also identifies ways that DHS will augment the diagnostic role of APHIS. DHS will not initiate diagnostic services at the island, but will contribute to APHIS work by supporting validation and deployment of rapid diagnostic technologies and enhancing training capabilities. For example, DHS has modernized educational equipment used by APHIS to teach students and veterinarians about diagnosing foreign animal diseases. DHS has also established its bioforensic laboratory at Plum Island, and DHS scientists will use this laboratory to validate the forensic assays used for FMD. In addition to the Joint Strategy, the agencies established two other formal mechanisms to ensure that their respective missions are well integrated and to guide routine activities: a Board of Directors and an interagency working group known as the Senior Leadership Group. The agencies also rely on frequent informal communication among scientists and the leadership at Plum Island to further enhance coordination. Composed of top officials from DHS, ARS, and APHIS, the Board of Directors focuses on overall strategic issues and meets on a quarterly basis. The board includes the DHS Director of the Office of Research and Development, Science and Technology Directorate, and the administrators of both ARS and APHIS. 
The Director of Plum Island, a DHS employee, participates as the Executive Secretary, but is not a member of the board. The board maintains responsibility for coordination and oversight of all matters relating to the management, administration, research strategy, and operations at Plum Island. The board also ensures that the operation of the facility at Plum Island fulfills the agriculture security mission of the Science and Technology Directorate, ARS, and APHIS. On the other hand, the Senior Leadership Group provides local management and focuses on immediate on-site management decisions, such as scheduling use of limited laboratory space. The Plum Island-based leaders from each agency make up the Senior Leadership Group, and they meet on at least a monthly basis. The group’s responsibilities include (1) establishing operational procedures and practices and conducting strategic planning for future needs, (2) ensuring that individuals who use the facility adhere to its operational procedures and practices, (3) scheduling use of the facility and shared equipment, (4) establishing policies for workers to access the facility, (5) reviewing the compatibility of the work performed at the facility with the island’s mission and operations, (6) identifying and coordinating program management for joint projects, and (7) coordinating continuity of operations procedures. The staff we interviewed at Plum Island also said that frequent informal communication among scientists has contributed to effective coordination. According to the Director of Plum Island, scientists discuss their work with one another on an almost daily basis. One scientist noted that the informal dialogue creates a collaborative environment, thereby strengthening their work. 
The ease of informal communication appears to have resulted in part from existing relationships among the scientists in the three agencies—some of the scientists who now work for DHS at Plum Island previously worked for ARS and APHIS at the island. In addition, the lead scientists we spoke with attributed the effective integration of DHS at the facility in part to the skilled leadership of the Plum Island Director. For example, several scientists believe that the leader's successful efforts in facilitating open communication among staff have fostered a collaborative environment. Moreover, several noted that the leaders currently based on the island value the comments and ideas expressed by the scientists. One lead scientist concluded that the Director's ability to establish positive relationships with staff has brought greater focus to the research and diagnostic programs. USDA officials also noted that the leadership of the Director and the entire Senior Leadership Group, working as a team, have contributed to effective cooperation at Plum Island. Finally, while there is now good coordination among the agencies at Plum Island, scientists acknowledged that they experienced some administrative difficulties during the transition period. The scientists we spoke with generally viewed challenges such as these as inevitable given the complexity of transferring responsibility for operations to a new agency and incorporating new programs in the existing facility. For example, one scientist said that the lack of procurement officers initially posed a burden to scientists. He had to perform the duties of a procurement officer—searching for the products, obtaining cost estimates, and completing extensive paperwork—when he needed new supplies and equipment. As a result, this scientist had to forgo some of his limited time in the laboratory and delay his research while he learned how to process procurement orders.
This scientist noted, however, that he expected this to be a temporary problem because the agency has since hired administrative staff. DHS officials noted that two procurement officers currently are working at Plum Island, which should alleviate this type of problem in the future. Program budget changes that occurred soon after the transfer—resulting in part from implementation of the Homeland Security Act of 2002—modified overall priorities and the scope of USDA's work at Plum Island. Traditionally one of the high priorities at Plum Island, FMD has emerged as the facility's top research priority. According to ARS officials, the agency slowed or terminated other research activities in response to the budget reductions that occurred soon after the transfer of the facility to DHS. Many of the experts we spoke with raised concerns about focusing Plum Island's research resources on one disease. They also noted that some of the aspects of the research being conducted at the island could be performed elsewhere. With regard to the diagnostic component of Plum Island, APHIS's priorities have not changed, but APHIS officials told us that budget changes at the time of the transfer curtailed the planned expansion of diagnostic services. DHS is now responsible for all of the costs associated with operating and maintaining Plum Island. In addition, DHS continues to implement major infrastructure improvements and is developing its applied research science and agricultural forensics program. After the transfer, ARS designated FMD—traditionally one of the high-priority diseases at Plum Island—as its top research priority because it poses the greatest threat to the agriculture economy. Also, ARS responded to budget reductions by slowing research on other high-priority diseases, such as classical swine fever, and by terminating research on other diseases, including African swine fever.
According to ARS officials, the agency determined the current research priorities—FMD and, to a lesser extent, classical swine fever—using its research plan, which was developed under the agency’s formal planning process, known as the National Program review. In addition to the priorities established by the National Program review, an ARS official told us that the agency also considered other assessments, including those of the White House Office of Science and Technology Policy Blue Ribbon Panel on the Threat of Biological Terrorism Directed Against Livestock. These assessments consistently ranked African swine fever as a lower threat to the United States than FMD and classical swine fever, and ranked FMD as the top threat to the agriculture economy from a deliberate introduction because of its virulence, infectivity, and availability. African swine fever has been perceived as a less imminent threat to the United States because, according to USDA, outbreaks require a vector, such as a tick, to spread the disease. As a result of these assessments, as well as a budget reduction soon after the transfer, ARS officials told us that the agency had to slow the pace of some research projects and terminate others. Specifically, ARS terminated the African swine fever research program, which included genomic sequencing of large DNA viruses, and slowed the pace of work on classical swine fever. While these officials acknowledged the need to make FMD a research priority at Plum Island, they raised concerns about the effect of budget reductions on other diseases of concern. For example, research on classical swine fever, which included development of a marker vaccine, is proceeding at a slower pace than it did before the budget reductions. An ARS official estimated that the reduced funds for classical swine fever research will extend the project timeline about 5 to 10 years. 
Such delays postpone the development of products that would improve the nation's ability to respond to and manage an outbreak of disease. Since ARS is no longer responsible for operations and maintenance costs at Plum Island, funds to meet these expenses were transferred to DHS in fiscal year 2003. However, ARS's programmatic funds for research conducted at Plum Island were also reduced. ARS budget data show that the agency's programmatic funds decreased by 45 percent between fiscal years 2003 and 2004. These changes are the result of OMB's actions to create the first DHS budget for Plum Island in fiscal year 2004. According to an OMB budget examiner, all of the funding for facility operations was transferred to DHS. OMB also divided Plum Island program funds equally between DHS and USDA in fiscal year 2004. ARS negotiated agreements with other government agencies (including DHS) and a nongovernmental entity under which ARS was reimbursed to carry out mutually beneficial research. The amount of these reimbursements equaled about 80 percent of the reduction in the ARS program budget in 2003 after the transfer. For example, in fiscal years 2004 and 2005, ARS received reimbursements from DHS for research ARS performed in support of DHS's mission. Reimbursements from these agreements, which an ARS official told us are not guaranteed to continue in fiscal year 2006 or beyond, decreased from fiscal year 2004 through 2005. One ARS management analyst noted that the agency cannot factor these reimbursements into program planning because of their inherent uncertainty—such agreements are negotiated as reimbursements on a case-by-case basis after the agency has completed the work.
DHS officials stated that it may appear that ARS’s research budget was reduced posttransfer more than it actually was because it is not clear from ARS’s fiscal year 2002 and 2003 budgets how much of those budgets included indirect costs (i.e., research overhead costs) and operations and maintenance costs. ARS’s budget data for fiscal years 2002 and 2003, however, do not distinguish between indirect costs and operations and maintenance costs. According to an ARS official, DHS now pays for some of the indirect research costs at Plum Island, and the agencies continue to negotiate how to share indirect support costs on a case-by-case basis. Table 1 summarizes the net effect of the budget reductions and subsequent funding on ARS’s research resources, exclusive of building and facility funds, at Plum Island for fiscal years 2002 through 2005. Finally, a senior ARS official expressed concern that because of current funding constraints, research at Plum Island does not address other emerging livestock diseases. This official stated that researching other diseases would mitigate some of the uncertainty and better prepare animal health responders, such as veterinarians, to respond to the unknown. In particular, this official emphasized the importance of developing expertise in other foreign animal diseases. Nationally recognized animal disease experts we interviewed agreed that FMD constitutes the greatest threat to American livestock, and, as such, warrants increased attention. Therefore, most of the experts agreed that it is prudent to marshal resources to study FMD at Plum Island. Most of the experts also found it reasonable to terminate research on diseases of lesser importance to the U.S. economy, such as African swine fever. However, all of the experts questioned the wisdom of focusing limited resources almost exclusively on a single disease. 
Several experts also expressed concern that the focus on a single disease will constrain the development of expertise in other critical diseases, exacerbating the current shortage of talent in this area. For example, one expert told us that there is a shortage of people with an interest in developing expertise in high-priority foreign animal diseases. In fact, nearly all of the experts we interviewed believed that the current work at Plum Island does not adequately address the potential threats posed by deliberate and accidental introductions of foreign animal diseases other than FMD. Specifically, all but one of the experts we consulted said that focusing research on a single disease makes livestock more vulnerable to the diseases that are not being studied to the same extent, or in some cases, at all, such as Nipah virus. Many of these experts emphasized that because it is difficult to predict foreign animal disease outbreaks, it is important to maintain ongoing research on a range of diseases to be better prepared. As a related example, one scientist pointed out that because little was known about West Nile virus, officials were unprepared when the first outbreak occurred in the United States in 1999. West Nile is a disease that can be fatal to humans, horses, and birds. The first case of West Nile virus in the United States was detected in New York, and the disease spread to an additional 48 states by 2003. An ARS official acknowledged the limitations of focusing research on a single disease and commented that ARS would like to do more research on emerging diseases to be better prepared for the unknown. DHS and ARS officials caution that resource and facility constraints would make it difficult to expand the current research portfolio at Plum Island. Also, such a portfolio would require significantly more stringent biosecurity than is currently in place at the island if research were performed on diseases that could affect both animals and humans. 
Some diseases of concern that are not currently being studied at Plum Island include Nipah virus and Rift Valley fever. Members of a blue-ribbon threat assessment panel pinpoint these diseases, which affect both humans and livestock, as warranting greater attention because an outbreak could result in economic disruption or interfere with trade. Some of the experts we interviewed also said that Rift Valley fever research is needed. Research conducted outside of Plum Island on Nipah virus and Rift Valley fever is very limited. At the DHS-funded Center of Excellence at Texas A&M University there are plans to develop a vaccine for Rift Valley fever, but there is limited laboratory space to conduct this type of work on large animals and, therefore, researchers at the center cannot test the vaccine on large animals. The Texas A&M Center of Excellence anticipates that it will rely on institutions overseas, such as the Onderstepoort laboratories in South Africa, to conduct such tests. DHS and USDA officials told us that in order to study Rift Valley fever on large animals at Plum Island, individuals involved with the research would require a vaccination. Alternatively, Plum Island would need to enhance its biosafety procedures to comply with the stricter biosafety level 4 standards. A DHS official noted that at the time of the transfer of Plum Island, the Homeland Security Secretary pledged to the nearby communities that DHS would not seek a more stringent biosafety designation for the facility. Other experts commented on other factors that limit research on foreign animal diseases. For example, one expert commented that while Plum Island plays a critical role in the national effort to address foreign animal diseases, researchers at this facility cannot study every foreign animal disease of concern, especially given the resource constraints and that the staff do not have expertise in other diseases, such as vector-borne diseases. 
This expert believes that collaborations between Plum Island and other research institutions would benefit the United States by enhancing the nation’s knowledge in areas that researchers would otherwise not be able to address at Plum Island. Several experts suggested that DHS and USDA might use the Plum Island facility more effectively by limiting its research agenda to live infectious agents that can be studied only there and allowing other institutions to perform the work that does not require the stringent safety features of Plum Island. For example, researchers in other institutions could develop vaccines without using a live form of infectious agents or model disease outbreaks. One expert told us that researchers could answer questions through modeling and risk assessment that would be based on the data generated from tests using animals at Plum Island. Another way to maximize space resources at Plum Island may be to shift work on domestic animal diseases off the island. An expert we consulted said that doing this work at Plum Island decreases the island’s already limited resources available to study foreign animal diseases. For example, this expert regards vesicular stomatitis—a disease often mistaken for FMD—as inappropriate for Plum Island because it is a domestic disease and is not highly contagious. Other experts highlighted the value of studying this disease—in part to provide researchers or responders with experience in distinguishing this domestic disease from FMD—but some noted that it might be more appropriate to study it in other laboratories in the mainland United States. USDA commented that it is necessary for the agency to conduct its research on vesicular stomatitis at Plum Island because scientists are working with samples that may be contaminated with FMD. 
In addition, USDA commented that another benefit from maintaining research on vesicular stomatitis at Plum Island is that such work enables the agency to retain staff trained to work with diseases that affect humans and animals. DHS officials stated that, in their opinion, this type of work constitutes a minimal percentage of Plum Island's workload; a senior ARS official concurred and estimated that this work accounts for roughly 5 percent of the ARS research funds at Plum Island. According to DHS, the agency is exploring opportunities to involve other research institutions. For example, the DHS officials noted that Plum Island officials have recently begun to assess which parts of the combined research tasks could feasibly be conducted off the island at other research facilities. A DHS official told us that the agency has tapped Lawrence Livermore National Laboratory to coordinate closely with Plum Island researchers and develop diagnostic and detection tools for FMD, and demonstrate the performance of such tools in the field. Also, a researcher at the DHS Center of Excellence at Texas A&M stated that the center is investigating genetic methods for preventing FMD, deferring portions of the research requiring use of the live virus to Plum Island; there, a smaller team can handle the virus in a laboratory setting that meets the stringent safety standards. Finally, USDA commented that ARS has established collaborative relationships with eight universities and two other institutions to accomplish its research mission. According to APHIS officials, before the transfer of Plum Island to DHS, they expected to receive a $2.3 million increase in funding, which Congress had approved in February 2003 as part of the agency's appropriations. APHIS was expecting this increased funding to meet rising demand for diagnostic services.
Specifically, the 2001 FMD outbreak in the United Kingdom and the emphasis on bioterrorism prompted a shift from passive foreign animal disease surveillance to a more active approach. These events underscored the need for additional staff. In addition, APHIS had assumed responsibility for establishing the validity of rapid diagnostic tools to be used by scientists in a national network of state veterinary laboratories. However, APHIS officials told us that, as a result of the transfer, the expected $2.3 million increase was not fully realized. According to budget documents, APHIS had expected to allocate a total of $4.3 million in fiscal year 2004 to diagnostic work at Plum Island, which included the $2.3 million. Instead, half of this amount—$2.1 million—was allocated to the DHS budget for Plum Island that year. OMB decided to use the APHIS fiscal year 2003 budget allocation—which included the $2.3 million—as a base to determine how much money APHIS and DHS should receive in fiscal year 2004. Additionally, OMB transferred a portion of APHIS’s fiscal year 2003 programmatic funds (about $332,000) to cover DHS’s new responsibility for operations and maintenance at Plum Island. This change in fiscal year 2003 funding for APHIS occurred because the Homeland Security Act authorized the President to establish initial funding for DHS by transferring funds from other agencies. Although APHIS officials understood that APHIS’s budget for Plum Island would decrease when operations and maintenance funds were allocated to DHS, they did not expect this further reduction in programmatic funds. APHIS officials noted that although they remain committed to the same diagnostic priorities at Plum Island, the transfer to DHS has strained their diagnostic capabilities there. 
They said their plans to hire more scientists and train more veterinarians to recognize foreign animal diseases were seriously curtailed because they did not receive the anticipated increase. The officials told us that anticipated enhancements to the diagnostic tools at Plum Island would have facilitated a faster response to an outbreak. In fact, an APHIS official told us that, at current funding levels, APHIS staff are able to focus only on validating tests for the highest-priority diseases, such as FMD, and that APHIS lacks the staff and resources to develop tests for other high-priority diseases, such as Rift Valley fever and other emerging diseases. APHIS officials concluded that Plum Island, which is the only place in the United States where hands-on training on high-priority foreign animal diseases affecting livestock can be provided, lacks the capacity to accommodate the increased demand for such training. DHS officials noted that, since assuming responsibility for Plum Island, the agency has funded a pilot program to provide distance learning via audiovisual equipment. While the distance training does not provide students with the desirable hands-on experience of observing and diagnosing foreign animal diseases, DHS stated that this tool has augmented the capability of the Foreign Animal Disease Diagnostician Course by providing instruction to practitioners in locations beyond Plum Island. Though APHIS funding was reduced after the transfer, DHS has reimbursed APHIS to perform diagnostic work at Plum Island in fiscal years 2004 and 2005. For example, in fiscal year 2004, DHS and APHIS negotiated an Economy Act agreement that enabled APHIS to retain eight new scientists—a key step in carrying out its planned expansion of diagnostic services. This agreement covered salary and benefits for eight new APHIS employees rather than ongoing APHIS program costs at Plum Island. 
The sum of the 2004 DHS reimbursement and the 2004 allocation to the APHIS laboratory at Plum Island is roughly equivalent to the APHIS program budget in the fiscal year before the transfer. However, APHIS officials do not view these reimbursements—referred to as Economy Act agreements—as an appropriate way to fund the agency’s diagnostic work. These officials said that the purpose of the agreements was “to avoid duplicating functions” performed by the agencies at Plum Island, such as caring for the animals, and noted that they do not expect to negotiate additional agreements directly related to the planned expansion. Because the reimbursements obtained through Economy Act agreements have decreased in 2005 and recent congressional appropriations have not been sufficient to support the additional eight scientists, APHIS officials expressed concern about the agency’s ability to retain these scientists. DHS officials concurred with APHIS’s view that Economy Act agreements are not an appropriate way to fund the agency’s diagnostic work at Plum Island. Table 2 summarizes the net effect of the budget reductions and subsequent funding received through interagency agreements on APHIS’s overall resources at Plum Island for fiscal years 2002 through 2005. As discussed elsewhere in this report, DHS has assumed responsibility for operations and maintenance at Plum Island and has developed its own applied research program. As part of the 2003 transfer authorized by the President, DHS received approximately $33 million for building and facility funds from ARS and APHIS. In addition to the routine operations and maintenance needs at the facility, the DHS budget at Plum Island includes funds that allow the agency to conduct major infrastructure improvements at the facility. External assessments of the Plum Island facility as well as the agency’s own evaluation revealed safety and security issues that the agency needed to resolve. 
DHS’s budget included $5.9 million in fiscal year 2004 and $12.9 million in fiscal year 2005 to conduct these improvements at the facility, such as the installation of closed-circuit television surveillance to control and monitor access to the containment area in the laboratory. DHS officials told us that the security and safety upgrades at Plum Island have increased the funding needs to operate the facility. The programmatic funds for DHS—which support the agency’s applied research science and agricultural forensics work—accounted for $8.3 million of the $51 million total allocated to the agency for Plum Island in fiscal year 2005. As of August 2005, DHS’s applied research science team—which focuses primarily on developing vaccines for FMD—included seven scientists and support staff. DHS has also used its programmatic funds to establish a bioforensics laboratory at Plum Island, which will, according to the agency, validate forensic assays for FMD as well as classical swine fever. DHS and USDA officials will continue to pursue their current agreed-upon joint activities, which focus on FMD, and they are assessing longer-term objectives for future joint work at Plum Island or elsewhere. Agency officials did not consider it prudent to speculate on long-term objectives of joint work, in part, because DHS plans to replace the existing Plum Island facility, and aspects of the new facility have not yet been determined. DHS and USDA have established FMD as the immediate top priority for Plum Island, but they have not yet identified which diseases, if any, they will address together after FMD. In fact, the Joint Strategy provides a blueprint for coordinating efforts to address FMD but does not currently address work on other diseases. 
DHS officials told us that the agency remains committed to studying the highest-priority livestock diseases at Plum Island and will decide which diseases to study based on a scientific assessment of the highest threats. DHS and USDA officials confirmed that if they decide to conduct joint activities on other diseases, they will rely on the Joint Strategy and the mechanisms they established to implement this strategy— such as the Board of Directors—to coordinate the effort. DHS officials emphasized that the dynamic nature of threat assessments makes it difficult to firmly commit to long-term priorities because information and research needs may change frequently depending on the nature of the threat. In terms of USDA research priorities, ARS will establish its research objectives for the next 5 years at the 2005 National Program review and assessment. An ARS official told us that in the near term, the agency would like to conduct more work on classical swine fever, though not at the expense of FMD research. This official noted that no decisions have been made as to whether DHS will coordinate with ARS to address classical swine fever, and that the work on this disease has not yet advanced to a stage that would involve DHS and its applied research capabilities. Several of the experts we interviewed agreed that, currently, the prioritization of foreign animal disease threats produces the same ranking of diseases whether the threat is based on an accidental or a deliberate introduction; therefore, the experts stated that the current focus on FMD addresses the disease posing the greatest threat through both accidental and intentional introduction. However, the rise of new threats may disrupt the alignment of the agencies’ priorities and, in turn, affect the possibility of joint activities. 
For example, one top ARS official told us that the agencies’ research and diagnostic priorities at Plum Island may not continue to be so closely aligned in the future because, in his view, the agencies have different missions. DHS officials noted that the agencies’ missions are, in fact, closely aligned because DHS is also responsible for protecting against the accidental introduction of foreign diseases. They also noted that the agency’s ranking of diseases would follow a formal risk analysis to prioritize foreign animal diseases based on threat. Based on our analysis of documents such as the Joint Strategy for Plum Island, we believe that DHS’s mission to protect agriculture is more oriented toward intentional attacks on agriculture, and, therefore, we expect the agency will continue to focus more on diseases that could be introduced deliberately than on diseases that could accidentally break out in the United States. Furthermore, officials told us it is premature to firmly commit to long-term objectives of joint work at Plum Island, in part, because DHS has plans to replace the existing facility with a new, modernized facility. Recognizing the shortcomings of the laboratory facilities at Plum Island—insufficient space and outdated infrastructure—a senior DHS official told us the agency will construct this facility, pending congressional approval, to expand its capabilities to defend the nation’s agricultural infrastructure against terrorist attacks. DHS officials told us, however, that they have not yet determined the scope of the work to be performed at this new facility, or the facility’s size or location—whether Plum Island or elsewhere—and do not know the extent to which the new facility will carry out the current mission of Plum Island. 
For example, DHS officials told us the agency has not determined whether the new facility will address such research gaps as the lack of an approved laboratory to study highly contagious viruses like Nipah virus, which require higher biosecurity standards than those in place at Plum Island. Some DHS and USDA officials speculated that the existing ARS and APHIS programs at Plum Island would move with the DHS applied research program to the new facility, but regardless of the facility’s location, the agencies are considering their options. DHS has convened a scientific working group, including representatives from DHS, ARS, APHIS, and the Department of Health and Human Services, to discuss the options for a new facility. DHS estimates that, pending congressional approval, the new facility will become fully operational by 2012. Although quite successful in terms of interagency cooperation, the transfer of Plum Island from USDA to DHS highlights the challenges that the agencies face in meeting diagnostic and research needs with available resources. The limits on funding and on the availability of laboratory space at Plum Island underscore the importance of leveraging available resources and expertise elsewhere in the country. While Plum Island is the only facility in the United States where scientists are currently authorized to study diseases using certain highly contagious pathogens in large animals, other important work related to these diseases could be conducted in other institutions. As DHS evaluates the size and capabilities of the new foreign animal disease facility that the agency estimates will be completed by 2012, it will be important to explore the cost-effectiveness of shifting some current work, such as research that does not involve the use of live agents, to other laboratories and reserving the limited laboratory space at Plum Island for work that can be performed only in that facility. 
To make more effective use of Plum Island’s limited laboratory space in the short term, we recommend that DHS’s Science and Technology Directorate, in consultation with USDA’s Agricultural Research Service and the Animal and Plant Health Inspection Service, pursue opportunities to shift work that does not require the unique features of Plum Island to other institutions and research centers. We provided a draft of this report to DHS and USDA for their review and comment. DHS generally concurred with the report and said that it accurately reflects the current relationships and coordination between DHS and USDA at Plum Island. DHS also agreed with the recommendation and said the agencies have already addressed the issue. For example, DHS commented that the agency’s assessment—currently under way—of laboratory and animal room requirements at Plum Island includes addressing the agencies’ options for shifting work to institutions off of the island. While we view the steps DHS has taken toward implementing the recommendation as positive, the agency has not completed these tasks. We believe that DHS needs to consult with USDA and conduct more work to demonstrate consideration of opportunities to shift work elsewhere. DHS also provided technical comments, which we incorporated, as appropriate. DHS’s written comments and our detailed response appear in appendix IV. USDA generally agreed with the recommendation and found the report to be factual and generally positive in recognizing the coordination of activities between DHS and USDA. USDA commented that it would continue to evaluate the working relationship with DHS. USDA also provided some clarifying points. For example, USDA noted that while ARS had to reduce efforts on classical swine fever because of budget reductions, it has made significant advances toward the development of a marker vaccine for classical swine fever. 
USDA also elaborated on our discussion of vesicular stomatitis virus research, and clarified the benefits of conducting such work at Plum Island. Finally, USDA stated that while the recommendation is sound and supported by the agency, the recommendation could be misleading because little of the work can be performed elsewhere and it would be difficult to transfer such work. We have incorporated the clarifications, as appropriate. We also note that although work done at Plum Island that does not require containment may not be easy to remove or relocate, shifting such work elsewhere is an important step toward using the facility’s limited resources effectively and remaining prepared to respond to outbreaks of various foreign animal diseases. USDA also provided technical comments, which we incorporated, as appropriate. USDA’s written comments and our detailed response appear in appendix V. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Homeland Security and Agriculture, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or robinsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. To determine how the Department of Homeland Security (DHS) and the U.S. Department of Agriculture (USDA) coordinate research and diagnostic activities at Plum Island, we analyzed DHS and USDA joint strategy documents, including an interagency agreement between DHS and USDA for Plum Island, the Joint DHS and USDA Strategy for Foreign Animal Disease Research and Diagnostic Programs, and the Plum Island Animal Disease Center Charter. 
In addition, we reviewed Homeland Security Presidential Directives 9 and 10 to understand the roles for DHS and USDA in addressing the threat of agricultural terrorism. We interviewed officials at various levels from each agency, including senior leadership officials based in Washington, D.C., the facility’s on-site leadership, and, during a visit to Plum Island, all of the lead scientists. We also interviewed former USDA scientists who have left Plum Island since its transfer to DHS on June 1, 2003. To determine what changes, if any, have taken place regarding research and diagnostic priorities at Plum Island since the facility was transferred to DHS, and the reasons for and implications of such changes, we interviewed the current and two former Plum Island directors, spoke with current and former Plum Island scientists, and discussed research and diagnostic priorities with senior officials in the DHS Science and Technology Directorate and USDA’s Agricultural Research Service (ARS) and Animal and Plant Health Inspection Service (APHIS). To understand Plum Island’s budget, we also interviewed analysts and officials at the agencies and at the White House Office of Management and Budget, which developed and oversaw the DHS budget during the creation of the agency. In addition, we analyzed agency budget documents for fiscal years 2002 through 2006 to identify changes in funding levels before and after the transfer of Plum Island and to determine the funding allocations among the programs at Plum Island. We also conducted structured interviews in person or via telephone with recognized nongovernment experts from academic and other research organizations that we chose for their diverse perspectives and technical expertise on animal health and diseases. In particular, we sought to obtain their comments on research and diagnostic priorities at Plum Island. 
We based our initial selection of experts on a list of stakeholders invited to participate in the ARS’s National Program Review Workshop, which met on September 20-21, 2005, in Kansas City, Missouri, to provide feedback on ARS priorities and national research programs. From the list of workshop participants, we identified 13 stakeholders who do not work at Plum Island and who study foreign animal diseases or serve as members in organizations that address foreign animal diseases. This list included some recognized experts who have served on reputable committees assessing the threats of animal diseases, including the White House Office of Science and Technology Policy Blue Ribbon Panel on the Threat of Biological Terrorism Directed Against Livestock. We identified an additional two contacts through referrals from these stakeholders. From these 15 contacts, we selected the final 11 experts on the basis of the following criteria: (1) recommendations we received from others knowledgeable in the field of foreign animal diseases; (2) area of expertise and experience; and (3) type of organization represented, including academic institutions and associated research centers. To examine the long-term objectives of joint activities at Plum Island, we analyzed agency planning documents and interviewed senior leadership officials representing DHS and USDA. We also discussed with DHS and USDA officials the status and possible outcomes of a DHS feasibility study to upgrade the Plum Island Animal Disease Center. We conducted our review from March 2005 to December 2005 in accordance with generally accepted government auditing standards.

Roger Breeze, Ph.D., M.R.C.V.S. Chief Executive Officer, Centaur Science Group, Washington, D.C. Former Director, Plum Island Animal Disease Center.
Corrie Brown, Ph.D., D.V.M. Professor and Coordinator of International Activities, Department of Veterinary Medicine, University of Georgia, Athens, Georgia.
Neville Clarke, Ph.D., D.V.M. Director, National Center for Foreign Animal and Zoonotic Disease Defense, College Station, Texas.
Peter Cowen, Ph.D., D.V.M., M.P.V.M. Associate Professor of Epidemiology and Public Health, Department of Population Health and Pathobiology, College of Veterinary Medicine, North Carolina State University, Raleigh, North Carolina.
Linda L. Logan, Ph.D., D.V.M. USDA APHIS Attache serving North Africa, East Africa, the Middle East and the Near East, Cairo, Egypt.
Peter W. Mason, Ph.D. Professor of Pathology, Professor of Microbiology and Immunology; Senior Scientist, Sealy Center for Vaccine Development; member, Center for Biodefense and Emerging Infectious Diseases, University of Texas Medical Branch, Galveston, Texas.
James A. Roth, Ph.D., D.V.M. Distinguished Professor of Immunology; Assistant Dean, International Programs and Public Policy; and Director, Center for Food Security and Public Health, College of Veterinary Medicine, Iowa State University, Ames, Iowa.
M.D. Salman, Ph.D., M.P.V.M., D.A.C.V.P.M., F.A.C.E. Professor and Director of Animal Population Health Institute, College of Veterinary Medicine and Biomedical Sciences, Colorado State University, Fort Collins, Colorado.
Mark C. Thurmond, Ph.D., D.V.M. Professor, Department of Medicine and Epidemiology, University of California, Davis, California.
Alfonso Torres, Ph.D., D.V.M. Executive Director, New York State Animal Health Diagnostic Laboratory, and Associate Dean for Veterinary Public Policy, College of Veterinary Medicine, Cornell University, Ithaca, New York.
David H. Zeman, Ph.D., D.V.M. Department Head, Veterinary Science Department; Director, Animal Disease Research and Diagnostic Laboratory; and Director, Olson Biochemistry Laboratories, South Dakota State University, Brookings, South Dakota.

We also sought the perspective of agricultural producers: Gary Weber, Ph.D. Executive Director, Regulatory Affairs, National Cattlemen’s Beef Association, Washington, D.C.; and National Pork Board. 
The table below presents information about key aspects of animal diseases that can affect livestock mentioned in the report, including the animals affected, transmission route, and vaccine availability.

The following are GAO’s comments on the Department of Homeland Security’s letter dated November 22, 2005. 1. Regarding DHS’s comment that the scope of its research program is not limited to FMD, our report notes that the DHS-funded Center of Excellence has plans to develop a vaccine for Rift Valley fever. In addition, we have modified the report to include a statement that DHS funds are being allocated to the development of a vaccine for Rift Valley fever in fiscal year 2006. 2. Regarding DHS’s assertion that its mission includes enhancing protection against major disease outbreaks, our report states that DHS’s mission to protect agriculture includes responsibilities to address introductions of high consequence foreign animal diseases that could be either deliberately or accidentally introduced. However, we continue to believe that DHS’s mission to protect agriculture is more oriented toward intentional attacks on agriculture. First, the Homeland Security Act of 2002 states that DHS’s primary mission is to prevent terrorist attacks within the United States. Second, the information DHS provided about its role at Plum Island has emphasized deliberate introductions. For example, the Joint Strategy emphasizes the bioterrorism focus of DHS work at Plum Island in describing the agency’s mission “to conduct, stimulate, and enable research and development to prevent or mitigate the effects of catastrophic terrorism.” The Joint Strategy also states that DHS will “focus on identified research and development gaps specifically targeted to strengthen the nation’s ability to anticipate, prevent, respond to, and recover from the intentional introduction of a high consequence foreign animal disease.” 3. 
Although DHS said that the Board of Directors meetings included a discussion of what work could be conducted off the island, USDA officials disagree with this statement. Furthermore, while we understand that the Board of Directors has met on several occasions, we do not have evidence that a discussion about maximizing space resources occurred at these meetings. We also have not seen an outcome of discussions regarding shifting work to other institutions. 4. Regarding DHS’s comment that the Senior Leadership Group has instituted a room reservation system that takes into consideration work that can be shifted elsewhere, our report states that the Senior Leadership Group has implemented a system to ensure efficient use of limited space at Plum Island. We have modified the report to note that in the case of limited space, the Senior Leadership Group would, as part of its review of the proposed projects, evaluate whether the work could be done at another location. However, as our report states, space is already limited at Plum Island, constraining research and diagnostic work that can be performed at the facility. We have not seen evidence that this group has formally evaluated the feasibility of shifting work from Plum Island to other research institutions in order to overcome resource constraints. 5. We are encouraged to hear that DHS is in the process of assessing the laboratory and animal room requirements for all three agencies at Plum Island for the next 6 years and, as part of this assessment, will address each agency’s options for performing activities off of the island through other facilities, contract research organizations, and the like. 
However, because the assessment has not been completed yet, and we have not seen evidence that DHS is conducting this review in conjunction with USDA, we continue to believe that the agencies have not identified opportunities to shift work that does not require the unique features of Plum Island to other institutions and research centers. The following are GAO’s comments on the U.S. Department of Agriculture’s letter dated November 30, 2005. 1. Regarding USDA’s comments about ARS’s continued focus on classical swine fever and its advances in developing a marker vaccine for this disease, our report notes that this disease is a high priority. We modified the report to include USDA’s view that while ARS has had to reduce efforts on classical swine fever due to budget reductions, it has made significant advances toward the development of a marker vaccine for classical swine fever. 2. Regarding USDA’s comments about the value of working on vesicular stomatitis virus at Plum Island, our report summarizes the conflicting views of experts regarding the need for such work at Plum Island. We have modified the report to summarize why USDA believes it is important to maintain research on vesicular stomatitis virus at Plum Island. 3. Regarding USDA’s comment on the transfer of programmatic funds from ARS and APHIS to DHS for a related but distinct area of work, our report states that after the transfer, there have been increased demands for the facility’s limited space and resources related to research and diagnostic activities. Our conclusions summarize the challenges the agencies face in meeting research and diagnostic needs with available resources, and form the basis of our recommendation that DHS’s Science and Technology Directorate work with USDA’s ARS and APHIS to pursue opportunities to make more effective use of Plum Island’s limited laboratory space. 4. 
Regarding USDA’s comments on the recommendation to pursue opportunities to shift work that does not require the unique features of Plum Island to other institutions and research centers, we recognize that not all such work may be relocated or easily removed. For example, as our report notes, any work involving a live FMD agent would have to be conducted at Plum Island. Furthermore, the report states that Plum Island is the only facility that has special safety features required to study certain high consequence foreign animal diseases in large animals. However, we continue to believe that there are opportunities to shift work to other institutions. For example, experts identified work that could be done outside of Plum Island, such as developing vaccines without using the live form of the agents. This work is important in order to remain prepared to respond to outbreaks of various foreign animal diseases. 5. Regarding USDA’s comment on modeling, we modified our report to clarify that modeling activity does not occur in containment. In addition to the contact named above, Maria Cristina Gobin (Assistant Director), Kate Cardamone, Nancy Crothers, Mary Denigan-Macauley, Lynn Musser, Omari Norman, Joshua Smith, and Lisa Vojta made key contributions to this report. Sharon Caudle, Elizabeth Curda, Denise Fantone, Terry Horner, Katherine Raheb, Keith Rhodes, and Steve Rossman also made important contributions.
The livestock industry, which contributes over $100 billion annually to the national economy, is vulnerable to foreign animal diseases that, if introduced in the United States, could cause severe economic losses. To protect against such losses, critical research and diagnostic activities are conducted at the Plum Island Animal Disease Center in New York. The Department of Agriculture (USDA) was responsible for Plum Island until June 2003, when provisions of the Homeland Security Act of 2002 transferred the facility to the Department of Homeland Security (DHS). Under an interagency agreement, USDA continues to work on foreign animal diseases at the island. GAO examined (1) DHS and USDA coordination of research and diagnostic activities, (2) changes in research and diagnostic priorities since the transfer, and (3) long-term objectives of joint activities at Plum Island. DHS and USDA's coordination at Plum Island Animal Disease Center has been largely successful because of the agencies' early efforts to work together to bring structure to their interactions at the island. For example, prior to the transfer, officials from DHS and USDA worked in concert to develop a written interagency agreement--effective when the island was transferred to DHS--that coordinated management activities. Subsequently, DHS and USDA created a detailed strategy to guide their joint work on foreign animal disease research and diagnostics. According to this joint strategy, DHS's role is to augment the research and diagnostic work that USDA's Agricultural Research Service (ARS) and the Animal and Plant Health Inspection Service (APHIS) conduct at the island. Since the transfer, budget changes, in part, have modified overall priorities and the scope of work at the island. First, ARS narrowed its research priorities to focus its work primarily on a single foreign animal disease, foot-and-mouth disease (FMD). 
Traditionally one of the high-priority diseases studied at Plum Island, FMD has emerged as its top research priority because, according to officials, it poses the greatest threat of introduction because of its virulence, infectivity, and availability. Other research programs have been terminated or are proceeding at a slower pace. National experts we consulted confirmed the importance of studying FMD, but stated that it is also important to study a variety of other diseases to remain prepared. They suggested that, to free up limited space at the facility, some of the work that does not require the unique features of Plum Island could be performed elsewhere: for example, work that does not involve the use of a live virus, such as certain aspects of vaccine development. Second, while APHIS's overall priorities have not changed, diagnostic work has been curtailed. Officials said that, after the transfer, because the agency did not receive an expected budget increase, their plans to expand development of diagnostic tools for high-priority diseases were curtailed. This work is vital to rapidly identifying diseases when outbreaks occur. APHIS officials told us that the funds to support work on diagnostic tools remain insufficient. Finally, DHS has assumed responsibility for operations and maintenance at Plum Island and has established an applied research science and agricultural forensics team. While DHS and USDA plan to continue to work together on FMD, agency officials told us that it is not prudent to speculate on long-term objectives at Plum Island, in part, because DHS has plans to replace the Plum Island Animal Disease Center with a new, modernized facility that could be located at Plum Island or elsewhere. Pending congressional approval, DHS estimates that the new facility will be fully operational by 2012.
USDA’s Food and Consumer Service (FCS) administers WIC through federal grants to states for supplemental foods, health care referrals, and nutrition education. To qualify, WIC applicants must show evidence of health or nutritional risk that is medically verified by a health professional. In addition, participants must have incomes at or below 185 percent of the poverty level. In 1997, for example, WIC’s annual income limit for a family of four is $29,693 in the 48 contiguous states and the District of Columbia. WIC operates in the 50 states, at 33 Indian tribal organizations, and in the District of Columbia, Guam, the U.S. Virgin Islands, American Samoa, and the Commonwealth of Puerto Rico. These 88 government entities administer the program through more than 1,800 local WIC agencies. These agencies are typically public or private nonprofit health or human services agencies; they can also be an Indian Health Service Unit, a tribe, or an intertribal council. Local WIC agencies serve participants through the clinics located in their service area. Most WIC food benefits are provided to participants through vouchers or checks that can be issued every 1, 2, or 3 months. These vouchers allow participants to purchase a food package designed to supplement their diet. The foods they can purchase through WIC are high in protein, calcium, iron, and vitamins A and C; they include milk, juice, eggs, cereal, and, where appropriate, infant formula. The value of the food package varies by state and by the participants’ nutritional needs. The average value of the monthly food package in 1996 for all participants nationwide, excluding infant formula, was $43.54. Families with infants using formula obtained a package valued at about $82. WIC was established in 1972 by Public Law 92-433, which amended the Child Nutrition Act of 1966.
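The income-eligibility threshold described above is simple arithmetic: 185 percent of the federal poverty guideline. The sketch below, offered only as an illustration, reproduces the calculation in Python; the $16,050 guideline figure for a family of four in 1997 and the round-up rule are assumptions made here, not figures stated in this report.

```python
import math

# Assumed 1997 federal poverty guideline for a family of four in the
# 48 contiguous states (not stated in this report).
POVERTY_GUIDELINE_1997 = 16_050

def wic_income_limit(poverty_guideline: float) -> int:
    """WIC income eligibility: 185 percent of the poverty guideline.
    Rounding up to the next whole dollar is an assumption here."""
    return math.ceil(poverty_guideline * 1.85)

print(wic_income_limit(POVERTY_GUIDELINE_1997))  # 29693
```

Under these assumptions the result matches the $29,693 limit cited above ($16,050 × 1.85 = $29,692.50, rounded up).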
In 1989, the act was amended to require that state agencies improve access to WIC for working women by making changes that minimize the time they must spend away from work when obtaining WIC benefits. The directors of local WIC agencies generally estimated that working women represented between one-tenth and one-half of all those served in their clinics, although few agencies collect data on the number of working women. Nationwide, virtually all local WIC agencies have implemented strategies to increase the accessibility of their clinics for working women. The most frequently cited strategies—used by every agency—are scheduling appointments instead of taking participants on a first-come, first-served basis and allowing a person other than the participant (an alternate) to pick up the food vouchers. Other, less frequently cited strategies, which are still used by more than half of the agencies, are issuing vouchers for more than 1 month at a time, offering appointments during the lunch hour, expediting clinic visits, and mailing vouchers to participants. Fewer directors use strategies that extend clinic hours beyond the typical workday—Saturday, early morning, or evening hours—or locate clinics at participants’ work or day care sites. Figure 1 illustrates the frequency of use for 10 strategies. As shown in figure 1, each of the six strategies—scheduling appointments, using alternates, issuing multiple vouchers, offering lunch hour appointments, expediting clinic visits, and mailing vouchers to participants—is used by more than half of the local WIC agencies. More specifically: Scheduling appointments. All local WIC agencies offer participants the convenience of scheduling their appointments. Scheduling appointments reduces a participant’s waiting time at the clinic. Furthermore, Kansas state officials told us that they recommend that local WIC agencies schedule appointments for participants in order to make more efficient use of the agency staff’s time.
Using alternates. All local WIC agencies allow a person designated as an alternate to pick up food vouchers and nutrition information for the participant, thus reducing the number of visits to the clinic by working women. California state officials told us that they allow the use of alternates statewide and that many participants designate a relative or baby-sitter as an alternate. At one local WIC agency we visited in Pennsylvania, officials told us that alternates, such as grandmothers who provide care during the day, can benefit from the nutrition education because they may be more familiar with the children’s eating habits than the parents. Issuing vouchers for multiple months. Almost 90 percent of local WIC agencies issue food vouchers for 2 or 3 months. California state officials said that issuing vouchers every 2 months to participants who are not at medical risk reduces the number of visits to the clinic. Offering lunch hour appointments. Three-fourths of local WIC agencies had some provision for lunch hour appointments. All of the local agencies we visited in California operate at least one clinic in their service area during the lunch hour, which allows some working women to take care of their WIC visit during their lunch break. Expediting clinic visits. Two-thirds of local WIC agencies took some action to expedite clinic visits for working women to minimize the time they must spend away from work. For example, a local agency official in New York State stated that the agency allows women who must return to work to go ahead of others in the clinic. The director of a local agency in Pennsylvania told us the agency allows working women to send in required paperwork before they visit, thereby reducing the time spent at the clinic. The Kansas state WIC agency generally requires women to participate in the program in the county where they live, but it will allow working women to participate in the county where they work when it is more convenient for them. 
Finally, one local agency in Texas remodeled its facilities to include play areas where children could be entertained during appointments. Not having to spend time minding their children decreases the amount of time that women need for visits. Mailing vouchers. About 60 percent of the local WIC agencies, under special circumstances, mail food vouchers to participants. Mailing vouchers eliminates the need for a visit to the clinic. Officials at all of the state agencies we visited allow vouchers to be mailed but are generally very cautious in using this strategy. Both state and local agency directors told us that mailing vouchers eliminates the personal contact and nutrition information components of the program. One local agency director in Pennsylvania told us that she mailed vouchers to rural participants during a snowstorm when the agency van could not get to scheduled locations. Three of the four less frequently used strategies shown in figure 1—Saturday, early morning, and evening hours—increase clinic hours beyond the regular workday. The fourth strategy—selecting clinic locations because they are at participants’ work sites or day care providers—is the strategy least frequently cited. More specifically: Expanding clinic hours—Saturday, early morning, and evening hours. Offering extended hours of operation beyond the routine workday is an infrequently used strategy. About one-fifth of the local WIC agencies offer early morning hours—before 8 a.m.—at least once a week, and about one-tenth offer clinic hours on Saturdays at least once a month. Just under half of the agencies are open during evening hours—after 6 p.m.—once a week. At least one-fourth of the participants do not have access to any clinic hours outside the regular workday. The directors of local WIC agencies offered a variety of reasons for not offering extended hours of operation. For example, about 8 percent of these agencies had previously offered Saturday hours. 
Directors for several agencies said that they had discontinued this practice because participation was not high enough to warrant remaining open on Saturdays. Other reasons cited were an insufficient number of staff to allow for expanded clinic hours (79 percent), the staff’s resistance to working hours other than the routine workday (67 percent), and a lack of security in the area after dark (42 percent). For example, at one agency we were told about two recent homicides after dark near one of the clinics. This clinic limits evening hours to one evening each month, and at closing time, the staff exit together to the parking lot across the street. In addition, in two states we visited, the clinic staff do not have access to their statewide computer system in the evenings or on Saturdays, which reduces efficiency in processing paperwork and discourages operating during extended hours. Clinic locations. About 5 percent of local WIC agencies selected a location for one or more of their clinics because it is at or near a work site. For example, one Texas agency operates a clinic twice a month at a poultry farm in an area where several such farms employ women who are WIC participants. In California, two local WIC agencies we visited have clinics at nearby military bases. One has a clinic at an Air Force base, and the other has six clinics at various installations—two at Marine bases and four at Navy locations. Similarly, about 5 percent of local WIC agencies selected clinic locations because they are day care sites for participants. For example, according to a director of a local WIC agency in Texas, she operates a clinic once a month at a day care site used by 71 women who participate in WIC. Operating a clinic at this location is a convenience for the participants. 
About 76 percent of the directors of local WIC agencies believed that accessibility to their clinics is at least moderately easy for working women, as measured by such factors as convenient hours of operation and reasonable waiting time at the clinics. However, about 9 percent of the directors believed that accessibility is still a problem for working women. Figure 2 shows the directors’ rating of their clinics for accessibility. Despite the widespread use of strategies to increase accessibility, some directors reported that accessibility is still problematic for working women. In our discussions with these directors, the most frequently cited reason for rating accessibility as moderately difficult or very difficult is the inability to operate during the evening or on Saturday. As previously noted, directors provided several reasons for not offering extended hours, including the lack of staff, staff’s resistance to working schedules beyond the routine workday, or the perceived lack of safety in the area around the clinic after dark. While about 76 percent of the directors of local WIC agencies perceived that access to their clinics is easy at current participation levels, this situation could change with increases in WIC participation overall, as well as with increases in participation by working women—a situation anticipated by many directors. About 58 percent of the directors indicated that they expect participation by working women to increase with the implementation of welfare reform. These expectations have already been realized in some states. Directors of local WIC agencies in Tennessee and Indiana reported that their states have already implemented some aspects of welfare reform and that the number of working women participating in WIC has increased. 
Federal, state, and local WIC officials explained that overall participation in WIC is likely to grow with the implementation of welfare reform because the perceived value of WIC benefits will increase as benefits from other assistance programs are lost. Moreover, the percentage of working women in WIC is likely to increase because welfare initiatives place a premium on moving the beneficiaries of these programs into the workforce. Increases in WIC participation could burden staff and space resources and hinder some agencies’ ability to continue to provide easy access to their clinics. In fact, many directors who rated access to their clinics as generally difficult cited a current lack of resources—staff and space—as the primary reason. Other local WIC agency directors reported similar staff and space constraints, noting that they were already working at full capacity and that one or more of their clinics had no room to accommodate more participants. For example, one director told us that his clinic was “already bulging at the seams” and that increases in participation would leave the clinic critically short of staff and space. Such shortages could limit working women’s access to WIC clinics. Women’s perceptions about WIC—such as the value of the program’s benefits to them as their income rises or the perceived stigma attached to obtaining benefits—were the limitations to participation most frequently cited by the directors of local WIC agencies. Another major factor limiting participation is that women may not be aware of their continued eligibility for WIC if they begin working while participating or if they are working and have not participated in WIC. Less frequently cited factors limiting participation in WIC include difficulties in reaching the clinic and long waits at the clinic. 
The directors of the local WIC agencies indicated that working women’s views of the WIC program may limit their participation, despite the agency’s efforts to make the program more accessible to them. Sixty-five percent of the directors considered working women’s loss of interest in WIC benefits as their income rises to be a significant factor limiting participation. For example, one agency director reported that women gain a sense of pride when their income rises and they no longer want to participate in the program. While working women may choose not to participate in WIC as their income increases, one local agency director noted that the eligible working women and their families who drop out of the program lose the benefit of nutrition information. The stigma some women associate with WIC—how they appear to their friends and co-workers as recipients—is another significant factor limiting participation, according to about 57 percent of the local agency directors. One director said that when women go to work, they tend to change the way they view themselves—from thinking that they need assistance to thinking that they can support themselves. Another director told us that when her clinic was located in the county building, women were reluctant to come in because they were recognized as WIC recipients by county employees working elsewhere in the building. Another aspect of the perceived stigma associated with participating in WIC is sometimes referred to as the “grocery store experience.” The use of WIC vouchers to purchase food in grocery stores can cause confusion and delays for both the participant-shopper and the store clerk at the check-out counter and result in unwanted attention. For example, the directors of two local WIC agencies in Texas said that the state’s policy requiring participants to buy the lowest-priced WIC-approved items in the store contributes to the stigma, which limits participation.
In Texas, a participant must compare the cost of WIC-approved items, considering such things as weekly store specials and cost per ounce, in order to purchase the lowest-priced items. Texas state WIC officials told us that this policy maximizes the food dollar, thus allowing benefits for a greater number of participants. Another director told us that a pilot project in which WIC-approved foods are purchased using a card that looks like a credit card could help reduce the stigma associated with shopping in the grocery store. The WIC card retains information on unused benefits and can be used at the check-out counter like an ordinary credit card. More than half of the directors indicated that a major factor limiting participation is that working women are not aware that they are eligible to participate in WIC. Local agency officials we spoke to in both California and Texas confirmed that many working women do not realize that they can work and still receive WIC benefits. Furthermore, these officials said that WIC participants who were not working when they entered the program but who later go to work often assume that they are no longer eligible for WIC and drop out. Other factors limiting WIC participation were difficulty in reaching the clinic, long waits at the clinic, or the lack of service during the lunch hour. For example, 41 percent of the directors of local WIC agencies indicated that difficulty in reaching the clinic—the unavailability or inadequacy of public transportation—was a limiting factor. Eighteen percent of the directors reported long waits as a limiting factor. About 7 percent reported that clinics not being open during the lunch hour was a factor limiting participation—not surprising since more than three-fourths of all agencies offer lunch hour appointments in at least one of their clinics. We provided a copy of a draft of this report to the USDA for review and comment. 
We met with Food and Consumer Service officials, including the Acting Director for the Supplemental Food Program Division, Special Nutrition Programs. The Service concurred with the accuracy of the report and provided several minor clarifications, which we incorporated as appropriate. To examine the accessibility of WIC for working women and the factors limiting their participation, we conducted a mail survey of 375 directors of local WIC agencies, visited 18 clinics in four states, and met with USDA headquarters officials and state agency officials responsible for WIC. We conducted our review from March through September 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairman, Senate Committee on Agriculture, Nutrition, and Forestry; the Chairman, House Committee on Agriculture; and the Secretary of Agriculture. We will also make copies available to others upon request. If you have any questions about this report, please contact me at (202) 512-5138. Major contributors to this report are listed in appendix IV. We conducted our review to obtain information on the extent to which the benefits of the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) are accessible for eligible working women and their children. Specifically, we (1) identified actions taken by local WIC agencies to increase access to WIC benefits for working women; (2) obtained agency directors’ assessment of their clinics’ accessibility; and (3) identified factors limiting participation in the program. We conducted a mail survey of 375 randomly selected local WIC agencies from a nationwide list of 1,816 local agencies provided to us by the U.S. Department of Agriculture’s (USDA) Food and Consumer Service (FCS). 
The survey asked the directors of the local agencies to provide information on (1) the strategies they have implemented to increase the accessibility of their clinics, (2) their views on the overall accessibility of their clinics for working women, and (3) factors that limit participation by working women. In addition, we asked directors to provide descriptive information on their agency, such as the number of clinics and participants. (See app. III for a complete list of questions.) We used the survey responses to develop overall results that are representative of those that would be obtained from all local agencies nationwide. For an explanation of the survey results and how they can be used, see appendix II. Appendix III presents the aggregated responses to our survey. To better understand the problems and limitations affecting working women’s access to WIC benefits, we visited local WIC agencies and interviewed agency staff in several states. We judgmentally selected the sites visited to obtain states and agencies with high levels of participation and WIC funding and to provide geographic diversity. In addition, we discussed the selection of local WIC agencies with state agency officials, who identified unique agency features for consideration in selection, such as rapid growth in participation or migrant workers’ participation. Table I.1 lists the local WIC agencies that we visited, among them Community Medical Centers, Inc.; Planned Parenthood, Orange & San Bernardino Counties; Public Health Foundation Enterprises, Inc.; Santa Barbara County Health Care Services; and Community Progress Council, Inc. In addition, we interviewed state agency officials and FCS headquarters and regional officials to obtain information on overall program operations, policies, and guidance. We provided a draft copy of this report to FCS for review and comment. We performed our work from March through September 1997 in accordance with generally accepted government auditing standards.
In developing the questionnaire for our mail survey, we conducted 12 pretests with directors of local WIC agencies in four states, the District of Columbia, and one Indian tribal organization. Each pretest consisted of a visit to a local WIC agency by two GAO staff, except for a pretest by telephone with one director. During these visits, we attempted to simulate the actual survey experience by asking the local agency director to fill out the survey. We interviewed the director to ensure that (1) the questions were readable and clear, (2) the terms were precise, (3) the survey did not place an undue burden on local agency directors, and (4) the survey appeared to be independent and unbiased in its point of view. We also obtained reviews of our survey from managers at FCS. In order to maximize the response to our survey, we mailed a pre-notification letter to respondents 1 week before we mailed the survey. We also sent (1) a reminder postcard 1 week after the survey, (2) a reminder letter to nonrespondents 2 weeks after the survey, and (3) a replacement survey for those who had not responded 31 days after the survey. We received survey responses from 350 of the 375 local agencies in our sample. This gave us a response rate of 93 percent. After reviewing these survey responses, we contacted agencies by phone to clarify answers for selected questions. Since we used a sample (called a probability sample) of 375 of the 1,816 local WIC agencies to develop our estimates, each estimate has a measurable precision, or sampling error, which may be expressed as a plus/minus figure. A sampling error indicates how closely we can reproduce from a sample the results that we would obtain if we were to take a complete count of the universe using the same measurement methods. By adding the sampling error to and subtracting it from the estimate, we can develop upper and lower bounds for each estimate. This range is called a confidence interval. 
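The confidence-interval construction described above can be sketched numerically. The function below computes a 95 percent sampling error for a percentage estimated from a simple random sample, with a finite-population correction for drawing respondents from a universe of 1,816 agencies. This is a standard textbook formula offered purely for illustration; it is not necessarily GAO's exact computation, and the report's ratio estimates (such as the participant-weighted 24 percent figure) require a different variance formula with a larger error.

```python
import math

def sampling_error(p, n, N, z=1.96):
    """95 percent sampling error (margin of error) for a proportion p
    estimated from a simple random sample of n drawn from a universe
    of N, with finite-population correction. Illustrative textbook
    formula, not necessarily the method used in the survey."""
    se = math.sqrt(p * (1 - p) / n) * math.sqrt((N - n) / (N - 1))
    return z * se

# Illustrative values: 350 responses from a universe of 1,816 agencies.
moe = sampling_error(0.5, 350, 1816)
lower, upper = 0.5 - moe, 0.5 + moe
print(f"50% estimate: +/- {moe:.1%} (95% CI {lower:.0%} to {upper:.0%})")
# prints: 50% estimate: +/- 4.7% (95% CI 45% to 55%)
```

Note that the sampling error is largest at an estimate of 50 percent and shrinks as the estimate moves toward 0 or 100 percent, which is why tables of sampling errors for survey percentages typically report their maximum values at 50 percent.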
Sampling errors and confidence intervals are stated at a certain confidence level—in this case, 95 percent. For example, a confidence interval, at the 95-percent confidence level, means that in 95 out of 100 instances, the sampling procedure we used would produce a confidence interval containing the universe value we are estimating. Table II.1 lists the sampling errors for selected percentages. In addition to the sampling errors reported above, one of our analyses required a ratio estimate in order to calculate sampling errors. We report that 24 percent of participants nationwide are served by local agencies that have no regular hours beyond the hours of 8 a.m. to 6 p.m., that is, participants have no access to Saturday, evening, or early morning hours. The sampling error associated with this estimate is 8 percent. Therefore, our estimate of 24 percent ranges between 16 and 32 percent, using a 95-percent confidence level. In estimating the number of participants without access to hours beyond the routine workday, we made conservative assumptions that lowered the estimate. For example, if an agency had five clinics and only one with extended hours, we assumed that all of the agency’s participants had access to the extended hours, even though this clinic does not serve all of the participants. Since we did not collect data on the number of participants at each clinic, we cannot determine the extent to which our estimates might be affected by these conservative assumptions. Robert E. Robertson, Associate Director Judy K. Hoovler, Evaluator-in-Charge D. Patrick Dunphy Fran A. Featherston Renee McGhee-Lenart Carol Herrnstadt Shulman Sheldon H. Wood, Jr.
Pursuant to a congressional request, GAO provided information on the extent to which Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) program benefits are accessible to eligible working women, focusing on: (1) the actions taken by local WIC agencies to increase access to WIC benefits for working women; (2) local WIC agency directors' opinions on the accessibility of their clinics; and (3) factors that limit program participation. GAO noted that: (1) the directors of local WIC agencies have taken a variety of steps to improve access to WIC benefits for working women; (2) the two most frequently cited strategies are: (a) scheduling appointments instead of taking participants on a first-come, first-served basis; and (b) allowing a person other than the participant to pick up the food vouchers or checks, as well as nutrition information, and to pass these benefits on to the participant; (3) these strategies focus on reducing the amount of time at, or the number of visits to, the clinic; (4) although three-fourths of the local WIC agencies offer appointments during the lunch hour, only about one-tenth offer Saturday appointments, about one-fifth offer early morning appointments, and less than half offer evening appointments; (5) collectively, at least one-fourth of the participants do not have access to any clinic hours outside of the regular work day; (6) 76 percent of the directors of local WIC agencies believed that their clinics are reasonably accessible for working women; (7) in reaching this conclusion, the directors considered their hours of operation, the amount of time that participants wait for service, and the ease with which participants are able to get appointments at the desired time; (8) although most directors were generally satisfied with their clinics' accessibility and had made changes to improve access, 9 percent of the directors still rated accessibility as a problem; (9) 14 percent of the directors rated
accessibility as neither easy nor difficult, and 1 percent responded that they are uncertain; (10) the directors of local WIC agencies identified several factors that limit WIC participation by working women; (11) the factors most frequently cited reflected the directors' perceptions of how women view the program; (12) specifically, the directors told GAO that women do not participate because they: (a) lose interest in the program as their income increases; (b) perceive a stigma attached to receiving WIC benefits; or (c) see the program as limited to those who do not work; and (13) directors less frequently identified other factors--such as the lack of adequate public transportation and long waits at clinics--as also limiting WIC participation by working women.
DOD defines “common military training” as non-occupational, directed training that sustains readiness, provides common knowledge, enhances awareness, reinforces expected behavioral standards or obligations, and establishes a functional baseline that improves the effectiveness of DOD and its constituent organizations. Common military training is required for all servicemembers. DOD Instruction 1322.31, Common Military Training (CMT), identifies 11 common military training requirements. Legislation, executive orders, and DOD guidance (directives or instructions) establish these 11 requirements. We use the term “common military training” to refer to the 11 requirements identified in DOD Instruction 1322.31. See appendix II for a list of the 11 common military training requirements. Each of the military services may require additional individual training—for example, training for chemical, biological, radiological, and nuclear defense; marksmanship qualification; and physical fitness—that is bundled with common military training. For example, the Army provides “mandatory training,” which is required for all Army soldiers regardless of component (unless otherwise noted), branch or career field, or rank or grade. Similarly, the Marine Corps requires “annual training,” which is required for Marines regardless of military occupational specialty, rank or grade, or component, unless otherwise exempted or waived. The Navy conducts “general military training,” which applies to all uniformed active and reserve component Navy personnel. Finally, the Air Force conducts “ancillary training,” which is universal training, guidance, or instruction, regardless of specialty. Common military training makes up a portion of mandatory training requirements that all DOD personnel must complete. For example, the Navy estimated that common military training comprises 66 percent, on average, of the time spent on mandatory training requirements.
The Air Force estimated in 2016 that common military training comprises 38 percent of the time dedicated to mandatory training requirements. The Army, Marine Corps, Navy, and Air Force each have about 19 mandatory training requirements. Common military training comprises more than half of these mandatory training requirements for most of the military services. See appendix III for a list of common military training and mandatory training requirements. Each common military training topic has a lead proponent. DOD defines a common military training lead proponent as the Office of the Secretary of Defense or DOD component, agency, or office responsible for the oversight, management, administration, and implementation of a specific common military training core curriculum. Common military training lead proponents provide policy on training topics; the military services provide and execute the training. For example, the Office of the DOD Chief Information Officer is the lead proponent for Cybersecurity. DOD and the military services have made efforts to review and validate the need for the current common military training requirements. DOD, for example, established the Common Military Training Working Group in February 2015 to, among other things, review and validate common military training. DOD Instruction 1322.31 requires the Common Military Training Working Group to review and validate common military training requirements periodically. The Acting Under Secretary of Defense for Personnel and Readiness signed the Common Military Training Working Group Charter in December 2016. According to an Office of the Deputy for Force Training official, the working group held its first organizational meeting in January 2017 and a second meeting in February 2017 at the Advanced Distributed Learning Office, at which it received a briefing on its learning science and technology portfolio. 
The working group’s charter states that it will review common military training requirements for validity. The charter further states that the working group’s goal is to combine, reduce, and eliminate redundant or obsolete common military training. According to an Office of the Deputy for Force Training official, validation would include a review of the existing legislation, executive orders, and DOD policies and guidance that establish common military training requirements for the military services. As of March 2017, the working group had not yet begun to review and validate training, according to an official in the Office of the Deputy for Force Training. However, according to that official, the office is in the process of developing future working group meeting agendas to discuss topics such as validating training requirements. The official said that the working group would need to begin reviewing and validating the antiterrorism training topic because the office believes that it is no longer statutorily required. In addition, our review of the working group’s initial plans to develop meeting agendas and to review and validate the antiterrorism training requirements indicates that some actions to review common military training may be forthcoming. In addition to participating in the Common Military Training Working Group, some of the military services have made efforts to review and validate common military training. Although DOD Instruction 1322.31 does not require the services to independently review and validate common military training core curriculums, some military service officials we interviewed indicated that common military training requirements are generally accepted as validated requirements because they appear in DOD guidance. Each service has published guidance that contains information on what steps it employs to review and validate mandatory training requirements.
Service guidance also contains information on the offices, committees, or steering groups that play a key role in reviewing and validating mandatory training requirements. Table 1 below shows the services’ published guidance containing the requirements to review and validate mandatory training, which also includes common military training requirements. According to officials, the Navy and Marine Corps annually review and validate mandatory training requirements. A Navy official in the Office of the Chief of Naval Operations told us that the Chief of Naval Operations must determine, validate, and assign annual Navy-wide mandatory training requirements. The official said that the annual review process for validating mandatory training requirements passes through several administrative levels—including action officer working groups and a flag level officer board that meets quarterly to discuss training issues and recommend improvements—to shape training for the next fiscal year. In July 2016, Navy officials published information on the results of their review and validation of mandatory training requirements for fiscal year 2017. According to a Marine Corps official in the Training and Education Command, the office, in collaboration with the Commanding General, Training and Education Command, is responsible for reviewing and validating annual training requirements. The official said that Marine Corps Bulletin 1500, which is the Marine Corps’ guidance for annual training and education requirements, serves as the annual validation for mandatory training. The most recent edition of Marine Corps Bulletin 1500 was published on September 8, 2016, and contains an approved list of mandatory training, including common military training requirements. 
According to an official working for the Deputy for the Collective Training Division, Directorate of Training, Headquarters, Department of the Army (G-3/5/7), mandatory training requirements are reviewed and validated biennially or as directed by the Deputy Chief of Staff (G-3/5/7). The Training General Officer Steering Committee provides an enterprise-wide vetting of training requirements and recommendations to the Deputy Chief of Staff (G-3/5/7). The official said that the Deputy Chief of Staff (G-3/5/7) approves and publishes mandatory training requirements. The list of mandatory training requirements is published in Army Regulation 350-1. Finally, according to Air Force officials, the Air Force reviewed and validated existing mandatory training requirements during its October 2016 training review. The Air Force Learning Committee meets annually to review new mandatory training requirements, and Air Force guidance states that the Air Force Learning Division monitors the overall training footprint for that service’s total force. According to an official in the Office of the Deputy Chief of Staff for Personnel, the Air Force reviews the service’s common military training courses to ensure that they are meeting DOD requirements. DOD and the military services have actions planned to evaluate common military training. DOD directed the Common Military Training Working Group to evaluate the effectiveness of common military training in February 2015. Specifically, DOD Instruction 1322.31 calls for the working group to periodically evaluate common military training for effectiveness, among other things, and DOD Directive 1322.18 states that it is DOD’s policy to assess military training throughout the department. The Common Military Training Working Group charter directs the group to review common military training requirements for effectiveness. However, as of March 2017, the group had not yet begun to evaluate training.
A former official in the Office of the Deputy for Force Training said that evaluation of training was an important but difficult task, and discussed two approaches that he intended the working group to consider to evaluate whether training is effective: (1) measuring whether individuals have completed training; and (2) assessing the outcome of training from the trainer’s perspective. We found that some military service boards and committees have made independent efforts to assess the effectiveness of their respective mandatory military training courses, including common military training. For example, in 2015 the Army Mandatory Training Task Force evaluated the accessibility and effectiveness of current training materials. The charter of the Navy Planning Board for Training calls for it to review the impact of the annual requirements. Air Force Instruction 36-2201 directs the Air Force Learning Committee to monitor the mandatory training impact and improve the focus, currency, and relevancy of its curriculums and training. According to Navy officials, the Navy Planning Board for Training completed a review of the Command Indoctrination Program for fiscal year 2015, which led to a recommendation to eliminate six training topics: Navy Right Spirit Campaign and Alcohol Awareness, Suicide Awareness, Personal Financial Management, Operational Risk Management, Prevention of Sexual Harassment and Sexual Assault, and Antiterrorism and Force Protection. According to Navy officials, these topics were redundant under the Command Indoctrination Program and were already required as annual training by most Navy commands. Some of the 11 common military training proponents have also made independent efforts to assess the effectiveness of their respective courses. 
Officials from 6 proponents with whom we spoke stated that they had previously made efforts to assess the effectiveness of their mandatory training requirements; officials from 1 proponent stated that they would conduct an assessment in the future; and officials from the remaining 4 stated that they had not evaluated training. For example, the Sexual Assault Prevention and Response Office conducted surveys in 2010, 2012, 2014, and 2016 to assess the effectiveness of the sexual assault and sexual harassment training received by servicemembers, according to an official from that office. The Defense Suicide Prevention Office states in its strategic plan that it will evaluate the efficacy of suicide prevention programs. The DOD Strategy for Suicide Prevention states that DOD will use evidence-based training curriculums and periodically review, evaluate, and update these curriculums. Other proponents have taken steps to assess the amount of knowledge that individuals gain from training in order to make adjustments as needed to the training courses offered. For example, the Combating Trafficking in Persons training contains a survey at the end of the computer-based version of the course. A proponent official said that the results of the survey data are used to make updates to training based on participant feedback. In addition, according to an official in the Defense Human Resources Activity, the Status of Forces Survey of Active Duty Members is another source used for assessing and updating the Combating Trafficking in Persons training. The military services offer varying degrees of flexibility for providing course delivery methods that allow individuals to complete mandatory training requirements, including common military training, according to guidance we reviewed and servicemembers’ perspectives we obtained. 
DOD Instruction 1322.31 requires the secretaries of the military departments to work with the appropriate common military training lead proponents, the Chairman of the Joint Chiefs of Staff, and appropriate DOD and component leads to optimize available training time and increase training and education delivery flexibility, share best practices to effectively educate and train servicemembers, and standardize the common military training core curriculum to reduce the burden on each military service. The DOD Instruction does not state which method of delivery the military services must use to complete training requirements. For example, according to an official in the Office of the Deputy Chief of Staff (G-3/5/7), current policy states that all mandatory training requirements must have alternative methods of delivery that do not rely solely on on-line, computer-based delivery. Some services’ guidance provides instruction on course delivery methods that individuals could use and commanders could apply at their discretion to complete mandatory training requirements. For example, Marine Corps Bulletin 1500 cites the Marine Corps’ distance learning system and commander-led unit training as delivery methods that may be considered. According to OPNAV Instruction 1500.22H, the Navy offers command-discretion training in which commanders have multiple options for topic delivery, such as locally generated or standardized training products, and, in cases of complete discretionary training, local commanders may determine when and how training is provided. Furthermore, according to Air Force Instruction 36-2201, training may be accomplished through a variety of methods, including formal courses, mass briefings, advanced distributed learning, and one-on-one instruction. Servicemembers with whom we spoke held a range of differing opinions about training flexibilities and course delivery methods offered by their respective services. 
The text boxes below contain a series of selected comments from servicemembers with whom we spoke who provided perspectives on their experiences with various aspects of training. The comments reflect opinions from servicemembers in 12 active units who have been deployed in the past 5 years, from across the services. Some military service officials told us that they prefer computer-based training for some topics because it allows individuals to complete requirements in less time than classroom courses, which may require several hours of instruction. As shown in the text box below, military personnel we interviewed identified some advantages and disadvantages to computer-based training for servicemembers. Additionally, military service personnel we interviewed said that servicemembers prefer computer-based training because it allows them to complete training requirements in a shorter period and avoid hours of classroom instruction. However, personnel at other units stated that there were disadvantages to computer-based training, such as losing the impact that unit leaders provide, having to repeat the same training subject each year, and not retaining as much information as they would from discussions in classroom-style courses. Also, servicemembers in the 2nd Battalion, 6th Marines, at Camp Lejeune, North Carolina, and on the Harry S. Truman expressed concerns that units lack a sufficient number of computers. According to estimates provided by service officials, it would take an individual less than 20 hours to complete all the common military training. However, an official in the Office of the Deputy Chief of Staff (G-3/5/7) said that the time it takes soldiers to complete either computer-based or face-to-face training varies greatly based on such factors as computer availability, pre-test options, instructors, and audiences. Therefore, it is difficult to estimate averages. 
One servicemember anecdotally remarked that completion of common military training takes about 8 hours, while another said it takes from 1 to 3 hours per month. Table 2 shows the military services’ estimates for completing common military training courses, and the text box that follows provides perspectives on training time from servicemembers with whom we spoke. The military services are also taking initial steps toward reducing training time for some mandatory training requirements, including common military training, by updating their guidance, combining similar training topics, and eliminating redundancies. For example, according to an Army official, the Army is currently updating Army Regulation 350-1, which will include guidance to increase commander flexibility and modify the tracking of mandatory training. According to Navy guidance from July 2016, the Navy continued to reduce mandatory training requirements in fiscal year 2017 and placed additional control at the discretion of local command leadership. The Air Force issued a memo in August 2016 outlining steps to address training demands, such as establishing a task force to streamline training, among other things, and focusing on computer-based training requirements and their effect on the force. Some air wings at Air Combat Command and the Air Force Global Strike Command recently issued guidance that allows unit commanders to provide some mandatory training courses in a briefing format to accomplish training and enhance efficiencies. Most recently, the Marine Corps published an updated version of its mandatory training requirements in the Marine Corps Bulletin in September 2016.
In addition to updating guidance, a Marine Corps Training and Education Command official noted that the Marine Corps has reduced mandatory training requirements since 2015 by an estimated 7.0 hours by consolidating stand-alone classes addressing Child Abuse, Domestic Violence, Combat Operational Stress Control, Substance Abuse, Family Advocacy, and Suicide Prevention with the Unit Marine Awareness and Prevention Integrated Training. According to Marine Corps officials, the Marine Corps’ 2017 transition to leader-led, discussion-based training for specific annual training requirements could reduce the time needed to conduct training, as it takes less time to refresh Marines on topics that were covered in detail during entry-level training. We are not making recommendations in this report. In written comments reprinted in appendix IV, DOD concurred with the draft of this report. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; the Secretaries of the Army, Air Force, and Navy; and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To describe what efforts DOD and the services have made to review and validate common military training requirements, we collected and reviewed DOD and service-level guidance to determine the training required to complete common military training and the process for reviewing, validating, consolidating, and eliminating common military training.
Specifically, we analyzed DOD Directive 1322.18, Military Training (Jan. 13, 2009) (incorporating change 1, effective Feb. 23, 2017); DOD Instruction 1322.31, Common Military Training (CMT) (Feb. 26, 2015) (incorporating change 1, Apr. 11, 2017); Army Regulation 350-1, Army Training and Leader Development (Aug. 19, 2014); draft Army Regulation 350-1 (currently under review); Marine Corps Bulletin 1500, Annual Training and Education Requirements (Sept. 8, 2016); Marine Administrative Message 188/17, Modifications to MCBUL 1500 Annual Training Requirements (Apr. 17, 2017); Naval Administrative Message 166/16, FY-17 General Military Training Schedule (July 26, 2016); Office of the Chief of Naval Operations Instruction 1500.22H, General Military Training Program (Sept. 3, 2015); and Air Force Instruction 36-2201 (Sept. 15, 2010) (incorporating through change 3, Aug. 7, 2013). We interviewed military service officials from the Army, Marine Corps, Navy, and Air Force to determine how they review and validate common military training and document individuals’ completion of common military training. We also interviewed DOD training proponents to discuss how they develop and disseminate common military training for the military services and their processes for reviewing and validating common military training. To describe steps that DOD and the services have taken to evaluate the effectiveness of common military training requirements, we collected and reviewed DOD and service-level guidance explaining the process to evaluate common military training. We interviewed DOD and service-level officials from the Army, Marine Corps, Navy, and Air Force to discuss their methods to evaluate common military training. We interviewed all 11 DOD training proponents to discuss how they have determined the effectiveness of their training topics.
We did not evaluate the effectiveness of the common military training because it was beyond the scope of our review, but rather focused on identifying examples of efforts in which the services and proponents have taken steps to assess the effectiveness of training. To describe the flexibilities that the services offer regarding course delivery methods, steps they are taking to consolidate training and reduce training time, and their perspectives on various aspects of training, we collected service-level training guidance that explains the level of flexibility units have to complete common military training. We interviewed unit commanders and training managers from a non-generalizable sample of 12 units from the Army, Marine Corps, Navy, and Air Force. We worked with the services to identify units in active status that had deployed to Iraq and Afghanistan within the past 5 years and to identify a mix of officers and enlisted personnel within the selected units. We also worked with service-level officials to identify unit commanders and training managers to interview, and during these interviews we discussed available training flexibility and determined the delivery options and the amount of time spent on common military training. Although not generalizable, the interviews we conducted with personnel in these units provided examples of the training flexibilities available to commanders. These units were as follows:
3rd Squadron, 61st Cavalry Regiment, Bravo Troop, Fort Carson
3rd Squadron, 61st Cavalry Regiment, Charlie Troop, Fort Carson
Delta Company (D Co), 1st Battalion, 501st Aviation Regiment, 1st Armored Division Combat Aviation Brigade, Fort Bliss, Texas
Headquarters and Headquarters Company, 1st Brigade Combat Team, 1st Armored Division, Fort Bliss, Texas
2nd Law Enforcement Battalion, II Marine Expeditionary Force, Camp Lejeune, North Carolina
2nd Battalion, 6th Marines, II Marine Expeditionary Force, Camp Lejeune, North Carolina
USS Harry S. Truman, Aircraft Carrier 75
94th Fighter Squadron, Langley Air Force Base, Virginia
1st Aircraft Maintenance Squadron Support Section, Langley Air Force Base, Virginia
1st Maintenance Squadron, Langley Air Force Base, Virginia
1st Maintenance Squadron Unit Training Manager, Langley Air Force Base, Virginia
27th Fighter Squadron, Langley Air Force Base, Virginia
We interviewed cognizant officials at various DOD headquarters offices, including the Office of the Under Secretary of Defense for Personnel and Readiness, Deputy Under Secretary of Defense for Force Education and Training, Office of the Deputy for Force Training; Joint Staff; Deputy Chief of Staff, Army (G-3/5/7); U.S. Army Forces Command; U.S. Army Training and Doctrine Command; U.S. Army Reserve Command; Marine Corps Training and Education Command; U.S. Marine Corps Forces Command; Assistant Secretary of the Navy (Manpower and Reserve Affairs), Office of the Chief of Naval Operations; Naval Education and Training Command; U.S. Fleet Forces Command; Air Force Deputy Chief of Staff for Manpower and Personnel; and Air Force Air Combat Command. As shown in table 3, we also conducted interviews with the lead proponents, located within DOD offices, for each common military training topic. We conducted this performance audit from May 2016 to May 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 4 below presents a list of the 11 common military training requirements. Of these, 5 are mandated by statute or executive order.
Table 5 below presents a summary of common military training and the military services’ mandatory training requirements that fulfill common military training requirements. Some mandatory training courses fulfill the requirements for multiple common military training requirements. For most military services, common military training comprises more than half of their mandatory training requirements. Table 6 below summarizes the services’ mandatory training requirements that do not fulfill common military training requirements. These requirements fall under the services’ definitions of mandatory training requirements. The mandatory training requirements listed below are common across a service. The table does not include additional training that the services may require for specific groups of servicemembers. In addition to the contact named above, Sally L. Newman (Assistant Director), Thomas Corless, Michele Fejfar, Latrealle Lee, Amie Lesser, Shahrzad Nikoo, Carol Petersen, Vikki Porter, and Cheryl Weissman made key contributions to this report.
DOD requires all servicemembers to complete training that provides common knowledge and skills. Common military training across the military services includes topics such as Suicide Prevention, Cybersecurity, and Sexual Assault Prevention and Response. DOD has identified a need to reduce training requirements because of concerns from the services about the amount of time it takes to complete training, and in 2012 asked the RAND Corporation to examine the services' mandatory training requirements, which include common military training, and options for standardization. RAND recommended, among other things, that DOD consider adopting standardized, computer-based training and issue a single DOD directive that lists all requirements. House Report 114-537 accompanying a bill for the National Defense Authorization Act for Fiscal Year 2017 included a provision for GAO to examine the military services' actions to assess mandatory military training requirements. This report describes (1) efforts that DOD and the services have made to review and validate common military training requirements; (2) steps that DOD and the services have taken to evaluate the effectiveness of these requirements; and (3) flexibilities the services offer regarding course delivery methods, steps they are taking to consolidate and reduce training time, and their perspectives on various aspects of training. GAO reviewed DOD and military service training guidance and interviewed officials at DOD headquarters and military service offices. The Department of Defense (DOD) and the military services have made recent efforts to review and validate common military training requirements. DOD established the Common Military Training Working Group in February 2015 to, among other things, review and validate common military training requirements.
In December 2016 the Acting Under Secretary of Defense for Personnel and Readiness signed the Common Military Training Working Group Charter, which states that the working group will review common military training requirements for validity. According to an Office of the Deputy for Force Training official, the working group held its first meeting in January 2017 and a second meeting in February 2017. According to that official, the Office of the Deputy for Force Training is in the process of developing future working group meeting agendas to discuss topics such as validating training requirements. In addition, some of the military services have taken steps to review and validate common military training. For example, according to officials, the Navy and Marine Corps annually review and validate mandatory training requirements, while the Army reviews and validates mandatory training requirements biennially or as directed. According to Air Force officials, the Air Force reviewed and validated existing mandatory training requirements during its October 2016 training review. DOD has directed the Common Military Training Working Group to evaluate the effectiveness of common military training requirements. DOD Instruction 1322.31 calls for the working group to periodically review common military training and evaluate it for effectiveness, among other things, and the working group's charter states that it will review common military training requirements for effectiveness. In addition, some DOD proponents responsible for managing a specific common military training core curriculum, as well as the military service boards, have made independent efforts to assess the effectiveness of their respective mandatory military training courses, including common military training. For example, in 2015 the Army Mandatory Training Task Force evaluated the accessibility and effectiveness of current training materials. 
The military services offer varying degrees of flexibility for providing course delivery methods that allow individuals to complete mandatory training requirements, including common military training. For example, training guidance provided by the Marine Corps, Navy, and Air Force indicates that the services may rely on a variety of delivery methods for training, including distance learning systems, formal courses, and one-on-one instruction. According to estimates provided by service officials, it would take an individual less than 20 hours to complete all common military training requirements. Nevertheless, the military services are taking steps to reduce training time for some mandatory training requirements by updating their guidance, combining similar training topics, and eliminating redundancies. For example, the Air Force has reviewed all of its training topics to determine which ones to streamline or consolidate. GAO interviewed servicemembers from across the services who informally presented a range of perspectives regarding various aspects of training.
Tritium, which makes possible smaller, more powerful nuclear weapons, decays at a rate of 5.5 percent per year. Therefore, for nuclear weapons to be capable of operating as designed, the tritium in the weapons must be periodically replaced. DOE used to produce new tritium in its reactors at the Savannah River Site, but the last of these reactors was shut down in 1988 because of safety and operational problems. DOE currently has no tritium production capability, although the Department has been able to meet its requirements for tritium by reusing material recovered from dismantled weapons. In order to meet currently planned requirements for tritium, a new production capability must be available in 2005. To accomplish this, DOE is pursuing a dual-track program to select the primary production source. The first track is based on using a commercial light water reactor to produce tritium. Target rods containing lithium would be placed in the reactor, and during the reactor’s normal operations, some of the lithium would be turned into tritium. Once removed from the reactor, the target rods would be transported to the Tritium Extraction Facility, where the tritium would be removed. The second track involves building an accelerator as the primary producer of tritium. This device accelerates protons (particles within an atom that have a positive electrical charge) to nearly the speed of light. The protons are crashed into tungsten, releasing neutrons (particles within an atom that have no electrical charge), which can be used to change helium into tritium. As currently envisioned, this process would not involve the Tritium Extraction Facility. DOE’s current plan is to choose one of the two tracks in late 1998. If the commercial-light-water-reactor track is chosen, the accelerator will be pursued to the point of establishing an engineering design for it, but it will not be built. 
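The 5.5-percent annual decay rate cited above implies that any tritium stock shrinks substantially within a decade, which is why weapons stocks must be periodically replenished. A minimal sketch of the arithmetic (illustrative only; the calculation is ours, not DOE's):

```python
import math

# Tritium decays at about 5.5 percent per year, per the report.
annual_decay = 0.055

def fraction_remaining(t_years):
    """Fraction of an initial tritium stock remaining after t years."""
    return (1 - annual_decay) ** t_years

# Half-life: solve (1 - 0.055)**t = 0.5 for t.
half_life_years = math.log(0.5) / math.log(1 - annual_decay)

print(round(half_life_years, 1))         # about 12.3 years
print(round(fraction_remaining(10), 2))  # about 0.57 of the stock left after a decade
```

At this rate roughly half of any tritium stock decays away in just over 12 years, consistent with tritium's commonly cited half-life of about 12.3 years, underscoring why a new production capability was needed by 2005.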
If the accelerator track is chosen, the accelerator will be built and operated and, as a backup, all aspects of the commercial light water reactor option will be completed—with the exception of the actual production of tritium. The target rods will be produced, agreements with utilities for the use of their reactors will be signed, and the Tritium Extraction Facility will be built. Thus, under both tracks, DOE intends to build the Tritium Extraction Facility. Construction of the facility—to be managed by the Commercial Light Water Reactor Project Office at the Savannah River Site—is currently estimated to cost $383.4 million and is scheduled for completion in 2005. The Tritium Extraction Facility project completed the conceptual design phase in October 1997. The preliminary design, currently being developed, is scheduled to be completed in June 1998. The conceptual design for the Tritium Extraction Facility was reviewed by three teams—the “Red Team,” the “Independent Review Team,” and the “Formal Design Review Team.” Although there is no requirement for such reviews, they were requested by DOE headquarters’ Office of Commercial Light Water Reactor Production and the Project Office at Savannah River to increase their confidence in the conceptual design of the facility before proceeding to the preliminary design phase. All teams reviewed drafts of the conceptual design and/or the conceptual design report. Table 1 shows how many and what type of participants each team had, what the team was chartered to do, and when the review was performed. Two of the three teams that reviewed DOE’s conceptual design for the Tritium Extraction Facility made overall comments in their final reports. The Red Team and the Formal Design Review Team expressed a favorable opinion overall of the facility’s design and the related documentation. 
According to the Red Team, the conceptual design’s scope, cost, and schedule are appropriate; the technical concept and approach are sound; all major risks have been identified; and the building is constructible and, in general, appears to comply with DOE’s current requirements. The Formal Design Review Team reported that it did not identify any significant items that could not be corrected with three documents due to be completed after the team’s review—the Facility Design Description, the System Design Description, and the Conceptual Design Statement of Work. The Independent Review Team did not make any overall comments on the conceptual design. In addition to the overall comments made by two of the review teams, all three teams made a number of specific comments. The Red Team had 34 comments (see app. I for a listing of the Red Team’s major comments), and the Independent Review Team had 60 comments (see app. II for a listing of the Independent Review Team’s major comments). The Formal Design Review Team made 691 specific documentary and technical comments on the conceptual design and related documents—none of which it considered to be major impediments to the design and construction of the Tritium Extraction Facility. The specific comments made by all three review teams covered a wide range of topics, including the design of specific systems, the design and construction schedule, life-cycle costs, the method of contracting for the design and construction, and the level of detail in the supporting documentation and in the conceptual design report. Comments that the review teams considered to be significant and that we believe cover issues that could affect the success of the project related to the design of the remote handling and tritium extraction processes, the need to include contingencies in the schedule, and the level of detail in the conceptual design report. 
The remote handling system for the Tritium Extraction Facility is the means by which nearly all facility processes and maintenance, including moving tritium target rods and opening them, will be controlled from a separate (remote) room. The extraction process involves heating the target rods in a furnace and removing the tritium and other gases from them. Project officials do not consider design and construction of these systems to be high-risk, but they do believe that they are the highest-risk tasks involved in the project. The Red Team reported that the remote handling and tritium extraction processes include risks that need to be addressed in the near term. The team found that a plan to mitigate the risks was not evident and that the subsystems to manipulate and open the target rods had not been demonstrated. The team believed that the time and cost to engineer and develop the processes would be greater than the estimates in the conceptual design report. Similarly, in a comment it deemed “significant,” the Independent Review Team stated that the target rod handling process was overly complex. The team proposed an alternative method and suggested that it be discussed in the conceptual design report. The Red Team and the Independent Review Team consider DOE’s actions and responses to the comments on the remote handling and tritium extraction processes to be generally adequate. According to Red Team members and the chairman of the Independent Review Team, much has changed since their reviews were conducted. Design alternatives have been developed and changes have been made in the conceptual design report that have satisfied the intent of the comments. The Independent Review Team indicated a need to consider contingencies, to provide allowances for unforeseen delays, in the schedule—just as they are addressed in cost estimates.
The chairman of the team told us that it was specifically concerned with the plans for a mock-up of the remote handling process and a prototype of the tritium extraction furnace—believing that any problems with them could delay the project overall because the tasks run concurrently with the development of the detailed design. The chairman explained that although the conceptual design report now contains more detail on the project’s schedule, it still does not include contingencies. Because he believes this feature to be very important, he considers DOE’s response to this comment to be inadequate. DOE officials informed us that they believe there is no need for the schedule to include contingencies. The Tritium Extraction Facility’s schedule is based on a 5-day-per-week, 8-hour-per-day work schedule. In their view, the option of working multiple shifts and/or weekends, as necessary, offers adequate flexibility to respond to schedule issues. All three review teams made numerous comments suggesting that additional information and detail be added to the conceptual design report and related documents. The suggestions concerned requirements, design detail, equipment, analyses, schedules, risks, and planned operations. The Red Team concluded that the conceptual design package was insufficient to permit an architect engineering firm to independently proceed with the preliminary design. Since the three teams reviewed the draft conceptual design, the Project Office has provided considerable additional information in the issued conceptual design report. Furthermore, according to program officials, DOE never intended for an architect engineering firm to develop the preliminary design independently, but rather for the firm to work with the Project Office to develop the design. After reviewing the final conceptual design report, the Red Team and the chairman of the Independent Review Team consider their comments about the level of detail to be resolved.
The Formal Design Review Team has yet to review DOE’s actions subsequent to its review. Although one of the review teams was chartered by DOE headquarters and two were chartered by the Savannah River Project Office, the purpose of obtaining the independent reviews of the conceptual design was similar in all three cases—to provide confidence in the adequacy of the Tritium Extraction Facility’s conceptual design. Nevertheless, there were no uniform guidelines established for these reviews, and the comments made by each of the review teams were handled differently. In addition, DOE and the Project Office did not reach closure with any of the review teams prior to initiating the preliminary design phase. Neither DOE nor the Project Office at the Savannah River Site initially responded to all of the Red Team’s comments. In April 1997, on the basis of a briefing provided by the Red Team, DOE headquarters selected 10 items that it believed to be most important and that required action before the beginning of the preliminary design phase. On October 31, 1997, the Project Office sent a letter to DOE headquarters describing the actions taken in response to the comments selected by headquarters and one other item added by DOE officials at Savannah River. For 8 of the 11 comments, the Project Office analyzed the comments and formally documented its responses. The Project Office took no action on two of the comments, deferring action until later in the project. The Project Office disagreed with one comment. On December 2 and 3, 1997 (after DOE’s October 31, 1997, approval to proceed to the preliminary design phase), DOE headquarters officials took a team composed of three former Red Team members to the Savannah River Site to determine what had been done in response to all 34 comments contained in the team’s report. 
The Project Office prepared a list of the actions taken, and the three members of the Red Team concluded that, overall, the Project Office had been responsive to the comments. They concluded that the conceptual design had been completed with the level of detail required by DOE orders and concurred with the decision to proceed with the preliminary design. DOE project officials informed us that they intend to also have a panel similar to the Red Team review the project’s design at the conclusion of the preliminary design phase. The Project Office handled the Independent Review Team’s comments differently. On July 31, 1997, prior to the initiation of the preliminary design phase, the Project Office formally responded to all 60 of the team’s comments. Neither DOE nor the Project Office transmitted the responses to members of the Independent Review Team, and their review of the responses was not solicited. However, we asked the chairman of the Independent Review Team to review the Project Office’s responses. The chairman considers the responses to 55 of the comments to be adequate and to 5, inadequate. One of these five comments involves the project’s schedule, as discussed earlier. The chairman does not consider the other four to be significant. The Project Office handled the Formal Design Review Team’s comments in a different manner still. By October 31, 1997, the Project Office had reviewed each of the Formal Design Review Team’s 691 comments and recommended 454 for closure—that is, that the Project Office’s actions satisfied the comments. A number of the comments recommended for closure (about 12 percent) pertained to work at Building 233-H that will be conducted as part of another project. These comments will be forwarded to the office managing that project for consideration and disposition. According to the Project Office, the 237 outstanding comments will be dealt with during the preliminary design phase of the project. 
The original intention was for the Formal Design Review Team to review the Project Office’s responses and for the chairman of the team to issue a “closure” memo (1) stating that the team had reviewed and agreed with the Project Office’s responses to its comments and (2) endorsing the conceptual design. As of January 1998, the Formal Design Review Team had not reviewed the Project Office’s responses and the chairman had not issued such a memo. Project Office officials informed us that relevant action plans will be completed by the spring of 1998, at which time the chairman could issue the memo. Given the overall favorable responses to the Tritium Extraction Facility’s conceptual design, it may have been prudent to proceed with the preliminary design phase in October 1997. However, the intent of having independent reviews was to enhance confidence in the conceptual design, and numerous concerns were identified, some of which the review teams considered to be important. None of the various approaches for handling the review teams’ comments resulted in reaching closure with the teams before the start of the preliminary design phase. A structured, consistent approach to resolving comments and obtaining concurrence would have helped ensure that the project received the maximum benefit from the reviews. Such a structured approach could apply in the future, as DOE intends to have an independent team review the Tritium Extraction Facility’s design after the preliminary design work is completed. We recommend that the Secretary of Energy establish guidelines for formally responding to and reaching closure within a reasonable time frame on comments made during future independent design reviews of the Tritium Extraction Facility project. We provided a draft of this report to DOE for its review and comment. Overall, DOE agreed with the facts contained in the report and concurred with the recommendation. 
DOE stated that it is instituting a tracking system in which all action items will be included with due dates and responsibility assignments for tracking and disposition. DOE had two specific comments. First, DOE stated that our report inferred that the Department began the preliminary design prematurely because not all of the review teams’ comments were resolved. As stated in our draft report, given the overall favorable responses to the conceptual design, we believe it may have been prudent to proceed with the preliminary design phase. However, as a general practice, we believe that to maximize the usefulness of a design review team’s comments, DOE should present the team with responses to each comment and reach closure with the team on how and when the comment will be resolved. By responding to the design review team’s comments in this manner, DOE would ensure agreement by all parties on the appropriate timing and proper course of action required to resolve the problems noted. In cases in which DOE disagrees with the comment, this type of formal response process could open a dialogue that could convince the design review team that no action is required or would at least provide a record of the reasons why DOE and the design review team chose to disagree. Second, DOE expressed the opinion that addressing contingencies in the schedule, as advocated by the Independent Review Team, is not a major concern. However, the chairman of the Independent Review Team still believes that the lack of this feature is significant. Both DOE and the Independent Review Team’s perspectives are presented in our report. We believe that this disagreement demonstrates why DOE needs a formal procedure for dealing with design review teams’ comments. In this case, DOE did not provide the Independent Review Team with responses to its comments, and there was no effort made to discuss and document areas of disagreement. As a result, the comment has not been resolved. 
The full text of DOE’s comments is included as appendix III. To obtain information on the major comments made by the review teams, we obtained and reviewed the teams’ reports. For the Independent Review Team, we also obtained the Project Office’s formal responses to the comments contained in the report. In its report, the Red Team formally listed its comments, and DOE had not initially formally responded to them. As a result, we analyzed the Red Team’s report to create a list of major comments, which we presented to DOE. DOE and members of the Red Team reviewed that list and agreed that it comprised the major comments of the report. DOE and the Red Team members then used our list during their December 1997 review of the Project Office’s responses to the Red Team’s report. We obtained the results of that review. At the time of our review, the Project Office had not formally responded to the Formal Design Review Team’s comments. To obtain information on the process DOE used to respond to the comments raised by the review teams, we reviewed the review teams’ charters; correspondence between the review teams, the Project Office, and DOE; and the teams’ reports and related documents. We also discussed the processes with DOE and Project Office officials and representatives from the review teams. We conducted our review from October 1997 through February 1998 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to the Secretary of Energy; the Secretary of Defense; and the Director, Office of Management and Budget. We will also make copies available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-8021. Major contributors to this report include William F. 
Fenzel, Assistant Director, and Kenneth E. Lightner Jr., Senior Evaluator.

Status: Partially closed. Essential processes are in place; additional analysis has been either documented or is in process.

Comment: Two recognized risks, the tritium extraction and remote handling processes, represent major vulnerabilities that need to be mitigated in the near term. A plan to develop the tritium extraction process and to mitigate risks is not evident. The subsystems for handling and opening the target rods are not proven applications of existing technology. Much of the remote handling will be first-of-a-kind applications. Each represents significant uncertainties, in terms of scope, cost, and schedule. The time and cost to engineer/develop the applications will be greater than the current plan estimates.
Status: Partially closed. An action plan for the tritium extraction and remote handling processes has been prepared. A proven mechanical system for tritium extraction has been incorporated into the conceptual design. Corrective action plans for the subsystems are being implemented. The cost to develop remote handling operations could still be significant.

Comment: There are no clear limits for releases of radioactivity, requirements for confinement systems, or goals for minimizing workers’ exposure.
Status: Partially closed. A report defining requirements has been issued. Guidelines for minimizing workers’ exposure are being established.

Comment: The conceptual design package is not an adequate basis to start preliminary design. It is insufficient to permit an architect engineering firm to independently proceed with the preliminary design.
Status: Closed. DOE completed an assessment of its readiness to proceed to the preliminary design. The team agreed with DOE’s decision to proceed.

Comment: It is not evident that DOE has reviewed the lessons learned from other projects and applied them to the Tritium Extraction Facility project’s conceptual design and plan.
Status: Partially closed. The lessons learned from other projects have been identified and evaluated and are being incorporated into the project design and project implementation processes.

Status: Closed. Additional information has been added to the conceptual design report.

Comment: The remote handling process is overly complex.
Status: Closed. Improvements have been made to the design for the remote handling process.

Comment: A section devoted to the project’s schedule should be added to the conceptual design report.
Status: Open. Additional information on the schedule was added to the conceptual design report; however, the schedule does not include contingencies, which represents a high risk.

Comment: There should be a section in the conceptual design report that discusses applicable design and construction codes and standards.
Status: Closed. References to applicable design and construction codes and standards have been added to the conceptual design report.

Comment: Any segment of the facility should be designed totally in-house or totally subcontracted.
Status: Closed. DOE plans a joint effort by the Project Office and an architect engineering firm. The Independent Review Team’s chairman now agrees with this approach.

Comment: The life-cycle cost analysis should include the number of target rods that must be processed to meet the facility’s production requirements.
Status: Open. The life-cycle cost analysis does not yet include the number of extractions required to meet the production requirements.

Comment: The staffing levels proposed are excessive.
Status: The team’s chairman no longer considers this a major comment.

Comment: The Process Development Program (a program to develop facility processes by using prototypes and mock-ups) should be accelerated.
Status: Closed. While the program has not been accelerated, DOE has recognized the risk to the project’s cost and schedule and will attempt to mitigate the risk.
Pursuant to a congressional request, GAO reviewed the Department of Energy's (DOE) plans to build a Tritium Extraction Facility at its Savannah River Site in South Carolina and the reports of three different teams responsible for reviewing the project's conceptual design and related products, focusing on: (1) the major comments raised by the three reviews; and (2) the process used by DOE to respond to those comments. GAO noted that: (1) two of the teams that reviewed the Tritium Extraction Facility's conceptual design found the project's scope, cost, and schedule to be appropriate and found no issues that would necessitate reevaluating the project; (2) the third team made no overall comments on the project; (3) the three teams also had nearly 800 specific comments; (4) comments that the review teams considered to be significant related to: (a) the design of the remote handling and tritium extraction processes; (b) the need for the project's schedule to allow for contingencies that could occur in the process and equipment development; and (c) the adequacy of the level of detail in the conceptual design report; (5) DOE handled each review team's specific comments differently; (6) for one team, the Savannah River Project Office prepared a response to each comment, and DOE headquarters had three members of the original review team comment on the adequacy of the responses; (7) for comments made by the second team, the Project Office responded to all comments, but did not seek the team's review of the responses; (8) for the third review team's comments, DOE responded to each comment, but the design team has not yet reviewed the responses; (9) overall, DOE made many changes to the conceptual design because of the review teams' comments and appears to have been generally responsive to the comments; (10) however, some comments--such as the one related to a need to include contingencies in the project's schedule--have not been resolved to the satisfaction of the review
teams; and (11) nonetheless, DOE approved the conceptual design report and the project entered the preliminary design phase in October 1997.
DOD’s beneficiaries have four options for obtaining prescription drugs. They can pick them up directly from MTFs, network retail pharmacies, or nonnetwork retail pharmacies. They can also receive them in the mail through DOD’s TRICARE Mail Order Pharmacy. DOD operates 536 pharmacies at 121 of its MTFs. Each MTF may have multiple pharmacies. For example, San Diego maintains satellite pharmacies at several locations in addition to its main pharmacy, which has a separate section that dispenses outpatient refill prescriptions. Fort Hood and Kirtland each maintain a separate pharmacy to dispense outpatient refill prescriptions, and Fort Hood maintains several satellite pharmacies at health care clinics. In addition to pharmacies at its MTFs, DOD contracts with Express Scripts, Inc., a private pharmacy benefits management company, to operate DOD’s retail pharmacy program and its TRICARE Mail Order Pharmacy. For the retail system, Express Scripts has a network of over 54,000 retail pharmacies where DOD beneficiaries can pick up prescriptions; beneficiaries can also utilize nonnetwork pharmacies, that is, any retail pharmacy not in Express Scripts’ network. For the TRICARE Mail Order Pharmacy, beneficiaries submit their prescriptions to Express Scripts, which dispenses and mails the drugs directly to the beneficiary. Civilian beneficiaries pay copayments for drugs obtained through the mail or at retail pharmacies, but do not pay at MTFs. (See table 1.) Active duty service members do not pay copayments. For most drugs, all four options are available to DOD beneficiaries regardless of where they obtain health care services. For example, a beneficiary can obtain a prescription from a private or military physician and then choose to have the prescription filled at an MTF, a network or nonnetwork retail pharmacy, or the TRICARE Mail Order Pharmacy. However, DOD’s cost differs considerably depending on the delivery option the beneficiary chooses. (See table 2.) 
DOD’s average cost per 30-day prescription varies among the delivery options for a number of reasons, including differences in the price of drugs dispensed in each system, copayments, and administrative costs of dispensing the drugs. For example, DOD does not receive federal discounts when beneficiaries obtain drugs through retail pharmacies, so DOD’s costs for purchases at retail pharmacies are generally higher than at MTFs or through the TRICARE Mail Order Pharmacy. The administrative cost of dispensing drugs is not included in the MTF costs, but according to DOD officials, MTFs remain the least expensive of the three systems. However, an increasing number of DOD beneficiaries have chosen in recent years to use retail pharmacies (see fig. 1), which is DOD’s most expensive delivery option. As part of its pharmacy system, VA operates a mail pharmacy program, the CMOP, which uses automated equipment to dispense and mail prescriptions to beneficiaries. VA operates seven CMOP facilities, which dispensed about 88 million prescriptions in fiscal year 2004. In that year, CMOP facilities dispensed 76 percent of all VA prescriptions, including over 95 percent of refill prescriptions. Most of the remaining prescriptions were dispensed through pharmacies at VA’s hospitals and clinics. VA beneficiaries generally do not have the option to obtain prescriptions at retail pharmacies. DOD and VA have a number of drug procurement options available to them that can result in differences in drug prices. For example, DOD and VA have access to discounted drug prices through the federal supply schedule (FSS). The FSS is maintained by VA’s National Acquisition Center and is available to all federal purchasers. All FSS prices, regardless of which federal agency purchases the drug, include a fee of 0.5 percent of the price to fund the National Acquisition Center’s activities. 
DOD and VA also have access to federal ceiling prices, which are mandated by law to be 24 percent lower than nonfederal average manufacturer prices. For some drugs, DOD and VA negotiate, through national contracts or other agreements, prices that are even lower than FSS or federal ceiling prices. Generally, DOD and VA negotiate these contracts and agreements jointly, in which case they both pay the same price for the drug. However, when VA or DOD negotiates contracts and agreements separately, the two agencies may pay different prices for the same drug. In a few cases, individual VA medical centers or DOD MTFs have obtained lower prices through local purchase agreements with manufacturers than they could have through the national contracts, FSS, or federal ceiling prices. Differences in DOD and VA prices can also occur when the departments order the same drug in different package sizes or from different manufacturers. Two other factors account for the departments paying different prices for the same drugs. First, both DOD and VA use prime vendors, which are drug distributors, to purchase drugs from manufacturers and deliver them to DOD or VA facilities. As of June 2004, VA used one prime vendor, while DOD used five prime vendors, each one servicing different geographic areas. Both departments receive discounts from their prime vendors that further reduce the prices that DOD and VA pay for drugs. For DOD, the discounts vary among prime vendors and the areas they serve. As of June 2004, VA’s prime vendor discount was 5 percent, while DOD’s discounts averaged about 2.9 percent within the United States. Discounts from the prime vendors serving the three pilot MTFs averaged about 3 percent. Second, the price of drugs purchased directly by DOD facilities or the TRICARE Mail Order Pharmacy included a 1.7 percent fee to fund the Defense Supply Center’s activities. Figure 2 shows the various components of DOD and VA drug prices. 
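The price components just described can be sketched with a hypothetical $100 manufacturer price. This is illustrative only: the percentages come from the report, but the $100 base price is invented, and applying the prime vendor discounts and fees directly to the federal ceiling price is a simplifying assumption.

```python
# Hypothetical worked example of the price components; the $100 base
# price is invented, and only the percentages come from the report.
nonfederal_avg_mfr_price = 100.00

# Federal ceiling prices are mandated by law to be 24 percent below
# the nonfederal average manufacturer price.
federal_ceiling_price = nonfederal_avg_mfr_price * (1 - 0.24)

# Every FSS price includes a 0.5 percent fee funding VA's National
# Acquisition Center (computed here against the ceiling price purely
# for illustration).
fss_fee = federal_ceiling_price * 0.005

# Prime vendor discounts as of June 2004: 5 percent for VA, about
# 2.9 percent on average for DOD within the United States. DOD's
# direct purchases also carried a 1.7 percent Defense Supply Center fee.
va_price = federal_ceiling_price * (1 - 0.05)
dod_price = federal_ceiling_price * (1 - 0.029) * (1 + 0.017)

print(round(federal_ceiling_price, 2))  # 76.0
print(round(va_price, 2))               # 72.2
print(round(dod_price, 2))              # 75.05
```

On these assumptions VA ends up paying a few percent less than DOD for the same dollar of manufacturer price, which is the pattern the report's savings estimates rest on.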
During fiscal year 2003, DOD and VA conducted a pilot program to assess the feasibility of dispensing outpatient refill prescriptions for DOD beneficiaries using a VA CMOP. Under the program, the CMOP in Leavenworth, Kansas, dispensed prescriptions for three DOD MTFs—Fort Hood, Kirtland, and San Diego. Using automated phone systems for ordering prescription refills—already in place at the three pilot MTFs—beneficiaries chose whether to have each prescription refilled at the CMOP or at the MTF. Once a beneficiary chose the option to have the CMOP dispense a refill, the prescription was electronically transmitted from the MTF to the CMOP. The CMOP then purchased drugs—or used drugs already in inventory—to dispense each prescription. The CMOP mailed each refill prescription directly to the beneficiary. After sending the refill prescription, the CMOP sent a report of its activity back to the MTF, which maintained responsibility for patient care. During the pilot program, the VA CMOP distributed only prescription refills—no original prescriptions and no controlled substances—to DOD beneficiaries, although the CMOP routinely dispenses them for VA beneficiaries. The TRICARE Management Activity (TMA) paid both drug and administrative costs of the pilot program to VA during fiscal year 2003. DOD beneficiaries did not pay a copayment or any other charge for the drugs they received from the CMOP, the same as if they had obtained the drugs at an MTF. As of April 2005, two of the three MTFs, San Diego and Kirtland, continued to have prescriptions filled through the VA CMOP. Fort Hood ended its CMOP participation at the end of fiscal year 2003 when TMA informed the three MTFs that it would not fund administrative or drug costs for CMOP-dispensed drugs in fiscal year 2004. TMA later decided to pay administrative costs, so, for fiscal year 2004, San Diego and Kirtland paid only drug costs.
In fiscal year 2003, during the pilot program, beneficiaries chose to have the VA CMOP fill a combined 47 percent of the prescription refills that usually would have been handled at the three pilot site MTFs. In fiscal year 2004 at San Diego and Kirtland, the two sites that continued CMOP participation, beneficiaries chose to have the CMOP fill a combined 65 percent of the outpatient pharmacy refill prescriptions. The remaining outpatient refill prescriptions were dispensed by MTF pharmacies. DOD could achieve savings by taking advantage of VA’s generally lower drug prices if it used the VA CMOP to dispense its outpatient pharmacy refill prescriptions. Estimated savings from the 90 drugs included in our price comparison plus estimated savings from the other drugs dispensed in the pilot during fiscal year 2003 total $646,000, or about $1.39 per prescription. Additional savings would also be possible if the CMOP were made aware of and used lower prices that DOD has negotiated for some drugs. However, achieving savings would require closing MTF outpatient pharmacy refill operations to offset CMOP administrative expenses. In addition to demonstrating that financial savings are possible, the pilot produced nonmonetary benefits such as providing high-quality service as indicated by measurements of beneficiary satisfaction and rates of accurate and timely distribution of drugs, reducing automobile traffic congestion and pharmacy wait times, and freeing DOD resources for its core mission of supporting military readiness. Our analysis showed that June 2004 VA CMOP drug prices were generally lower than prices at the DOD MTFs. Based on the differences in drug prices that existed in June 2004, we estimate that for these 90 drugs the three pilot sites produced savings during fiscal year 2003 for DOD of about $437,000, or about 4 percent. For these drugs, the estimated savings averaged $2.74 per prescription. 
We estimated these savings by comparing the June 2004 prices that the CMOP and DOD paid for 90 of the drugs with the highest total costs that were dispensed at Fort Hood, Kirtland, and San Diego by the CMOP during the fiscal year 2003 pilot program. (See app. I for the methodology we used to select these drugs.) These drugs comprised 65 percent of total drug costs in the pilot. We did not obtain individual prices for the drugs that comprised the remaining 35 percent of pilot drug expenditures. Therefore, we do not know what, if any, specific differences exist in DOD’s and VA’s prices for these drugs. However, general differences in DOD and VA drug purchasing apply to all the drugs. As of June 2004, VA received a 5 percent price discount from its prime vendor, and the three pilot MTFs received price discounts averaging 3 percent from their prime vendors. In addition, DOD’s Defense Supply Center charged a fee of 1.7 percent for MTF drug purchases. These differences amount to VA’s drug prices being about 3.7 percent lower than DOD’s. Applying a 3.7 percent reduction to the remaining 35 percent of drug expenditures yields overall estimated savings of about $209,000, which amounts to $0.69 per prescription for the drugs in the pilot that were not included in our analysis. We estimate that the combined savings from the 90 drugs and the other drugs dispensed through the pilot in fiscal year 2003 total $646,000, making VA’s total drug costs during the pilot approximately 3.9 percent less than DOD costs, or approximately $1.39 less per prescription. If the three MTFs had been able to achieve the same savings per prescription and had fully utilized the pilot for all their outpatient refill prescriptions in fiscal year 2003—including those dispensed through the CMOP and those dispensed at the MTFs—drug cost savings during fiscal year 2003 could have been about $1.5 million. 
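The roughly 3.7 percent price difference used in the estimate above can be checked directly from the discount and fee figures in the report (a sketch; the variable names are ours):

```python
# As of June 2004: VA's prime vendor discount was 5 percent, the pilot
# MTFs' discounts averaged about 3 percent, and DOD's Defense Supply
# Center charged a 1.7 percent fee on MTF drug purchases.
VA_DISCOUNT = 0.05
MTF_DISCOUNT = 0.03
DSC_FEE = 0.017

va_net = 1.0 - VA_DISCOUNT                        # what VA pays per $1 of base price
dod_net = (1.0 - MTF_DISCOUNT) * (1.0 + DSC_FEE)  # what an MTF pays per $1

# VA's relative price advantage; the report's simple sum, 5 - 3 + 1.7,
# yields the same figure of about 3.7 percent.
advantage = 1.0 - va_net / dod_net
print(f"{advantage:.1%}")  # 3.7%
```

Applied to the 35 percent of pilot drug expenditures not covered by the 90-drug comparison, this percentage is what produces the report's roughly $209,000 estimate for the remaining drugs.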
DOD could have realized even greater savings if the VA CMOP were made aware of and used DOD's lower negotiated prices for some drugs. About 15 percent of the 90 drugs in our price comparison were more expensive for DOD MTFs when purchased through the VA than if they had been acquired through DOD purchase agreements. For example, based on an agreement with the drug's manufacturer, MTFs involved in the pilot paid an average of $0.64 in June 2004 for each 30 mg capsule of lansoprazole, a drug that stops production of stomach acid and is prescribed for conditions such as gastroesophageal reflux disease. When ordering through the CMOP, however, the pilot sites paid a higher price for lansoprazole—$1.77 per capsule in June 2004—which was based on the FSS price. DOD could obtain the lower prices it has negotiated, according to CMOP officials, if the MTFs ordered these drugs through their prime vendors at DOD prices and had them delivered to the CMOP for distribution to DOD patients. Another way to achieve lower drug prices, they said, would be for MTFs to obtain rebates from drug manufacturers for the difference between the CMOP price and the lower DOD price. For example, San Diego began to use this process in fiscal year 2004. Officials at the MTF expect to receive rebates from drug manufacturers of over $300,000 for drugs purchased during the first quarter of fiscal year 2005. Based on our comparison of June 2004 drug prices for the 90 drugs in our analysis, we estimate that if DOD's lower prices had applied to the 15 percent of those drugs with lower prices at the MTFs than at the CMOP—either by MTFs having the drugs delivered to the CMOP through their prime vendors or obtaining rebates from drug manufacturers—DOD would have saved an additional $500,000 in drug costs during fiscal year 2003.
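As a back-of-the-envelope illustration of the rebate approach, using the June 2004 lansoprazole prices cited above (the capsule volume here is hypothetical, chosen only to show how the price difference scales):

```python
mtf_price = 0.64    # average MTF price per 30 mg lansoprazole capsule, June 2004
cmop_price = 1.77   # FSS-based price paid through the CMOP, June 2004

# Rebate recoverable per capsule under the manufacturer-rebate approach.
rebate_per_capsule = cmop_price - mtf_price   # $1.13

# Hypothetical volume, for illustration only (not a reported figure):
capsules_dispensed = 50_000
potential_rebate = rebate_per_capsule * capsules_dispensed   # $56,500
```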
Since DOD beneficiaries chose to use the VA CMOP for 47 percent of their outpatient refill prescriptions in fiscal year 2003, the MTFs' refill workload was not eliminated. For example, the three MTFs dispensed about 79,000 refill prescriptions in September 2002, the month before the pilot began, and dispensed about 37,000 prescriptions in September 2003, during the pilot. The outpatient refill workload that remained at the MTFs required that the MTF outpatient pharmacy refill operations remain open and maintain personnel and equipment to dispense refills. Because most of the MTFs' costs of dispensing refills are for personnel and equipment, according to officials at the three MTFs, the decreased workload did not lead to a proportional decrease in costs. For dispensing drugs through the VA CMOP during the pilot, DOD agreed to pay the CMOP's average administrative cost, which includes the cost to mail prescriptions to beneficiaries. Because of a change in the way the CMOP computed administrative costs in fiscal year 2003, DOD paid VA an average of $2.36 per prescription prior to July 2003 and $2.27 per prescription from July 2003 through the end of the fiscal year to cover these costs. These costs include VA's average administrative cost to fill each prescription of $1.34 prior to July 2003 and $1.24 from July 2003 to the end of the fiscal year, plus mailing costs of $1.02 and $1.03, respectively. We estimate that DOD's administrative costs at the three MTFs were about $2.31 per refill prescription—roughly equal to the administrative costs of obtaining refill prescriptions through the CMOP and mailing them to beneficiaries. Consequently, closing MTF outpatient pharmacy refill operations would offset CMOP administrative expenses and yield drug cost savings for DOD from its use of the CMOP. (See app. III for a calculation of DOD's and VA's administrative cost.) The pilot also produced nonmonetary benefits.
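The administrative-cost comparison above is simple addition; a sketch using the reported components:

```python
# VA CMOP administrative cost per prescription (fill cost plus mailing):
cmop_before_july_2003 = 1.34 + 1.02   # $2.36 per prescription, as reported
cmop_after_july_2003 = 1.24 + 1.03    # $2.27 per prescription, as reported

# Estimated MTF administrative cost per refill prescription:
mtf_cost = 2.31

# Because the CMOP's cost is roughly equal to the MTFs' cost, the drug-price
# savings survive only if MTF outpatient refill operations close; otherwise
# DOD pays administrative costs twice for the same refill workload.
```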
Based on VA’s measurements of beneficiary satisfaction and rates of prescription accuracy and timeliness, the VA CMOP provided high-quality service to DOD beneficiaries. However, because the pilot MTFs and the CMOP used different methods for measuring accuracy and because DOD did not conduct satisfaction and timeliness surveys for the three pilot MTFs, we could not make a meaningful comparison between the two dispensing options. Regarding the VA CMOP’s performance for fiscal year 2003, 97 percent of DOD beneficiaries surveyed by VA rated their overall satisfaction with the services it provided as excellent or very good. This rate is even higher than the 91 percent of surveyed VA patients who rated the CMOP’s performance as excellent or very good in that year. In addition, for fiscal year 2003, the CMOP reported that more than 99.9 percent of its prescriptions were accurately dispensed, meaning that beneficiaries received the correct medications in the correct amounts, with no damage or labeling problems. Finally, the CMOP was able to deliver drugs to DOD beneficiaries on average in 3.5 days from the time the prescription was requested to the time it was received by the patient. To put VA’s delivery time in some perspective, a company that has one of the country’s largest private mail order pharmacy operations estimates that its customers typically receive their mail order refill prescriptions in 3 to 5 days. Another benefit, reported by DOD officials, was that use of the VA CMOP helped reduce the number of civilians coming to military installations. Because most prescriptions dispensed at MTFs were for civilian retirees and their dependents (see table 3), using the CMOP to dispense some of the prescriptions helped reduce facility overcrowding. 
For example, San Diego and Fort Hood officials reported less crowding and shorter waiting times at their MTF pharmacies during the pilot, and San Diego officials reported less automobile traffic congestion and fewer parking shortages. In addition, a Fort Hood official reported that after the CMOP pilot was terminated, lines at the main pharmacy got very long and beneficiaries had to wait 2 or more hours to have prescriptions dispensed. Moreover, these officials told us that using the CMOP could fill a critical need during times of heightened security because civilian beneficiaries might have difficulty getting onto military installations to pick up their prescriptions at MTF pharmacies. According to DOD officials, using the VA CMOP could allow DOD pharmacy staff to focus on DOD’s core mission of supporting military readiness by serving the pharmacy needs of active duty members and their dependents. They said that the pilot, to the extent that it moved civilian workload away from MTFs, was consistent with DOD’s emphasis on having military personnel support military readiness. If a greater percentage of MTFs’ workload was moved to the CMOP, then MTFs could have additional flexibility to focus on military readiness needs. In addition, DOD officials told us that transferring the outpatient refill pharmacy workload to the CMOP could help in other ways, such as allowing the department more flexibility to redeploy pharmacy staff to clinical services. The pilot demonstrated that DOD could achieve cost savings at very high levels of beneficiary satisfaction by delivering drugs to beneficiaries using the CMOP rather than MTF outpatient refill operations. Additional cost savings could be realized if the CMOP were made aware of and used lower prices that DOD had negotiated for some drugs. 
However, DOD savings are dependent on closing the refill portion of its MTF pharmacy operations to avoid paying MTF administrative costs for refills in addition to administrative costs charged by the VA CMOP. While DOD’s use of the CMOP is a significant opportunity for DOD to achieve savings and expand its sharing of resources with VA, there are other cost implications that could become important if MTF refill operations were closed with the expectation that beneficiaries would use the CMOP. Specifically, rather than obtaining drugs from the CMOP, beneficiaries might choose instead to obtain their drugs from a more costly option for DOD, such as retail pharmacies. Any cost increases will challenge DOD to find more efficient ways to manage its pharmacy benefits program, such as by encouraging beneficiaries to choose the most cost-effective options for where they obtain their drugs. We received written comments from DOD and VA on a draft of this report. VA concurred with our draft report. VA stated that our report would benefit from a discussion of market pressures that control the cost of generic drugs. However, these pressures were reflected in our work that focused on the lowest prices VA and DOD could secure, which included purchasing generic drugs. VA’s written comments are reprinted in appendix V. DOD made an overall comment that our report was technically accurate. It made additional comments that we address below. One comment concerned our characterization of refunds from drug manufacturers. During our audit work DOD pharmacy officials told us that they expect that manufacturer refunds will cover only a small portion of the difference in cost between retail and MTF prices, and we included this information in our draft report. However, in its letter providing the agency’s comments, DOD commented that this statement is inaccurate and misleading, so we removed it from the report. 
DOD also commented that the 1.7 percent fee charged on DOD drug purchases should be considered in the context that it supports DOD’s readiness mission. Specifically, DOD stated that reducing the amount of drugs upon which the fee is paid would cost DOD “somewhere else” to support the mission. We disagree, and based on our findings, we believe that more money would be available for DOD’s use by using VA’s CMOP. For example, drugs purchased during the pilot by VA’s CMOP were about 3.9 percent less than if they had been purchased by the MTFs. In addition, DOD stated that it is not correct that DOD would always realize a savings on the acquisition cost of a drug by using the VA CMOP. We noted in the draft report that we found VA’s prices to be generally, but not always, lower than DOD’s. We noted that in some cases drugs were more expensive for DOD MTFs when purchased through the VA than if they had been acquired through DOD purchase agreements, and that additional cost savings could be realized if the CMOP used these lower prices that DOD had negotiated for some drugs. DOD stated that it is unlikely that it could move all refill prescriptions to the CMOP, and asserted that GAO recommended closing all MTF refill services and providing them only to active duty members. However, our report makes no such recommendation. Although cost savings through the CMOP are dependent on closing MTF outpatient pharmacy refill operations, we noted in the draft report that MTFs could continue to dispense outpatient refill prescriptions at MTF main pharmacies. As noted in the draft report, in fiscal year 2003, during the pilot program, 47 percent of the prescription refills that usually would be handled at the three pilot MTFs were dispensed at the CMOP. In fiscal year 2004 at San Diego and Kirtland, the two sites that continued CMOP participation, program participation increased as the CMOP filled 65 percent of the outpatient pharmacy refill prescriptions. 
Determining whether to encourage beneficiaries to use the most cost-effective dispensing method, which would ensure that savings are achieved while continuing to provide high-quality pharmacy service to beneficiaries, is part of DOD's responsibility to manage its pharmacy program in a fiscally sound manner. DOD agreed that the pilot produced other benefits, such as reducing facility traffic congestion, but further stated that our reference to "civilian beneficiaries" could be misinterpreted to include beneficiaries not currently covered, and should be defined as "retiree beneficiaries." We believe that our use of the term "civilian beneficiaries" is appropriate because, as DOD's data show, 85 percent of MTF 30-day outpatient refill prescriptions in both fiscal years 2003 and 2004 were for retirees and their dependents, and other civilians and their dependents. DOD also commented that patient choice as a DOD pharmacy benefit is a lawful entitlement. According to DOD, it cannot mandate that DOD beneficiaries utilize one option over another, and such a restriction would require legislative action. We note, however, that DOD has taken action to influence beneficiaries to choose one option over another, for example, by increasing copayment amounts to help it manage the pharmacy benefit and control costs. DOD's pharmacy benefit regulations state that "the higher cost-share paid for prescriptions dispensed by a non-network retail pharmacy is established to encourage the use of the most economical venue to the government." This type of action demonstrates fiscal responsibility on DOD's part while it strives to provide cost-effective pharmacy services to its beneficiaries. Finally, DOD stated that we assumed that current options are more costly for DOD than having beneficiaries obtain their drugs from the CMOP, and that this was a subjective conclusion.
We based our conclusion on our finding that the CMOP’s drug costs during the pilot were approximately 3.9 percent lower than the costs for the same drugs at the three pilot MTFs. In addition, we found that the administrative costs for dispensing refill prescriptions were about the same at the MTFs and at the CMOP. And, as noted in the draft report, the CMOP’s drug costs and administrative costs were lower than the drug and administrative costs for DOD’s TRICARE Mail Order Pharmacy. DOD also included technical comments that we incorporated where appropriate. DOD’s written comments are reprinted in appendix VI. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to the Secretaries of Veterans Affairs and Defense, and relevant congressional committees. We will also make copies available upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-7101 or Michael T. Blair, Jr. on (404) 679-1944. William Simerl and Richard Wade made key contributions to this report. To address our objective, we compiled information on the operations of the Department of Defense (DOD) and the Department of Veterans Affairs (VA) Consolidated Mail Outpatient Pharmacy (CMOP) pilot program, and we compared the costs of purchasing and dispensing drugs at the CMOP that dispensed drugs for the pilot with the costs at the pilot military treatment facilities (MTF). To compile information on the pilot program and on related aspects of DOD’s and VA’s pharmacy programs, we conducted site visits, reviewed program documentation, and interviewed DOD and VA officials responsible for purchasing and dispensing drugs. 
We interviewed or collected documentation from officials at the VA CMOP involved in the pilot located in Leavenworth, Kansas, including the national CMOP director; officials at each of the three DOD MTFs involved in the pilot—Darnall Army Community Hospital, Fort Hood, Texas (Fort Hood); the 377th Medical Group, Kirtland Air Force Base, New Mexico (Kirtland); and the Naval Medical Center San Diego, San Diego, California (San Diego); DOD pharmacy officials, including the director of DOD pharmacy programs and pharmacy officials for the Air Force, Army, and Navy; officials at DOD’s Pharmacoeconomic Center; and officials at VA’s National Acquisition Center and DOD’s Defense Supply Center, responsible for procurement of drugs. To compare the drug costs at the VA CMOP and the participating MTFs, we selected 90 of the drugs with the highest total expenditures dispensed through the pilot during fiscal year 2003. These 90 drugs, due to high volume, high unit cost, or both, comprised about 65 percent of total drug costs for the pilot. To select drugs for our analysis, we first identified the 100 drugs with the highest total expenditures dispensed through the pilot in fiscal year 2003. We then obtained available price information for June 2004 purchases of these drugs at the CMOP in Leavenworth, Kansas and the three MTFs that participated in the pilot. We used June 2004 prices for each drug because DOD and VA officials told us that June 2004 data were the most reliable data available. According to the officials, because drugs can have many different prices throughout the year, obtaining DOD prices that can be accurately compared to the full range of prices that VA paid for drugs throughout fiscal year 2003 was not feasible. 
We evaluated the quality of the drug pricing data by checking for missing and inconsistent values and interviewing agency officials, including those from VA’s CMOP, VA’s National Acquisition Center, DOD’s Pharmacoeconomic Center, and DOD’s Defense Supply Center. Based on these interviews and on documentation obtained from the officials, we considered differences between DOD and VA drug prices caused by separate pricing agreements, differences in prime vendor discounts, differences in fees to fund drug procurement, differences in drug package sizes, and, for some drugs, differences in manufacturers. We eliminated drugs from our analysis in cases where differences in the prices for them at the various locations could not be explained by these factors, in cases where DOD officials believed the drug pricing to be erroneous, or in cases where June 2004 drug pricing was unavailable. After eliminating these drugs, 90 of our original 100 drugs remained. We also adjusted for differences in DOD and VA unit measurements to ensure that the unit prices were comparable to each other. We estimated VA CMOP drug costs during fiscal year 2003 for each of the 90 drugs by multiplying the CMOP’s June 2004 unit price by the number of units dispensed by the CMOP for each MTF during fiscal year 2003. Using the same method for costs at the three MTFs—multiplying MTF June 2004 unit prices by the number of units dispensed by the CMOP for each MTF during fiscal year 2003—we estimated the amount that the three DOD MTFs would have spent on the same drugs. The difference between VA’s and DOD’s total estimated costs for the 90 drugs during fiscal year 2003 is our estimate of savings for these drugs during the pilot. In cases where no units of a drug were ordered through the pilot by an MTF during fiscal year 2003, the price of that drug at that location was not included in our comparison. 
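The savings estimation described in this appendix can be expressed as a short routine; the data values below are hypothetical placeholders for illustration, not actual pilot prices or volumes.

```python
def estimate_savings(drugs):
    """Per the methodology above: for each drug at a site, multiply each
    June 2004 unit price by the units the CMOP dispensed for that site in
    fiscal year 2003; the savings estimate is the MTF-priced total minus
    the CMOP-priced total. Drugs with no units ordered through the pilot
    at a site are excluded from the comparison."""
    total = 0.0
    for d in drugs:
        if d["units_fy2003"] == 0:   # not ordered through the pilot; excluded
            continue
        total += (d["mtf_unit_price"] - d["cmop_unit_price"]) * d["units_fy2003"]
    return total

# Hypothetical illustration only:
sample = [
    {"mtf_unit_price": 0.80, "cmop_unit_price": 0.75, "units_fy2003": 10_000},
    {"mtf_unit_price": 1.77, "cmop_unit_price": 1.85, "units_fy2003": 0},
]
savings = estimate_savings(sample)   # about $500 for this made-up sample
```

Note that a drug can contribute negative savings when, as discussed earlier, DOD's negotiated price is below the CMOP's FSS-based price.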
We did not obtain individual prices for the drugs that comprise the remaining 35 percent of pilot drug expenditures. Therefore, we do not know what, if any, differences exist in the VA’s and DOD’s prices for these drugs. For these drugs, we estimated differences in drug prices as of June 2004 based on differences in prime vendor discounts and the fee charged by DOD’s Defense Supply Center, which are general differences in DOD and VA drug pricing that apply to all drugs. To compare the administrative costs of dispensing refill prescriptions at the CMOP with the costs at MTFs participating in the pilot, we collected cost information from program officials and evaluated it to ensure that it was comparable to the costs from the other sites. Although DOD generally does not separate information on MTF administrative costs, we were able to obtain this information for refill prescriptions at the three MTFs. Our cost comparison included the costs of personnel, equipment, supplies, space, utilities, and other aspects of refill operations. Although precise cost information was not always available, we reviewed the information and interviewed officials at each site to determine that it was sufficiently reliable for the purposes of our cost comparison. Because moving refill workload to the CMOP without decreasing fixed costs could inflate the average MTF administrative cost per prescription, we used the best available information to estimate the per prescription administrative costs for dispensing refill prescriptions at the three DOD MTFs as if the CMOP pilot did not exist. For Fort Hood, we obtained information on administrative costs for calendar year 2004 after officials had discontinued use of the CMOP and reorganized the outpatient refill pharmacy to separate it from the main pharmacy in January 2004. For Kirtland, we obtained cost information for fiscal year 2003. 
Although the pilot was operating during this time, Kirtland officials indicated that they had not changed any fixed costs, such as personnel or equipment, due to the pilot. To estimate the number of refill prescriptions that the Kirtland pharmacy would have filled if the CMOP pilot had not been operating, we added the number of outpatient refill prescriptions filled through the CMOP for Kirtland beneficiaries to the number of outpatient refill prescriptions dispensed at the Kirtland pharmacy. Because the operating costs for Kirtland were incurred while the number of MTF prescriptions was lower due to the CMOP operation, we had to adjust the variable costs to correspond with the higher number of prescriptions that the MTF would have dispensed without the CMOP. Therefore, we used the total number of outpatient refill prescriptions that the Kirtland pharmacy would have filled if the CMOP pilot had not been operating to estimate variable costs, such as bottles, labels, and other supplies. We also used this total number of prescriptions when determining the overall average cost of dispensing refill prescriptions at the MTFs. San Diego has been participating in the CMOP program since the start of fiscal year 2003, and has made changes to its pharmacy operations, such as changes to staffing, due to CMOP use. To estimate the cost of refill prescriptions without influence from the CMOP pilot, San Diego officials provided us with information on costs and the number of refill prescriptions from fiscal year 2002, before the pilot began operation. Appendix III contains the information we obtained from the pilot sites and VA to estimate MTF and CMOP administrative costs. To compare the VA CMOP with DOD’s TRICARE Mail Order Pharmacy, we interviewed or obtained documentation from officials at VA’s CMOP; VA’s National Acquisition Center; DOD’s Defense Supply Center; DOD’s Pharmacoeconomic Center; and the TRICARE Mail Order Pharmacy contractor, Express Scripts, Inc. 
To compare drug costs between the CMOP and the TRICARE Mail Order Pharmacy, we selected the 100 drugs with the highest total costs dispensed during the first year of the TRICARE Mail Order Pharmacy program (March 2003-February 2004). Next, we obtained June 2004 prices for these drugs for the CMOP and the TRICARE Mail Order Pharmacy. We used June 2004 prices for each drug to ensure comparability since drug prices can vary significantly over time, and because DOD and VA officials told us that June 2004 data were the most reliable data available. We eliminated 11 drugs from our comparison because prices were unavailable or due to inconsistencies in the data that we could not explain. We compared prices for each of the remaining 89 drugs, adjusting for differences in VA's and DOD's drug data, such as unit measurement differences. To estimate annual cost differences for the drugs in our comparison, we multiplied the June 2004 DOD and VA unit prices by the number of units ordered for each drug during the first year of the TRICARE Mail Order Pharmacy program, from March 2003 to February 2004. We conducted our work from April 2004 through May 2005 in accordance with generally accepted government auditing standards. Under VA's system, the CMOP shares responsibility for pharmacy services with VA medical centers. Under DOD's system, the TRICARE Mail Order Pharmacy handles the entire prescription-filling process, separate from pharmacies in DOD's military treatment facilities. The CMOP dispenses and mails prescriptions. VA medical centers provide other services, such as verifying patients' eligibility, providing customer service, or contacting providers and patients when necessary.
In addition to dispensing and mailing prescriptions, the TRICARE Mail Order Pharmacy conducts activities such as verifying patients' eligibility in DOD's computer system, providing customer service, contacting providers or patients for additional information when necessary, and converting paper prescriptions to electronic format. Prescriptions dispensed: VA CMOP, 77,876,597 (fiscal year 2003) and 87,968,560 (fiscal year 2004); TRICARE Mail Order Pharmacy, 5,472,583 (March 2003 through February 2004). Administrative cost: VA CMOP, $2.24 per prescription (fiscal year 2003) and $2.35 per prescription (fiscal year 2004); TRICARE Mail Order Pharmacy, $10.66 per prescription (March 2003 through February 2004), which included $10.20 per prescription and an average of $0.46 per prescription for customer service incentives. Copayments: At the VA CMOP, VA patients pay $7 for up to a 30-day supply, although VA does not charge copayments for medications to treat service-connected conditions, nor does it assign copayments to veterans with service-connected conditions rated 50 percent disabling or greater; DOD beneficiaries in the pilot did not pay a copayment or any other charge for the drugs they received from the CMOP, the same as if they had obtained the drugs at an MTF. At the TRICARE Mail Order Pharmacy, copayments are $3 for generic drugs and $9 for brand drugs for up to a 90-day supply; active duty service members do not pay copayments; and DOD has established a new copayment of $22 per prescription for drugs designated "non-formulary" (as of April 27, 2005, DOD had designated three non-formulary drugs that are subject to the copayment). VA's fiscal year 2003 customer satisfaction surveys indicated that 92 percent of all beneficiaries who responded rated the CMOP's services as excellent or very good. In the same surveys, 97 percent of DOD beneficiaries who responded rated the CMOP's services as excellent or very good. DOD conducted four surveys of TRICARE Mail Order Pharmacy beneficiaries for the period of March 2003 through February 2004. TRICARE Mail Order Pharmacy program satisfaction rates for beneficiaries who responded ranged from 87 percent in the first of the surveys to 97 percent in the most recent of the four surveys.
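Using the per-prescription administrative costs reported above, the relative cost of the two mail order operations works out roughly as follows (the ratio is a derived illustration, not a figure from the report):

```python
cmop_fy2003 = 2.24    # VA CMOP administrative cost per prescription, FY2003
tmop = 10.20 + 0.46   # TRICARE Mail Order Pharmacy: $10.20 per prescription
                      # plus an average $0.46 for customer service incentives

# Derived for illustration: TMOP's per-prescription administrative cost is
# roughly 4.8 times the CMOP's fiscal year 2003 cost.
cost_ratio = tmop / cmop_fy2003
```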
VA reports that the CMOP accuracy rate exceeded 99.9 percent for fiscal year 2003. Express Scripts reports that the TRICARE Mail Order Pharmacy accuracy rate exceeded 99.9 percent for the period from March 2003 through February 2004. To estimate drug prices for the two programs, we selected the 100 drugs with the highest total costs dispensed during the first year of the TRICARE Mail Order Pharmacy (March 2003-February 2004). Next, we obtained June 2004 prices for these drugs for the CMOP and the TRICARE Mail Order Pharmacy. We eliminated 11 drugs from our comparison because prices were unavailable or due to inconsistencies in the data that we could not explain. For each of the remaining 89 drugs, we adjusted for differences in DOD’s and VA’s drug data, such as unit measurement differences. To estimate annual costs for the drugs in our comparison, we multiplied the June 2004 DOD and VA unit prices by the number of units ordered for each drug during the first year of the TRICARE Mail Order Pharmacy, from March 2003 to February 2004. For more information on our scope and methodology, see app. I. CMOP and TRICARE Mail Order Pharmacy drug prices can differ for a number of reasons, including separate contracts or other agreements with manufacturers, different prime vendor discounts negotiated by DOD and VA, and different DOD and VA fees for procuring drugs.
There has been long-standing congressional interest in whether the Department of Defense (DOD) could use the Department of Veterans Affairs (VA) Consolidated Mail Outpatient Pharmacy (CMOP) system as a cost-effective alternative to beneficiaries picking up outpatient refill prescriptions at DOD military treatment facilities (MTF). To evaluate this possibility, DOD and VA conducted a pilot program in fiscal year 2003 in which a VA CMOP provided outpatient pharmaceutical refill services to DOD beneficiaries served through three MTFs. GAO was asked to estimate cost savings that could be achieved if DOD used VA's CMOP instead of MTF pharmacies for outpatient refill prescriptions, and what other benefits were achieved at the three pilot sites. To estimate potential cost savings and determine what other benefits were achieved, GAO reviewed pilot and pharmacy program documentation and interviewed DOD and VA officials responsible for purchasing and dispensing drugs. GAO also compared drug and administrative costs of dispensing outpatient refills through the fiscal year 2003 pilot program with the costs of dispensing the refills at the three DOD MTFs that participated in the pilot. DOD could achieve savings if it used VA's CMOP to dispense its outpatient refill prescriptions by taking advantage of VA's generally lower drug prices. Based on the drugs dispensed through the pilot, GAO estimated that the three MTFs that participated in the CMOP pilot program in fiscal year 2003 could have saved about $1.39 per prescription in drug costs, or a total of about $1.5 million, if the MTFs moved all their refill prescriptions to the CMOP. However, while DOD saved money on drug costs at the pilot MTFs, these savings were offset because DOD paid administrative costs for refill operations twice--first to pay VA for the administrative costs charged by the CMOP and second to maintain outpatient pharmacy refill operations at the MTFs. 
Consequently, achieving savings would require closing MTF outpatient pharmacy refill operations to offset CMOP administrative expenses. In addition to demonstrating that financial savings are possible, the pilot produced nonmonetary benefits. MTF officials reported benefits such as reduced automobile traffic congestion and shorter pharmacy waiting times because many civilian beneficiaries at the pilot sites no longer came to MTFs to pick up refill prescriptions. Further, DOD beneficiaries who participated in the pilot program reported satisfaction with the CMOP's accurate and timely distribution of pharmaceuticals. There are other potential cost implications for DOD if it decides to close MTF outpatient refill pharmacies and move the workload to the VA CMOP. Because DOD beneficiaries are allowed to choose among various options for obtaining drugs, they would be able to obtain their drugs from retail pharmacies and DOD's mail order pharmacy instead of the CMOP. These options, however, are more costly for DOD than having beneficiaries obtain their drugs from the CMOP. Consequently, if DOD closes the outpatient refill pharmacies at the pilot sites with the expectation that beneficiaries would use the CMOP and they did not, DOD's costs could increase. Any cost increases will challenge DOD to find more efficient ways to manage its pharmacy benefits program, such as by encouraging beneficiaries to choose the most cost-effective options for where they obtain their drugs. We provided a draft of this report to VA and DOD for comment. VA said that it concurred with the draft report and DOD said that it was technically accurate but neither explicitly concurred nor nonconcurred.
GPRA is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. New and valuable information on the plans, goals, and strategies of federal agencies has been provided since federal agencies began implementing GPRA. Under GPRA, annual performance plans are to clearly inform the Congress and the public of (1) the annual performance goals for agencies’ major programs and activities, (2) the measures that will be used to gauge performance, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. These annual plans, issued soon after transmittal of the president’s budget, provide a direct linkage between an agency’s longer term goals and mission and day-to-day activities. Annual performance reports are to subsequently report on the degree to which performance goals were met. The issuance of the agencies’ performance reports, due by March 31, represents a new and potentially more substantive phase in the implementation of GPRA—the opportunity to assess federal agencies’ actual performance for the prior fiscal year and to consider what steps are needed to improve performance and reduce costs in the future. DOT’s mission is to ensure a safe transportation system that furthers our vital national interests and enhances the quality of life of the American people. The agency has identified five strategic goals for achieving that mission: (1) eliminating transportation-related deaths and injuries; (2) shaping an accessible, reliable transportation system; (3) supporting economic growth; (4) protecting the environment; and (5) ensuring the security of the transportation infrastructure and the country. DOT’s combined performance plan and report is organized around these strategic goals. 
Table 1 illustrates how the report’s four key outcomes and their supporting performance measures correspond to DOT’s strategic goals. This section discusses our analysis of DOT’s performance in achieving selected key outcomes and the strategies the agency has in place, particularly strategic human capital management and information technology, for accomplishing these outcomes. Although DOT did not include specific human capital or information technology strategies for the four outcomes we reviewed, the Department did address these strategies in other parts of its performance report. In discussing the outcomes, we have also provided information drawn from our prior work on the extent to which the agency provided assurance that the performance information it is reporting is credible. For fiscal year 2000, DOT’s progress in achieving fewer transportation-related accidents, deaths, injuries, and property losses was limited. The Department reported the least success in highway safety—it did not meet any of its goals in this area. For example, DOT failed to meet its goals for reducing highway-related fatalities and injuries, despite meeting both goals in 1999. DOT’s best performance was in marine and hazardous material transport safety—it met all of its goals in these areas. DOT’s progress in and strategies for achieving its fiscal year 2000 goals are discussed below for the areas of highway, aviation, marine, pipeline, rail, and transit safety. Regarding highway safety, DOT had expected to achieve at least three of the six key goals, but it did not meet any of them (see table 2). For example, DOT did not meet its goal of fewer than 4,934 large truck-related fatalities in 2000. On the basis of estimated data, DOT reported that 5,307 fatalities involving large trucks occurred in 2000, a slight improvement from 1999, when 5,362 large truck-related fatalities occurred. DOT provided explanations for not achieving its highway safety goals.
For example, DOT attributes the increase in highway fatalities to a continued increase in motorcycle fatalities and deaths of young drivers. For three of the unmet goals, performance improved compared to 1999. For example, DOT has made progress in seat belt use; the 71-percent rate of front seat occupants using seat belts achieved in 2000 represents the highest rate in the nation’s history. This rate, however, was below DOT’s goal to increase seat belt use to 85 percent. DOT’s highway safety data for 2000 are all preliminary estimates. The report notes that timeliness of performance information is a significant limitation of data from outside the agency—highway safety data come from police reports and state data—and preliminary estimates are based on extrapolations of partial-year data. DOT reports that its data on highway fatalities and injuries have been in use for many years and are generally accepted as accurate. However, we have reported that DOT lacks high-quality, up-to-date information on the causes of large truck crashes; as a result, DOT has begun to improve its data on the causes of these crashes. For the most part, DOT’s strategies to improve highway safety appear reasonable; however, they may be insufficient to meet many of the agency’s 2001 goals in this area (see table 3). Some strategies, such as improving and expanding oversight and enforcement activities and improving safety data collection concerning motor carriers, address concerns that have been raised by GAO and DOT’s IG (see app. I). The strategies also include a large-scale safety awareness program targeted at teenage drivers and the Safe Communities program, which provides grants to develop and implement community-based transportation safety programs. DOT’s evaluation of the Safe Communities program showed success in some communities, such as Dallas, TX, where the use of child safety seats more than doubled following the implementation of a child safety seat loaner program.
The agency is also proposing to increase by more than $30 million in fiscal year 2002 the resources available to counter the increase in fatality and injury rates. DOT does not indicate any activities to specifically address motorcycle fatalities, which increased 8 percent from 1999 to 2000. In the area of aviation safety, DOT reported that it met its fiscal year 2000 goal for reducing the rate of fatal commercial aviation accidents but did not achieve its goals to reduce the rate of runway incursions—dangerous situations that can lead to serious accidents—and the rates of air traffic operational errors and deviations. (See fig. 1.) For the three unmet goals, performance worsened compared to 1999, and DOT is skeptical that it will be able to reach the goals in 2001. DOT attributed these trends to improved reporting and tracking as well as greater flight volume in congested and restricted airspace. DOT’s performance report indicated that there is no significant or systematic error in the counts of accidents, runway incursions, operational errors, or operational deviations. In addition, the Department reported that it regularly checks or validates these data sets. On the basis of a joint government/industry working group determination and a GAO recommendation, DOT has switched to the use of departures rather than flight hours for determining the rate of air carrier fatalities. To improve aviation safety, FAA is implementing several initiatives designed to address shortcomings in training; technology; communications; and airport signs, marking, and lighting. FAA will also investigate the use of new safety technologies. In addition, to reduce runway incursions, the agency is undertaking initiatives to improve communications among controllers, pilots, and ground crews. The aviation strategies appear reasonable. Nonetheless, DOT reported that it was unsure of meeting any of these aviation safety goals in 2001.
In the marine sector, DOT reported that it met all three of its safety goals—to reduce the number of recreational boating fatalities, reduce the rate of passenger vessel fatalities, and increase the number of mariners reported in imminent danger who are rescued. Next year, DOT will simplify the passenger fatality measure because the current measure has been difficult to understand. DOT indicated that it expects to meet all three goals in 2001. The report acknowledges that the data on recreational boating fatalities are probably underreported by at least 6 percent due to the need for interpretation at the state level of what constitutes a recreational boating fatality. In addition, in 2001, DOT will switch data sources for passenger vessel fatalities. The report indicates that the new data source—the Marine Information System for Safety and Law Enforcement—will be a significant improvement, but the improved data quality may cause serious difficulties in making comparisons to prior data. The agency’s strategies for achieving its marine safety goals in 2002 appear reasonable. The strategies include increasing staffing and training at rescue stations and command centers and promoting the wearing of life jackets by recreational boaters. In 2000, DOT’s evaluation of its Recreational Boating Safety Program concluded that wearing personal flotation devices could save the lives of about 500 boaters each year. In addition, the agency intends to work with the U.S. Navy and Air Force, which also have search and rescue responsibilities, primarily for their own vessels and aircraft. The Air Force is the lead agency for land-based search and rescue, and the Coast Guard is the lead for maritime search and rescue.
Despite one of the deadliest pipeline accidents in recent years, DOT projects that it met both of its goals for hazardous material transport safety—reducing pipeline failures and serious hazardous material transport incidents—because neither goal measures the increasing number of fatalities. DOT statistics show that pipeline fatalities have been increasing steadily in recent years (see fig. 2). However, we recognize that the overall number of fatalities from pipeline accidents remains low and that a single major accident can result in a significant increase in the number of annual fatalities. Preliminary estimates also show that DOT will meet its performance goal for reducing serious hazardous materials incidents in 2000. However, the fact that there were more incidents in 2000 than in 1999 will make it challenging for DOT to meet its 2001 goal. DOT acknowledged that the number of pipeline failures is likely underreported, and federal, state, and industry teams have been formed to improve the data. In addition, DOT reported that it is revising the collection and processing of pipeline accident data to improve the consistency and accuracy of the data on accident causes. To reduce the risk of pipeline failures, DOT works to establish safety regulations and ensure compliance. However, we have expressed concerns about certain agency initiatives to improve pipeline safety. For example, DOT has changed its approach to enforcing compliance with its regulations by reducing its use of fines and, instead, working with pipeline operators to identify and correct safety problems. We recommended that DOT determine whether this approach has improved compliance with pipeline safety regulations. A DOT official said that the Department is currently evaluating the effectiveness of its approach of working with pipeline operators to improve compliance.
DOT reported limited success in achieving its rail and transit safety performance goals in 2000, when it met only two of five goals. (See table 3.) For 2001, the report indicated that the agency expects to meet only one of these goals. The rate of highway-rail crossing accidents fell in 2000, but DOT did not meet its goal. According to DOT, this goal was not met due, in part, to a 15-percent increase in highway-rail crossing accidents on private property, over which the agency has limited authority or control. DOT is attempting to improve rail safety through its educational outreach program and Safety Assurance and Compliance Program, in which the agency works with major railroads to identify and solve systemic problems affecting rail safety. To improve transit safety, the Federal Transit Administration provides grants to states to improve public transit infrastructure and works with states, local transit authorities, and the transit industry to develop technology, provide training, and supply technical assistance that advances safety. DOT also reported that it did not meet its goal of reducing the number of flight delays; aviation delays and cancellations continued to increase in 2000 to the highest levels recorded, with nearly one in four flights delayed, cancelled, or diverted. DOT indicated that bad weather accounted for about 70 percent of the delays. DOT had met its goal for reducing delays in 1999 but had indicated that it would not meet its 2000 goal if the country experienced particularly bad weather in 2000. Similarly, DOT reported that it does not expect to achieve this goal in 2001 due to expected increases in air travel. DOT provides a clear and comprehensive discussion of the performance data. The performance report provides a definition of the measure, data limitations and their implications for assessing performance, procedures to verify and validate data, and the source database.
For example, the report indicates that the lack of a common definition of delay has led to confusion and disagreement as to the extent of aviation delays. To address this problem, DOT formed a task force that recommended four new categories for the causes of flight delays: (1) circumstances within an airline’s control, (2) extreme weather, (3) circumstances within the national aviation system, and (4) late flight arrivals. DOT expects to test this new reporting format with airlines before formally implementing it. DOT’s strategies to reduce flight delays, such as improved weather tracking and reporting mechanisms, appear reasonable but do not appear likely to achieve the goal in the near term. For example, the report indicates that FAA’s best means for reducing aviation delays, such as its plans for all-weather access to runways and the construction of more runways, will happen only in the long term. In the near term, to meet the growing demand for air travel and decrease the number of flight delays, the agency is modernizing its air traffic control (ATC) system by acquiring a network of radar, automated information processing, navigation, and communications equipment. However, over the years, the agency’s ATC modernization and other major capacity-enhancing programs have not met expectations due to cost, schedule, and performance problems. We designated the ATC modernization program as a high-risk information technology initiative in 1995, and it continues to be a high-risk area. On the basis of projections, DOT reported that it failed to achieve its 2000 goal to reduce highway congestion—measured by hours of delay per 1,000 vehicle-miles traveled on federal-aid highways—despite establishing a positive trend in 1999. However, in 2001, the agency expects to meet three new highway congestion goals—reducing congested travel time, peak period travel time, and traveler delay—that will replace the 2000 goal. DOT did not indicate why the agency failed to meet the 2000 goal.
In addition, DOT estimated that it met its 2000 goal for improving highway pavement condition. Consequently, it plans to continue a number of initiatives designed to promote the construction of smoother pavements and extend pavement performance, such as providing funding for pavement maintenance and conducting pavement research. DOT also reported that it met its goal for increasing the number of metropolitan areas that have intelligent transportation systems, which use information and communication technology to extend the capacity of existing highway infrastructure and could help lessen congestion. The Department intends to fund intelligent transportation systems in additional metropolitan areas in 2002 as part of its strategy to reduce highway congestion. DOT’s report discussed the credibility of the information for each goal and added a limitation that we identified in last year’s report. We found that the 1999 performance report failed to explain that states provided no information for the highway pavement condition of about 7 percent of the miles on the National Highway System. This limitation to the data is explained in the 2000 report. However, the 2000 report did not address other problems with the pavement condition data that we identified, such as the fact that states vary in their approaches to measuring and reporting the statistic used to indicate pavement performance and do not uniformly follow DOT’s guidance for making these measurements. The Coast Guard is responsible, along with other federal agencies, for reducing the amount of illegal drugs smuggled into the United States. Although the agency seized a record 60.2 metric tons of cocaine in 2000, this was not sufficient for the agency to reach its goal—to seize 13 percent of the cocaine being smuggled into the country through maritime routes. 
Furthermore, the percentage of cocaine seized by the agency fell from 12.2 percent in 1999 to 10.6 percent in 2000 because of increases in the overall amount of cocaine smuggled. Consequently, the agency noted that it will be challenged to meet its goal in 2001. The data quality for this indicator is not as strong as it is for other indicators. The report notes that the secretive nature of the illegal drug trade could cause estimates of the amount of cocaine smuggled into this country to contain significant errors. The Office of National Drug Control Policy attempts to refine and improve this estimate each year, according to DOT. DOT provided a general overview of its strategies for 2002 to reduce the availability of illegal drugs, which included continuing to develop new tactics and varying its operations to thwart maritime smuggling. DOT reported that the Coast Guard’s efforts are part of an overall strategy to reduce the illegal drug supply entering the United States. The multiagency effort is coordinated by the Office of National Drug Control Policy; the Coast Guard’s role in the effort is to serve as the lead federal agency for maritime drug interdiction. For the selected key outcomes, this section describes major improvements or remaining weaknesses in DOT’s (1) fiscal year 2000 performance report in comparison with its fiscal year 1999 report and (2) fiscal year 2002 performance plan in comparison with its fiscal year 2001 plan. It also discusses the degree to which the agency’s fiscal year 2000 report and fiscal year 2002 plan address concerns and recommendations by the Congress, GAO, DOT’s Inspector General, and others. As we found last year, DOT provides a clear and comprehensive discussion of performance goals, measures, and data in this year’s report.
Overall, it is easy to ascertain DOT’s progress in meeting its goals because the performance information is clearly articulated in the performance report, which provides goal levels and the actual performance for all the measures we reviewed. This represents an improvement over last year’s performance report, for which we noted that DOT did not report actual performance for two goals: highway delays and pavement condition. For 1999, the agency reported that actual data were not available. For 2000, the agency estimated the level of performance. As was true last year, DOT’s goals and measures are meaningful, outcome oriented, objective, measurable, and quantifiable. In addition, summary tables list the fiscal year 2000 goals, trend data, and checkmarks to indicate goals that were met. For each performance measure, the agency provides a definition of the measure, data limitations and their implications for assessing performance, procedures for verifying and validating data, and the source for the data. The report also notes the agency’s attempts to improve the quality of its goals. For example, in 2000, DOT deleted the maritime safety goal to reduce the fatality rate among maritime workers and added the goal to reduce the fatality rate on passenger vessels. The agency reported that it made the change because it believed the new goal reflected a broader area of safety performance. In 2001, the agency made additional improvements; it simplified the passenger fatality measure to make it more understandable and implemented a new information system to improve the quality of its passenger fatality data. Regarding the highway pavement condition data, however, the report does not address several weaknesses we raised concerning last year’s report, such as the fact that states vary in their approaches to measuring and reporting the statistic used to indicate pavement performance and do not uniformly follow DOT’s guidance for making these measurements. 
DOT changed several performance measures for this year’s report. For example, it implemented a recommendation made by GAO and others to link its fatal commercial aviation accident measure to the number of departures instead of flight hours. However, the report did not always discuss the reasons for discontinuing or changing its performance measures, which sometimes occurred when performance was declining. For example, DOT discontinued its aviation safety performance goal of reducing the rate of deviations—aircraft entering airspace without prior coordination—but the report did not indicate why. DOT did not meet its 1999 goal, and pilot deviations increased by 38 percent in 2000. As was true in 1999, the 2000 performance report does not always explain why DOT did not reach its performance goals or how it plans to mitigate the external factors that affect outcomes. For example, DOT indicated that it failed to meet its goals for reducing highway fatalities in part due to a continuing increase in motorcycle fatalities. However, DOT’s plan does not include any specific strategies for reducing motorcycle fatalities. As it did last year, the 2000 performance report addressed the majority of the management challenges identified by GAO, the Inspector General, and the Office of Management and Budget (OMB) by including a discussion of each management challenge in the section devoted to the relevant performance indicator. The management challenges not directly or indirectly linked to one of DOT’s goals were listed together after the goals. Although the report discussed DOT’s progress toward resolving most of the management challenges, it did not address aspects of the management challenge to enhance competition in the freight rail industry and consumer protection in the aviation and freight rail industries.
DOT improved its discussion of the management challenges by indicating whether GAO, the DOT Inspector General, OMB, or some combination of these organizations identified each challenge. As it did last year, DOT identifies in a clear, well-organized manner the strategies and initiatives it will pursue for each of the 2002 performance goals. For the four outcomes that we reviewed, these strategies did not include strategic human capital management and information technology. However, the report discusses these types of strategies elsewhere. The report also provides the fiscal years 2001 and 2002 funding that DOT will direct toward each performance goal. This information is an improvement over last year’s report because it enhances the reader’s ability to compare the agency’s initiatives and the resources committed to those initiatives. The plan outlines DOT’s expectation for meeting its 2001 goals, and in several cases the plan indicates that the agency does not expect to meet its goals. This is especially true for the numerous goals that it did not meet in 2000. For example, in the short term, DOT may not be able to meet its goal for reducing the rate of runway incursions. The rate of runway incursions has increased over the last 6 years, with a very large upsurge in 2000, moving the rate of incursions even further from DOT’s goal. As a result, for the second straight year, DOT’s performance plan indicated that it is unlikely that the agency will meet its goal for the upcoming year. The plan also notes DOT’s continuing efforts to correct data limitations. For example, DOT found that its performance measure for highway congestion (delay per 1,000 vehicle-miles traveled) did not reflect the actual performance of the highway system in places where congestion regularly happens (e.g., congested urban areas), and the measure was difficult for the public to understand.
As a result, next year DOT will replace the one highway congestion performance measure with three new measures that DOT believes will reflect changing travel conditions more comprehensively by focusing on the different aspects of inefficient road performance in areas where congestion regularly occurs. The plan did not include a future performance goal for reducing pipeline accident injuries and fatalities, even though one would appear relevant for several reasons. First, fatalities due to pipeline accidents are increasing. Second, the plan includes goals to reduce injuries and fatalities related to all of the other transportation sectors. Third, DOT’s future plans for pipeline safety are geared, in part, toward reducing fatalities. As it did last year, DOT’s 2002 performance plan addressed the majority of the management challenges identified by GAO, the Inspector General, and OMB in its discussion at the end of each relevant performance goal. For some of the management challenges, DOT breaks out its plans and goals by fiscal year. For example, in response to its management challenge for computer security, DOT lays out a number of milestones for improving computer security, with specific dates leading up to May 2003, when all DOT systems are expected to be adequately protected. However, the plan acknowledged that DOT did not meet the first milestone on schedule. GAO has identified two governmentwide high-risk areas: strategic human capital management and information security. We found that DOT’s performance plan had a measure related to human capital but no corresponding goal. In addition, the agency’s performance report explained DOT’s progress in resolving certain human capital challenges. For example, the report indicated that DOT made progress in implementing its human resources management strategies in 2000 and has established worker satisfaction as a new performance measure in 2002.
With respect to information security, we found that DOT’s performance plan did have goals and measures related to information security, and the agency’s performance report explained its progress in resolving a number of its information security challenges. For example, the report focused on FAA’s air traffic control information systems in response to a management challenge raised by GAO, the Inspector General, and OMB. In addition to the governmentwide challenges, GAO has identified six major management challenges facing DOT. We found that DOT’s performance report discussed the agency’s progress in resolving many of these challenges. However, the report did not adequately discuss the agency’s progress in enhancing competition in the freight rail industry and consumer protection in both the freight rail and aviation industries. Table 4 illustrates how the goals and measures address the eight management challenges that GAO identified. DOT has again produced a superior combined performance plan and report that is clear, understandable, and well organized. It is easy for the reader to quickly assess the agency’s progress in achieving key goals and understand its plans for the future, including the budget resources that DOT plans to direct toward achieving each goal. For readers seeking additional information, the report also provides details on the agency’s data for each performance measure, such as its definition, source, limitations, and DOT’s efforts to verify and validate the data. These characteristics make DOT’s report and plan a good model for other agencies to emulate. However, the report shows that DOT is generally achieving fewer of its key performance goals than last year. For example, in 2000, DOT failed to meet any of its goals for highway safety, including its goals for reducing the rate of highway fatalities and injuries that it had met in 1999. The plan is candid about the agency’s likelihood of not achieving some future goals. 
Some of this is understandable; DOT cannot mitigate all external factors, such as bad weather or human error. However, in several cases, DOT’s annual strategies and goals appear mismatched. Overall, DOT met fewer of its performance goals than last year, and it already projects that it will fall short on several future goals in important areas, such as aviation, highway, and transit safety. For goals on which DOT is not making adequate progress, the Department needs to either change its strategies or lower its expectations. In the case of pipeline safety, the mismatch between goals and strategies appears to be due to the lack of an appropriate measure. Specifically, DOT does not have performance goals to reduce pipeline fatalities and injuries, even though pipeline fatalities are increasing and the agency’s strategies to improve pipeline safety are geared toward reducing fatalities. In the other areas—highway, aviation, transit, and marine transportation—DOT has safety goals to reduce fatalities and injuries. We recommend that the Secretary of Transportation direct the operating administrations and the Office of Budget and Program Performance to improve the match between annual performance goals and strategies in the following ways: (1) change strategies so that they help DOT achieve its performance goals, or lower performance goals to more achievable levels (in the latter case, the Department should provide a justification for why the changes were necessary, clearly note that the goals have been lowered, and identify any long-term strategies that will bring performance into better alignment with expectations), and (2) establish performance goals for reducing the number of fatalities and injuries caused by pipeline failures to match similar goals in the aviation, highway, transit, and marine sectors.
As agreed, our evaluation was generally based on the requirements of GPRA; the Reports Consolidation Act of 2000; guidance to agencies from OMB for developing performance plans and reports (OMB Circular A-11, Part 2); previous reports and evaluations by us and others; our knowledge of DOT’s operations and programs; our identification of best practices concerning performance planning and reporting; and our observations on DOT’s other GPRA-related efforts. We also discussed our review with DOT officials. The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member of the Senate Governmental Affairs Committee as important mission areas for the agency and do not reflect the outcomes for all of DOT’s programs or activities. The major management challenges confronting DOT, including the governmentwide high-risk areas of strategic human capital management and information security, were identified by GAO in our January 2001 Performance and Accountability Series and High-Risk Update and were identified by DOT’s Office of Inspector General in December 2000. We did not independently verify the information contained in the performance report and plan, although we did draw from other GAO work in assessing the validity, reliability, and timeliness of DOT’s performance data. We conducted our review from April 2001 through June 2001 in accordance with generally accepted government auditing standards. We provided copies of a draft of this report to the Department of Transportation for its review and comment. In a letter, the agency indicated that it agrees with our recommendations. Specifically, the Department stated that annual performance targets should be realistic, strategies will be reviewed to better align them with performance goals, and pipeline injuries and fatalities should be included in DOT’s future performance plans. DOT’s written comments are included as appendix II. 
As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to appropriate congressional committees; the Secretary of Transportation; and the Director, Office of Management and Budget. Copies will also be made available to others upon request. If you or your staff have any questions, please call me at (202) 512-2834. Key contributors to this report were Teresa Spisak and Keith Cunningham. The following table identifies the major management challenges confronting the Department of Transportation (DOT), which include the governmentwide high-risk areas of strategic human capital management and information security. The first column lists the management challenges that we and/or DOT’s Inspector General (IG) have identified. The second column discusses the progress, as described in DOT’s fiscal year 2000 performance report, that the agency made in resolving its challenges. The third column discusses the extent to which DOT’s fiscal year 2002 performance plan includes performance goals and measures to address the challenges that we and DOT’s IG identified. We found that DOT’s performance report discussed the agency’s progress in resolving all of its challenges. Of the agency’s nine major management challenges, its performance plan (1) had goals and measures that were directly related to five of the challenges and (2) had goals and measures that were indirectly applicable to four of the challenges.
The Government Performance and Results Act of 1993 requires agencies to produce annual performance reports. GAO reviewed the Department of Transportation's (DOT) fiscal year 2000 performance report and fiscal year 2002 performance plan to assess its progress in achieving selected key outcomes in important mission areas. This report (1) assesses the progress DOT has made in accomplishing these outcomes and the strategies the agency has in place to achieve them and (2) compares DOT's fiscal year 2000 performance report and fiscal year 2002 performance plan with the agency's prior year performance report and plan for these outcomes. DOT's consolidated performance report makes it clear that DOT achieved only limited progress in fiscal year 2000 toward achieving the selected outcomes and that the agency directly indicated that its current strategies are not likely to result in achievement of the goals. DOT provided a clear, well-organized discussion of performance goals, measures, and data in both its fiscal year 2000 performance report and fiscal year 2002 performance plan.
CPSC was created in 1972 under the Consumer Product Safety Act (P.L. 92-573) to regulate consumer products that pose an unreasonable risk of injury, to assist consumers in using products safely, and to promote research and investigation into product-related deaths, injuries, and illnesses. CPSC currently has three commissioners, who are responsible for establishing agency policy. One of these commissioners is designated as the chairman, who directs all the executive and administrative functions of the agency. In fiscal year 1997, CPSC carried out its broad mission with a budget of about $42.5 million and a full-time-equivalent staff of 480. After adjusting for inflation, the agency’s budget has decreased by about 60 percent since 1974. Similarly, CPSC’s current staffing level represents 43 percent fewer positions than the agency’s 1974 staff. CPSC addresses product hazards through mandatory regulations and by working with industry to develop voluntary product safety standards. CPSC also addresses product hazards by providing information to consumers on safety practices that can help prevent product-related accidents. In addition to its own efforts to disseminate information, CPSC provides considerable amounts of information in response to requests from the public. CPSC’s resource base and extensive jurisdiction require the agency to select among potential product hazards. New agency initiatives may come to CPSC in several ways. First, any person may file a petition requesting CPSC to issue, amend, or revoke a regulation. For example, CPSC’s cigarette lighter project, which resulted in a new mandatory safety standard, originated with a petition from an emergency room nurse. Second, CPSC can receive a product hazard project from the Congress. The Congress may require CPSC to study a wide-ranging product area (such as indoor air quality) or impose a specific regulation (such as a mandatory safety standard for garage door openers). Third, CPSC commissioners and agency staff can initiate projects or suggest areas to address. 
CPSC has wide latitude over which potential product hazards it targets for regulatory and nonregulatory action. Although the agency has little or no discretion over projects mandated by the Congress, it can accept or reject suggestions submitted by petition or proposed by agency staff. Of the 115 projects the agency worked on from January 1, 1990, to September 30, 1996, 59 percent were initiated by CPSC, 30 percent originated from a petition, and about 11 percent resulted from congressional directives. CPSC’s regulations set out criteria to guide the selection of projects, including the frequency of injuries and deaths resulting from the hazard; the severity of the injuries resulting from the hazard; addressability—that is, the extent to which the hazard is likely to be reduced through CPSC action—agency regulations note that the cause of the hazard should be analyzed to help determine the extent to which injuries can reasonably be expected to be reduced or eliminated through CPSC action; the number of chronic illnesses and future injuries predicted to result from the hazard; preliminary estimates of the costs and benefits to society resulting from CPSC action; the unforeseen nature of the risk—that is, the degree to which consumers are aware of the hazard and its consequences; the vulnerability of the population at risk—whether some individuals (such as children) may be less able to recognize or escape from potential hazards and therefore may require a relatively higher degree of protection; the probability of exposure to the product hazard—that is, the number of consumers exposed to the potential hazard, or how likely it is that typical consumers would be exposed to the hazard; and additional criteria to be considered at the discretion of CPSC. The regulations do not specify whether any criterion should be given more weight than the others, nor that all criteria must be applied to every potential project; commissioners and staff may therefore select projects on the basis of what they believe are the most important factors. 
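Because the criteria carry no prescribed weights, different commissioners can weight the same criteria differently and reach different selections. The sketch below makes that concrete; all project names, criterion scores, and weights are invented for illustration and are not CPSC data.

```python
# Hypothetical illustration: with no fixed weights in the regulations,
# two defensible weightings of the same criteria can rank the same
# candidate projects differently. All values below are invented.

def score(project, weights):
    """Weighted sum of a project's criterion scores (0-10 scale)."""
    return sum(weights[c] * project[c] for c in weights)

candidates = {
    "Project A": {"frequency": 9, "severity": 3, "addressability": 4},
    "Project B": {"frequency": 4, "severity": 9, "addressability": 8},
}

# One decisionmaker might emphasize the frequency of injuries...
w1 = {"frequency": 0.6, "severity": 0.2, "addressability": 0.2}
# ...another, severity and addressability.
w2 = {"frequency": 0.2, "severity": 0.4, "addressability": 0.4}

rank1 = max(candidates, key=lambda p: score(candidates[p], w1))
rank2 = max(candidates, key=lambda p: score(candidates[p], w2))
print(rank1, rank2)  # the top-ranked project changes with the weighting
```

The point is not that CPSC scores projects this way (the record indicates it does not use a formal scoring model), but that unweighted criteria leave the ranking to judgment.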
Our interviews with present and former commissioners and our review of CPSC briefing packages showed that three criteria—the number of deaths and injuries, the cause of injuries, and the vulnerability of the population at risk—were more strongly emphasized than the others. However, although the commissioners and former commissioners we interviewed generally agreed about which criteria they emphasized for project selection, they expressed very different views on how some of these criteria should be interpreted. For example, their opinions differed about choosing projects on the basis of the cause of injuries. A major issue in this regard concerned the appropriate level of protection the agency should be responsible for providing when a product hazard results, at least in part, from consumer behavior. Some current and former commissioners argued that no intervention was warranted when consumer behavior contributed to injuries; others were more willing to consider a regulatory approach in these situations. Although CPSC conducts a number of projects annually, staff were unable to give us a comprehensive list of projects the agency had worked on in the 6-year period we examined. CPSC was also unable to verify the completeness of the project list that we compiled from agency documents and interviews with staff. According to CPSC staff, internal management systems do not generally contain this information because most projects are accounted for under either broad codes such as “children’s products” or activity codes such as “investigations,” “product safety assessment,” and “emerging problems.” In addition, CPSC staff told us that reliable inferences about the characteristics of individual projects, their outcomes, and the resources spent on them cannot be drawn from management information systems because of limitations in the computer system and because no consistent rule exists about how staff time in different directorates is recorded to project codes. 
Without systematic and comprehensive information on its past efforts, CPSC cannot fully assess whether its projects overrepresent some hazard areas and therefore agency resources might be more efficiently employed. In our report, we recommend that the Chairman of CPSC direct agency staff to develop and implement a project management tracking system to compile information on current agency projects. CPSC has developed a patchwork of independent data systems to provide information on deaths and injuries associated with consumer products. To estimate the number of injuries associated with specific consumer products, CPSC gathers information from the emergency room records of a nationally representative sample of 101 hospitals. CPSC also obtains information on fatalities by purchasing a selected group of death certificates from the states. Because neither emergency room nor death certificate data provide detailed information on hazard patterns or causes of injuries, CPSC also investigates selected incidents to obtain more detailed information. CPSC’s data give the agency only limited assistance in applying its project selection criteria. Data on all CPSC’s project selection criteria suffer from major limitations, as shown in table 1. In fact, none of the criteria are supported by complete data that are available for most projects at the time the project is selected. CPSC staff identified four data-gathering areas as key concerns: (1) lack of data on injuries treated in physicians’ offices and other settings outside the emergency room; (2) lack of data that would identify chronic illnesses that may be associated with consumer products; (3) sketchy information about accident victims, which limits the ability to assess which hazards disproportionately affect vulnerable populations; and (4) lack of data on exposure to consumer products. 
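National injury estimates built from a probability sample of emergency rooms, as described above, rest on statistical weighting: each sampled case represents some number of cases nationwide. The sketch below shows only the basic idea; the hospital strata, case counts, and weights are invented and do not reproduce CPSC's actual estimation methodology.

```python
# Hypothetical sketch of sample-weighted national estimation: each
# emergency room case observed in the hospital sample carries a weight
# equal to the number of national cases it represents. All numbers
# below are invented for illustration.

sampled_cases = [
    # (hospital stratum, cases observed for the product, weight per case)
    ("small",  12, 80.0),
    ("medium",  7, 55.0),
    ("large",  21, 30.0),
]

national_estimate = sum(cases * weight for _, cases, weight in sampled_cases)
print(round(national_estimate))  # weighted total, not the raw count of 40
```

This structure also shows why injuries treated outside emergency rooms are invisible to such a system: cases that never enter the sampled settings carry no weight at all.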
Such information is needed not only because it is a criterion for project selection but also because it is important in evaluating the success of CPSC’s injury reduction efforts and determining the need for possible follow-up actions. According to CPSC staff, identifying chronic illnesses associated with consumer products is nearly impossible with CPSC’s current data. CPSC staff stated that little is known about many chronic illness hazards that may be associated with potentially dangerous substances, and even less information is available about which consumer products may contain these ingredients. Chronic illnesses are likely to be especially underestimated in CPSC’s emergency room data, because they are underrepresented among emergency room visits and because product involvement is more difficult to ascertain. Similarly, consumer product involvement is seldom recorded on death certificates in the case of chronic illnesses. Sketchy information about accident victims also limits CPSC’s ability to assess which consumer product hazards have a disproportionate impact on vulnerable populations. CPSC’s surveillance data systems provide information only on the age of the victim; no systematic or comprehensive information is available to determine whether a given hazard has a special impact on other vulnerable populations such as people with disabilities. A former commissioner told us that the lack of other demographic information (such as race, income, and disability status) made it difficult to know which subpopulations were predominantly affected by a particular hazard. Another commissioner echoed this concern, adding that such information would be useful in targeting public information campaigns on certain hazards to those groups that need the information most. To determine what caused an accident, CPSC conducts in-depth investigations, which may include interviews, testing of the product, and recreations of the incident. As with exposure data, these investigations are not conducted for every project and are done only after a project has been established. 
Thus, assessment of causation at the project selection stage is unavoidably speculative. We believe that improved information on each of these four areas is necessary for CPSC to make informed decisions on potential agency projects. However, we also recognize that such information may be costly to obtain. In our report, we recommend that the Chairman of CPSC consult with experts both within and outside the agency to prioritize CPSC’s needs for additional data, investigate the feasibility and cost of alternative means of obtaining these data, and design systems to collect and analyze this information. CPSC uses two analytical tools—risk assessment and cost-benefit analysis—to assist in making decisions on regulatory and nonregulatory methods to address potential hazards. Risk assessment involves estimating the likelihood of an adverse event, such as injury or death. Cost-benefit analysis details and compares the expected effects of a proposed regulation or policy, including both the positive results (benefits) and the negative consequences (costs). The Congress requires CPSC to perform cost-benefit analyses before issuing certain regulations, and CPSC has conducted cost-benefit analyses for these regulations and in other situations in which such an analysis was not required by law. Because most of the agency’s projects do not involve regulation, relatively few CPSC projects conducted between January 1, 1990, and September 30, 1996, were subject to these requirements. We identified 8 cost-benefit analyses that CPSC performed in accordance with these requirements and an additional 21 analyses that it conducted when it was not required. Before issuing certain regulations, CPSC is required to consider the degree and nature of the risk of injury the regulation is designed to eliminate or reduce. However, CPSC usually does not conduct a formal, numerical risk assessment before issuing a regulation, and the law does not require it to do so. 
We determined that CPSC conducted 24 risk assessments between January 1, 1990, and September 30, 1996; only 4 of these were associated with regulatory action. CPSC’s data often cannot meet the demands for information posed by risk assessment and cost-benefit analysis. As a result, the agency’s estimates of risks, costs, and benefits are less accurate because they reflect the substantial limitations of the underlying data. For example, because CPSC’s data undercount the deaths and injuries associated with particular consumer products, estimates of risk—and the potential benefits of reducing that risk—appear smaller than they actually are. However, CPSC’s data provide information only on whether a product was involved in an accident, not whether the product caused the accident. This can sometimes make the risks assessed by CPSC—and the benefits of reducing those risks—appear greater. The methodology used to conduct a cost-benefit analysis frequently depends on the circumstances and the context of the analysis. For this reason, there is no complete set of standards for evaluating the quality of an individual cost-benefit analysis. However, the professional literature offers some guidance for analysts, and certain specific elements are frequently used to determine whether a given analysis meets a minimum threshold of comprehensiveness and openness. For example, analysts generally agree that all methodological choices and assumptions should be detailed, all limitations pertaining to the data should be revealed, and measures of uncertainty should be provided to allow the reader to take into account the precision of the underlying data. Similarly, practitioners generally call for sensitivity analysis, which enables the reader to determine which assumptions, values, and parameters of the cost-benefit analysis are most important to the conclusions. CPSC’s analyses, however, frequently did not meet these standards. Furthermore, some of CPSC’s data sets have a known upward or downward bias because of the way the data were constructed. 
For example, when estimates of incidents are based only on investigated or reported cases, two potential biases are likely to be introduced into the analysis: (1) the estimates are likely to be biased downward by nonreporting and (2) the incidents reported tend to be the more severe ones. In only 53 percent of applicable cases did CPSC’s analysis inform the reader of known limitations inherent in the data being used for cost-benefit analysis. A comprehensive analysis also considers risk-risk trade-offs, such as the possibility that consumers will take fewer precautions in response to a change in a product’s safety features. For example, in establishing a standard for child-resistant packaging that was also “senior-friendly,” CPSC considered that because child-resistant medicine bottles can be difficult to open, a grandparent might leave the cover off the bottle, creating an even greater risk than would exist with the original cap. Although CPSC considered such factors in some cases, only 49 percent of its analyses reflected potential risk-risk trade-offs. CPSC has not established internal procedures that require analysts to conduct comprehensive analyses and report them in sufficient detail. For example, according to CPSC staff, the agency has little written guidance about what factors should be included in cost-benefit analyses, what methodology should be used to incorporate these factors, and how the results should be presented. Staff also told us that CPSC analyses are not generally subject to external peer review. Such reviews can serve as an important mechanism for enhancing the quality and credibility of the analyses that are used to help make key agency decisions. In our report, we recommend that the Chairman direct agency staff to develop and implement procedures to ensure that all cost-benefit analyses performed on behalf of CPSC are comprehensive and reported in sufficient detail, including providing measures of precision for underlying data, incorporating information on all important costs and benefits, and performing sensitivity analysis. 
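The kind of sensitivity analysis practitioners call for can be illustrated with a minimal sketch. All dollar figures and injury counts below are invented; the point is only that when one uncertain input (here, injuries prevented, which undercounted surveillance data would understate) is varied across a plausible range, the estimated net benefit of a rule can change sign.

```python
# Minimal sensitivity-analysis sketch: recompute net benefits while
# varying a single uncertain input. All figures are invented.

def net_benefit(injuries_prevented, value_per_injury, compliance_cost):
    """Annual net benefit of a hypothetical safety rule."""
    return injuries_prevented * value_per_injury - compliance_cost

VALUE_PER_INJURY = 50_000    # assumed societal cost avoided per injury
COMPLIANCE_COST = 2_000_000  # assumed annual industry compliance cost

for injuries in (30, 40, 50, 60):  # plausible range around a point estimate
    nb = net_benefit(injuries, VALUE_PER_INJURY, COMPLIANCE_COST)
    print(f"{injuries} injuries prevented -> net benefit ${nb:,}")
```

In this toy range the rule goes from a net loss at 30 prevented injuries to a net gain at 60, which is exactly the information a reader needs to judge how much the conclusion depends on the injury estimate.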
To help minimize the possibility that a product might be unfairly disparaged, in section 6(b) of the Consumer Product Safety Act, the Congress imposed restrictions on CPSC’s disclosure of manufacturer-specific information. Before CPSC can release any information that identifies a manufacturer, it must take “reasonable steps” to verify the accuracy of the information and to ensure that disclosure is fair; notify the manufacturer that the information is subject to release; and give the manufacturer an opportunity to comment on the information. These restrictions apply not only to information the agency issues on its own—such as a press release—but also to information disclosed in response to a request under the Freedom of Information Act. Section 6(b) also requires CPSC to establish procedures to ensure that releases of information that reflect on the safety of a consumer product or class of products are accurate and not misleading, regardless of whether the information disclosed identifies a specific manufacturer. In implementing section 6(b), CPSC established several procedures designed to ensure compliance with these statutory requirements. These include obtaining written verification from individuals of the information they report to the agency, notifying manufacturers by certified mail when manufacturer-specific information has been requested, and giving manufacturers the option to have their comments published with any information disclosed. For example, CPSC has issued clearance procedures for situations when commissioners and staff initiate public disclosures—for example, when CPSC publishes the results of agency research. Under CPSC’s guidelines, each assistant or associate executive director whose area of responsibility is involved must review the information and indicate approval for the release in writing. After all other reviews have been completed, the Office of the General Counsel must also review and approve the release. 
Information from three sources—industry sources, published legal cases, and data on retractions—suggests that CPSC complies with its statutory requirements concerning information release. Industry sources, even those otherwise critical of the agency, told us that CPSC generally keeps proprietary information confidential as required by law. Our review of published legal decisions found no rulings that CPSC violated its statutory requirements concerning the release of information. Retractions by CPSC are also rare—only three retractions have been issued by CPSC since the agency was established. Nevertheless, some of the people we interviewed suggested possible changes to section 6(b). Although these individuals raised issues about the extent of the protection afforded to manufacturers and the resources necessary to ensure compliance, we did not assess whether the specific suggestions were necessary or feasible. CPSC’s chairman, other CPSC officials, former commissioners, and the representative of a consumer advocacy group stated that compliance with 6(b) is costly for CPSC and delays the agency in getting information out to the public. To reduce the burden of complying with these requirements, CPSC staff have suggested that the notification requirement that gives manufacturers 20 days in which to comment should apply only to the first time information is released and that, instead of requiring CPSC to verify information from consumer complaints, the agency should be allowed to issue such information with an explicit disclaimer that CPSC has not verified the consumer’s report. Instead of reducing CPSC’s verification requirements, some industry representatives suggested expanding them. These manufacturers stated that before CPSC releases incident information, the agency should substantiate it, rather than relying on a consumer’s testimony. Industry representatives stated—and CPSC staff confirmed—that many of the requests for CPSC information come from attorneys for plaintiffs in product liability suits. 
As a result, some industry representatives expressed concern that unsubstantiated consumer complaints could be used against them in product liability litigation. They suggested that 6(b) should require CPSC to substantiate all incident reports by investigating them before they can be disclosed, instead of merely checking with the consumer. However, CPSC officials told us that, because of limited resources, investigations—which are time consuming and costly—can be conducted only on a small proportion of specially selected cases. Mr. Chairman, that concludes my prepared statement. I would be happy to answer any questions you or Members of the Subcommittee might have.
GAO discussed the Consumer Product Safety Commission's (CPSC) procedures to protect consumers from unreasonable risk of injuries, focusing on CPSC's project selection, use of cost-benefit analysis and risk assessment, and information release procedures. GAO noted that: (1) although CPSC has established criteria to help select new projects, with the agency's current data, these criteria can be measured only imprecisely if at all; (2) CPSC has described itself as "data driven," but its information on product-related deaths and injuries is often sketchy, and its lack of systematized descriptive information on past or ongoing projects makes it more difficult for agency management to monitor current projects and to assess and prioritize the need for new projects in different hazard areas; (3) CPSC's data are often insufficient to support rigorous application of risk assessment and cost-benefit analysis; (4) in addition, the cost-benefit analyses conducted by CPSC between 1990 and 1996 were frequently not comprehensive, and the reports on these analyses were not sufficiently detailed; (5) CPSC has established procedures to implement statutory requirements restricting the release of manufacturer-specific information; and (6) although industry representatives, consumer advocates, and CPSC expressed differing views on the merits of these restrictions, available evidence suggests that CPSC complies with these statutory requirements.
In 1972, Congress passed FACA in response to a concern that federal advisory committees were proliferating without adequate review, oversight, or accountability. Although Congress recognized the value of advisory committees to public policymaking, it included measures in FACA intended to ensure that (1) valid needs exist for establishing and continuing advisory committees, (2) the committees are properly managed and their proceedings are as open as feasible to the public, and (3) Congress is kept informed of the committees’ activities. Under FACA, the President, the Director of OMB, and agency heads are to control the number, operations, and costs of advisory committees. To help accomplish these objectives, FACA directed that a Committee Management Secretariat be established at OMB to be responsible for all matters relating to advisory committees. In 1977, the president transferred advisory committee functions from OMB to GSA. The president also delegated to GSA all of the functions vested in the president by FACA, except that the annual report to Congress required by the act was to be prepared by GSA for the president’s consideration and transmittal to Congress. GSA, through its Committee Management Secretariat, is responsible for prescribing administrative guidelines and management controls applicable to advisory committees governmentwide. It also has other responsibilities, including certain oversight responsibilities, such as consulting with agencies on establishing advisory committees and conducting comprehensive reviews of advisory committees. To fulfill its responsibilities, GSA has developed regulations and other guidance to assist agencies in implementing FACA, has provided training to agency officials, and was instrumental in creating and has collaborated with the Interagency Committee on Federal Advisory Committee Management. 
FACA assigns agency heads responsibility for issuing administrative guidelines and management controls applicable for their advisory committees. FACA and GSA regulations assign them additional responsibilities for their advisory committees. For example, agency heads are responsible for (1) appointing a designated federal officer for each committee to oversee the committee’s activities, (2) reviewing annually the need to continue existing committees, (3) ensuring that meetings are held at reasonable times and places, (4) ensuring that members of the public are permitted to file written statements with the committees and are allowed to speak to the committees if agency guidelines permit, and (5) reviewing committee members’ compliance with conflict-of-interest statutes. FACA also calls for agency heads to designate a committee management officer to whom the agency head frequently delegates these responsibilities. In February 1993, the President issued Executive Order 12838, which directed agencies to reduce by at least one-third the number of discretionary advisory committees by the end of fiscal year 1993. Discretionary committees are those created under agency authority or authorized—but not mandated—by Congress. OMB, in providing guidance to agencies on the executive order, established a maximum ceiling number of discretionary advisory committees for each agency and a monitoring plan. Under the guidance, agencies were to annually submit committee management plans to OMB and GSA. The number of advisory committees grew from 1,020 in fiscal year 1988 to 1,305 in fiscal year 1993. The number then declined over the next several years to 963 advisory committees in fiscal year 1997. This decrease occurred after the President’s February 1993 executive order to reduce the number of discretionary committees. 
A total of 36,586 individuals served as members of the 963 committees in fiscal year 1997, and GSA reported that the cost to operate the 963 committees in that year was about $178 million. FACA permits agencies to compensate nonfederal committee members for their services; and according to GSA data, agencies paid about $14 million in fiscal year 1997 for such services. Advisory committee members are to be reimbursed for their travel, lodging, and meals. The single largest cost in fiscal year 1997—about $81 million of the $178 million—represented the value of compensation paid to federal employees for the time they spent assisting and monitoring advisory committees. Although the number of advisory committees has decreased, the average number of members per committee and the average cost per committee have increased. On average, between fiscal years 1988 and 1997, the number of members per advisory committee increased from about 21 to 38, and the cost per advisory committee increased from $90,816 to $184,868. In constant 1988 dollars, the average cost per advisory committee increased from $90,816 to $140,870 over the same period. For each advisory committee member to whom we sent a questionnaire, we identified an advisory committee to which the member belonged and instructed the member to use that committee in answering our questions. The committee we identified was the only federal advisory committee of which most respondents said they were members. Respondents had served as members on these committees for various periods. About 28 percent had served 1 year or less, 54 percent had served between 1 and 4 years, and 18 percent had served over 4 years. The answers the committee members gave to our survey showed that generally they believed their committees had worthwhile purposes, that the advice and recommendations that the committees gave were consistent with those purposes, and that the advice and recommendations were balanced and independent. 
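The averages above follow directly from the reported totals, and the report's own nominal and constant-dollar figures imply a cumulative 1988-97 price adjustment. The sketch below reproduces that arithmetic; the deflator is backed out of the figures quoted in the text rather than taken from an official price index.

```python
# Arithmetic behind the fiscal year 1997 figures quoted in the text.
# The implied deflator is derived from the report's own numbers.

members, committees = 36_586, 963
total_cost = 178_000_000  # reported operating cost, approximate

avg_members = members / committees        # ~38 members per committee
avg_cost = total_cost / committees        # ~$184,800, consistent with $184,868

nominal_1997 = 184_868                    # average cost per committee, nominal
real_1997_in_1988_dollars = 140_870
implied_deflator = nominal_1997 / real_1997_in_1988_dollars

print(round(avg_members))          # 38
print(round(implied_deflator, 3))  # ~1.312: cumulative 1988-97 inflation
```

The implied deflator of about 1.31 means roughly a third of the growth in nominal cost per committee over the decade is attributable to inflation, with the remainder reflecting real growth.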
In addition, they generally believed that the agencies to which their committees reported sought advice and recommendations from the committees and used the advice or recommendations after receiving them. Specifically: About 94 percent of the respondents generally or strongly agreed that the committees they were affiliated with had clearly defined purposes, and 96 percent generally or strongly agreed that the committees’ purposes were worthwhile. Ninety-four percent of the respondents generally or strongly agreed that the advice or recommendations made by their committees were consistent with the committees’ purposes. About 90 percent of the respondents generally or strongly agreed that committee membership was fairly balanced in terms of the points of view represented, and 85 percent generally or strongly agreed that their committees included a representative cross-section of those directly interested in and affected by the issues discussed by the committees. About 79 to 82 percent of the respondents said they were provided to a great or very great extent with the necessary preparatory materials prior to (1) committee meetings, (2) discussing issues, and (3) deciding on issues. Another 11 to 13 percent said they had been provided the necessary preparatory material to a moderate extent. The percentage of general advisory committee members who answered to a great or very great extent was less—67 to 72 percent—but still the vast majority. When asked if they generally had access to the information they needed to make an informed decision on an issue, about 93 percent of the respondents said they did in either all or most cases. About 76 percent of the respondents said committee members provided somewhat more or much more input than agency officials in formulating committee advice or recommendations. 
About 79 percent of the respondents thought that committee members should provide somewhat more or much more input than agency officials in formulating committee advice and recommendations. However, respondents from general advisory committees expected and thought actual member input to be less. About 60 percent of the general advisory committee respondents said committee members usually provided somewhat more or much more input than agency officials, and 65 percent said that committee members should provide somewhat more or much more input. In addition, about 26 percent of the general advisory committee respondents, compared to about 16 percent of overall respondents, said input from committee members and agency officials was about equal; and 29 percent, compared to about 18 percent overall, said the input should be equal. About 85 percent of the respondents said that to their knowledge, no agency official had ever asked their committees to give advice or make a recommendation that was based on inadequate data or analysis. Fewer respondents who were members of general advisory committees said “no”—about 77 percent of them said their committees were never asked by agency officials to give advice or make recommendations on the basis of inadequate data or analysis. About 13 percent of the general advisory committee respondents reported that an agency official had made such a request, and 10 percent did not know one way or the other. These latter two percentages were larger than the overall percentages (8 percent and 7 percent, respectively) for the same two questions. About 92 percent of the respondents said that to their knowledge, no agency official had ever asked their committees to give advice or make a recommendation that was contrary to the general consensus of the committees. About 4 percent said officials had made such a request, and 4 percent did not know one way or the other. 
Eighty-seven percent of the respondents generally or strongly agreed that agencies solicited advice or recommendations from the committees, and about 84 percent said they strongly or generally agreed that the agencies considered the advice or recommendations. Appendix II contains a copy of the questionnaire that we sent to committee members with the weighted number or percentage of committee members responding to each item.

FACA sets out at least 17 requirements for agencies to follow in establishing and operating federal advisory committees, including preparing a charter for the committee; developing plans for achieving a fairly balanced membership; keeping detailed minutes of committee meetings; and preparing annual reports to GSA on new, continuing, and terminated committees. (All 17 requirements are listed in app. IV.) We asked the 19 agencies several questions on how useful or burdensome they found FACA requirements.

With regard to the requirements overall, 10 agencies viewed them in a positive light. Of these 10 agencies, 6 said the requirements were much more useful than burdensome, and 4 said the requirements were somewhat more useful than burdensome. The views of the other nine agencies were less positive. Of these nine agencies, seven considered the requirements about as burdensome as useful, and two said the requirements were somewhat more burdensome than useful.

For each of 17 FACA requirements, we asked the 19 agencies to rate the extent of the requirement’s usefulness. A majority of the agencies (generally more than 10 agencies) rated 14 of the 17 requirements as useful to a moderate, great, or very great extent. These majorities frequently rated a requirement’s usefulness as great or very great. For example, 16 agencies said the requirement to create a plan for achieving fairly balanced committee membership was useful to a great or very great extent.
Thirteen agencies considered the requirement to keep detailed meeting minutes as useful to a great or very great extent. We also asked the agencies to rate the extent to which they considered each of the 17 requirements as burdensome. In comparison to the number of FACA requirements considered useful, far fewer requirements were considered especially burdensome by a majority of the agencies. Four requirements were rated by a majority of the agencies as burdensome to a moderate, great, or very great extent. These four requirements were: develop a plan to achieve balanced committee membership, keep detailed minutes of meetings, fulfill recordkeeping requirements, and prepare an annual report on each advisory committee. Interestingly enough, all four requirements also had been rated useful to a moderate, great, or very great extent by a majority of the agencies.

The agencies’ responses regarding three requirements were different from their responses to the other 14. Two requirements—prepare an annual report on closed advisory committee meetings and file advisory committee reports with the Library of Congress—were said by a majority of the agencies to have “little or no” or “some” usefulness or burden. There was a mix of answers for the third requirement—follow-up reports to Congress on recommendations by presidential advisory committees (any federal advisory committee that advises the president). Seven agencies said it was useful to a moderate or greater extent, and six said it was less than moderately useful. Nine agencies said it presented “some” or “little or no” burden, and four agencies said it was burdensome to a moderate or greater extent. Six agencies did not rate the usefulness or burden because they did not have any presidential advisory committees.

In rating the 17 requirements, agencies were given the opportunity to say what change they would make to each requirement.
Seven agencies made suggestions, and four of them focused on the matter of rechartering committees. FACA prohibits an advisory committee from meeting or taking any action until a committee charter has been filed with certain officials (for example, the agency head) and Congress and requires that charters contain 10 specific items, such as the committee’s objectives and scope of activities and the period of time necessary to carry out its purpose. FACA requires agencies to recharter advisory committees every 2 years regardless of how much more time they will need to accomplish their purposes. Two of the seven agencies suggested that rechartering be required every 5 years instead of the current 2 years.

Under FACA, peer review panels are treated as advisory committees, and 6 of the 19 agencies indicated that they used peer review panels. Only one of the six thought that peer review panels should be subject to all FACA requirements. The other five agencies said that peer review panels should be exempt from some, most, or all FACA requirements. Although we did not specifically ask why the panels should be exempt from some or all FACA requirements, some of the five agencies indicated that the nature of the panels’ work was incompatible with FACA requirements. For example, in contrast to the idea of open meetings as promoted by FACA, panel meetings were more often routinely closed to the public to protect the privacy or proprietary rights of those who submitted proposals.

Finally, we asked the agencies several burden-related questions that focused on the issue of litigation and FACA. We asked whether the possibility of litigation over compliance with FACA requirements inhibited them from forming new advisory committees and, more specifically, if they decided against forming a new advisory committee anytime during fiscal years 1995 through 1997 because of possible litigation.
The overwhelming response of the agencies was that the possibility of future litigation was not an inhibiting factor. Fourteen agencies said that the possibility of future litigation inhibited them to little or no extent. Seventeen agencies said that at no time during fiscal years 1995 through 1997 did they decide not to form a new committee because of the possibility of future litigation. However, some agencies have been involved in litigation over their compliance with FACA. Seven of the 19 agencies reported that they were involved in such litigation during fiscal years 1995 through 1997 and identified 13 lawsuits in total. According to the seven agencies, the major issues being litigated were whether the group that provided information was subject to the requirements of FACA (nine cases), whether the makeup of an advisory committee was balanced (two cases), and procedural issues (two cases). As of the date they were answering the questionnaire, the agencies said that nine cases had been ruled on by the courts; three cases were pending; and one case that was decided in favor of the plaintiff was, in effect, rendered moot by a subsequent amendment to FACA in 1997. According to the agencies, of the nine cases ruled on by the courts, the courts ruled for the agencies in eight cases and for the plaintiff in one. As previously mentioned, Executive Order 12838 established ceilings for each agency on the number of discretionary advisory committees. The number of discretionary committees in the aggregate that the 19 agencies reported having at the end of fiscal years 1995, 1996, and 1997 was about 88 percent, 95 percent, and 95 percent, respectively, of the aggregate ceiling. Twelve of the 19 agencies said the ceilings did not deter them from seeking to establish any new advisory committees. In general, the 12 agencies reported being at or slightly below their ceilings at the end of the 3 years (fiscal years 1995 through 1997) for which we requested data. 
However, seven agencies said the ceilings did deter them from seeking to establish new discretionary committees. For most of the years for which we requested data, the seven agencies were at or slightly below their ceilings. For those agencies that said they were deterred, we asked them to describe how the ceilings affected their ability to accomplish their missions. Four said they had to reconsider whether an advisory committee would really be necessary or had to give more careful consideration to which committees would continue or which new committees would be established. Two also indicated that committees that may have been warranted were not established, although they gave no numbers of such cases.

An agency could request approval from OMB to establish a committee that would place it over its ceiling, and 3 of the 19 agencies said they had made such requests over the 3-year period for which we requested information. In total, they said they made four requests to OMB, and OMB approved all four. Of these three agencies, two were among those that said they were deterred from seeking to establish new advisory committees by the ceilings imposed by the executive order. The third agency did not consider the ceiling to be a deterrent.

Congress has required agencies to have various advisory committees. According to GSA, there were 422 advisory committees in fiscal year 1997 that had been mandated by Congress. As agreed with your offices, we asked the 19 agencies in our survey whether they had any mandated committees that they believed should be terminated. Six agencies said yes and listed a total of 26 different advisory committees. Of the 26 committees, according to GSA, 17 held no meetings and incurred no costs in fiscal year 1997; 3 incurred some costs ($4,000) but held no meetings; and 6 held a total of 14 meetings and incurred costs of about $190,000. The names of the 26 committees and the agencies they serve are shown in appendix III.
Three of the 19 agencies reported that they had made formal requests to Congress to terminate mandated committees during the 3 years for which we requested information (fiscal years 1995 through 1997). These three agencies were among the six agencies that identified committees that they believed should be eliminated. The three agencies asked Congress to terminate 18 mandated committees in total. According to the agencies, Congress terminated one of those committees. The remaining 17 committees were listed among the 26 committees that agencies said should be terminated.

Only Congress can terminate a congressionally mandated advisory committee, and we asked the 19 agencies whether they found that requirement burdensome. Twelve agencies indicated that they incurred little or no burden. The other seven agencies reported burden ranging in extent from some to great. We asked them for suggestions to alleviate this burden. In essence, the suggestions were that agencies be given the authority to terminate mandated committees themselves, for example after notifying Congress of their intent to do so, after 2 years with notification to congressional authorizing committees, or after 4 years without notification.

In addition to asking for their suggestions, we asked all 19 agencies their opinions about a sunset/automatic termination requirement for congressionally mandated committees. Their opinions were mixed. Ten agencies said a sunset/automatic termination requirement would be helpful to a moderate, great, or very great extent. Nine agencies said it would provide little or no help or only some help. Appendix IV contains a copy of the questionnaire that we sent to agencies with the number of agencies responding to each item.

One intended purpose of FACA is to open government to the public.
We asked the advisory committee members and the agencies that we surveyed a series of different questions about public participation. We asked committee members questions about (1) public access to committee meetings and (2) public input in general to their committees (that is, without regard to whether it was by letter, in person at meetings, or by other means). The answers we received often depended on whether respondents were members of peer review panels or general advisory committees. Those answers indicated that peer review panels were less likely to allow public access and obtain public input than were general advisory committees. The nature of their work may explain why peer review panels do not obtain public input as much as general advisory committees do.

About 27 percent of the respondents said that all of their committee meetings were open to the public, and 37 percent said that all meetings were closed to the public. Another 19 percent noted that some meetings or portions of meetings were open and others were closed. Finally, 17 percent of the respondents were not sure what access the public had to their committee meetings. Most of those whose committees held closed or partially closed meetings agreed with their committees’ reasons for closing those meetings to the public. Two reasons frequently cited were discussions involving personal privacy issues and discussions involving trade secrets.

According to GSA data, advisory committees frequently hold closed meetings. Agencies reported to GSA that about 58 percent of the 5,700 advisory committee meetings held in fiscal year 1997 were either closed or partially closed. Advisory committee meetings can be closed to the public if the president or the head of the agency to which the advisory committee reports determines that the meeting may be closed in accordance with provisions of the Government in the Sunshine Act (5 U.S.C. 552b(c)).
The provisions provide for closed meetings to protect, for example, matters that need to be kept secret in the interest of national security or foreign policy; trade secrets; and information of a personal nature, the disclosure of which would constitute an invasion of privacy. Respondents who were members of peer review panels—which frequently deal with such proprietary and sensitive information—were much less likely to say their committee meetings were totally open to the public and much more likely to say their meetings were totally closed to the public. About 2 percent of the panelists said their meetings were always open to the public. About 64 percent said their meetings were always closed to the public. About 44 percent of all respondents to our survey said yes and 31 percent said no when asked whether members of the public were ever allowed to express their views to the respondents’ advisory committees. The remaining 25 percent were not sure whether members of the public were allowed to express their views to the committees. Approximately 81 percent of those who replied no or not sure did not believe their committees should provide members of the public with the opportunity to express their views. In comparison to the overall percentages, respondents who were members of peer review panels were much more likely to say the public was not allowed to express views to the committee (52 percent of the panel members who responded), to say they were not sure whether the public was allowed (36 percent), and to believe the public should not be allowed to express their views to the committee (88 percent). We also asked those who said their committees allowed the public to express its views (in other words, the 44 percent who said yes) whether the committees provided sufficient opportunity to the public to express its views. About 59 percent replied that in their opinions, the opportunity was sufficient to a great or very great extent. 
Another 19 percent thought it was moderately sufficient. In comparison to these overall percentages, respondents from peer review panels were less likely to say the extent was greatly or moderately sufficient: about 21 percent said great or very great, while 8 percent said moderate. A sizeable number of the panelists—about 38 percent—said they had no basis to judge whether the extent was sufficient.

We also asked committee members about subcommittees they served on and whether FACA requirements were followed. About 34 percent of the respondents said the committees they served on had subcommittees, and 68 percent of those respondents said they had served on at least one subcommittee over the past year. A majority (about 59 to 72 percent of respondents) said that, for all or most subcommittee meetings, detailed minutes were kept and the designated federal officer attended and either approved or called for the meetings. However, less than one-half (about 41 to 45 percent of respondents) said that members of the public were given access to the meetings and allowed to provide input, either in writing or in person, for all or most of the subcommittee meetings.

In general terms, most of the agencies—16 of the 19—said FACA had not prohibited them from receiving or soliciting input from public task forces, public working groups, or public forums on issues or concerns of the agency. The three agencies that said FACA had prohibited them explained that they had to limit their prior practice of forming working groups or task forces to address specific local projects or programs, that FACA has made it more cumbersome to seek citizen input because of the staff time required to complete FACA paperwork, or that solicitation of a consensus opinion from a task force or working group may lead to that task force or group being considered a “utilized” committee and thus subject to FACA.
Although agencies generally reported that FACA has not prohibited them from obtaining input, there appears to be some concern among agencies about the possibility of being sued for noncompliance with FACA if they obtain input from parties outside of the agency and its advisory committees. Eight of the 19 agencies said the possibility of such litigation has inhibited them to some, a moderate, or a very great extent from obtaining outside input independent of FACA. Moreover, six agencies, including five of the previous eight, said there were at least eight instances during fiscal years 1995 through 1997 when they decided not to solicit or receive outside input because of their concern about the possibility of future litigation.

Agencies determine if members of the public can speak at advisory committee meetings. We therefore asked the 19 agencies whether they permitted members of the public to speak before their advisory committees. Fourteen said yes, and five said yes and no, indicating that they permitted the public to speak before some committees but not others. In this latter category, the reasons the agencies provided for not permitting the public to speak included time constraints, a need to maintain order, and statutory requirements that meetings be closed for such reasons as protecting classified information or safeguarding Privacy Act material.

When an agency does permit members of the public to speak before its advisory committees, there may be restrictions. According to the agencies, restrictions included public presentations being contingent on the time available at the end of meetings, time limits being imposed on speakers, and members of the public being requested to provide written statements. For members of the public to speak at advisory committee meetings, they must be aware of when a meeting is to occur.
FACA requires that specific information be placed in the Federal Register to notify interested parties of the scheduled date, time, and location of advisory committee meetings. Fifteen agencies said they notify the public of scheduled meetings by using methods in addition to the Federal Register, such as posting notices on the Internet; publishing notices in newsletters, newspapers, and trade association publications; or mailing notices to stakeholders. However, four agencies said they used only the Federal Register notice.

GSA regulations generally require agencies to give 15 days’ advance notice in the Federal Register for committee meetings. Many of the agencies—14 of the 19—said they gave less than 15 days’ advance notice at times during fiscal years 1995 through 1997. Altogether, these agencies said they gave less than 15 days’ advance notice 153 times during those 3 years. This number represented a very small fraction of the 15,885 committee meetings that GSA reported as being held during those years by all advisory committees.

We also asked the agencies about subcommittee meetings. The agencies reported that there were 463 subcommittees reporting to full committees in fiscal year 1997. These subcommittees held 926 meetings in fiscal year 1997, and 249 of those meetings were reportedly not covered under FACA. For the 249 meetings not covered under FACA, agencies reported that the meetings were held for activities such as gathering information, drafting position papers, doing research, and performing analysis. Of eight agencies responding, the majority (five to six agencies) said that FACA requirements, such as Federal Register notices of meetings, detailed minutes, and public access, were not followed for all or most subcommittee meetings. About one-half (four to five agencies) said the subcommittee meetings were approved or called for and attended by the designated federal officer.
GSA and OMB provided comments on a draft of this report. On June 11, 1998, we met with the Director of GSA’s Committee Management Secretariat, who said he found the draft report to be very comprehensive, informative, and useful. The Director said that surveying committee members and agencies can provide the Secretariat very useful information to help it manage the federal advisory committee program, and the survey should be done every 3 or 4 years. However, according to the Director, no surveys have been done by the Secretariat and none are planned. The Director explained that the Secretariat lacks the technical expertise as well as the clear authority to conduct surveys of committee members and agencies. The Director said the responses we received from committee members and agencies did not indicate any perceived significant systemic problems with the advisory committee program. However, he said the responses suggested areas that should be examined further, several of which GSA already had been examining and others of which GSA plans to examine. The Director said that GSA can address some of these areas by revising its FACA regulations, but addressing other areas will require legislative changes to FACA. For example, GSA expects to publish proposed regulations in July or August 1998 that will address the definition of an advisory committee. The Director said that GSA recognizes that some agencies or their field offices may sometimes be reluctant to obtain information from the public for fear of violating FACA, and one of GSA’s goals in revising the regulations is to provide clarifying guidance and standards as to when FACA does and does not apply. According to the Director, GSA has been working with the Department of Justice on this definition because Justice is responsible for defending the government in advisory committee litigation. 
The Director also said that the use of subcommittees by advisory committees is another area that GSA intends to address in its regulations. For example, he believes that it is important for agencies to make uniform determinations of when a subcommittee meeting or other activity would be subject to FACA’s requirements. The Director said that GSA needs to evaluate and work with Congress on the usefulness of some specific FACA requirements, such as sending copies of advisory committee reports to the Library of Congress, and to proactively address the issue of terminating congressionally mandated committees when they no longer serve a useful purpose. He said that GSA was sympathetic to extending the charter period of advisory committees beyond the 2-year period now stipulated by FACA.

The Director also said GSA could possibly support exempting peer review panels from some FACA requirements, but GSA does not favor exempting them from all requirements. For example, he said it is important for the public to have access to information on how agencies ensure that peer review panels have balanced representation and are free from potential conflicts of interest. In addition, he noted that the increased accountability provided by FACA and Executive Order 12838 has helped control the number of peer review panels and their costs.

On June 12, 1998, an OMB official responsible for advisory committee matters said that OMB had no comments on the draft report other than that it accurately presented the impact of Executive Order 12838. As agreed with your offices, unless you announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report.
At that time, we will send copies of this report to the Ranking Minority Member, Subcommittee on Government Management, Information, and Technology, House Committee on Government Reform and Oversight; the Chairman, Senate Committee on Governmental Affairs; the Chairman and Ranking Minority Member, House Committee on Government Reform and Oversight; the Acting Director, OMB; the Administrator, GSA; and other interested parties. Copies will be made available to others on request. Major contributors to this report are listed in appendix V. Please contact me at (202) 512-8676 if you or your staff have any questions.

The Chairman of the Subcommittee on Government Management, Information, and Technology, House Committee on Government Reform and Oversight; and the Ranking Minority Member of the Senate Committee on Governmental Affairs asked us to review selected matters relating to the Federal Advisory Committee Act (FACA). We addressed several aspects of these separate requests in two previous products. Our objectives in this review were to obtain (1) federal advisory committee members’ perceptions on the extent to which their advisory committees were providing balanced and independent advice and recommendations as required by FACA; (2) federal agencies’ views on the extent to which they found compliance with FACA useful or burdensome, the impact of Executive Order 12838 on their ability to accomplish their missions, and whether any of their advisory committees mandated by Congress should be terminated; and (3) advisory committee members’ and federal agencies’ views on the extent to which they believed the public was afforded access to advisory committee proceedings and a means to express their views to agencies and their advisory committees.
To respond to these objectives, we designed and pretested two questionnaires, one of which we later sent to a randomly selected, statistically representative sample of federal advisory committee members and the other of which we sent to all 14 federal departments and to independent agencies with 10 or more advisory committees. Regarding the issue of public participation, we were unable to send a questionnaire to members of the public (individuals and organizations) who may have provided or attempted to provide information to advisory committees because we could not identify the universe of such individuals and organizations from which to draw a statistically representative sample to query. Because a comprehensive listing of the names and addresses for all federal advisory committee members was not available, we requested from federal agencies the names and addresses of members assigned to advisory committees as of August 1, 1997. The Committee Management Secretariat assisted us in making this request to the agencies’ committee management officers. We received the names (and about 95 percent of the addresses) for 28,499 committee members on 783 advisory committees in 43 federal agencies or entities. These numbers were somewhat less than the 36,586 members serving on 963 advisory committees in 57 federal agencies or entities during fiscal year 1997, according to General Services Administration (GSA) summary data as of April 27, 1998. Our survey of federal advisory committee members initially contained a sample of 900 committee members. Beginning on February 25, 1998, we mailed 865 questionnaires to a sample of committee members for whom the agencies provided us with mailing addresses. Committee members who did not respond to our initial questionnaire were sent a follow-up questionnaire beginning on March 31, 1998. Table I.1 summarizes the disposition of our sample of 900 committee members. 
This sample of 900 committee members was stratified according to the functional types of advisory committees, which we obtained from GSA. The types of committee functions we used to create our sampling strata included grant review, national policy, nonscientific, scientific/technical, and other. We combined the regulatory negotiation and other types and those unclassified by GSA into the functional type “other.” In each of these five strata, we selected a random sample of committee members. We randomly selected 400 of the 13,392 members of grant review committees, 200 of 6,263 members of scientific/technical committees, 180 of 5,586 members of nonscientific committees, 80 of 2,393 members of national policy committees, and 40 of 865 members of the other committees.

We received usable questionnaires from 67 percent of the eligible sample. The response rate across the five strata ranged from 62 percent to 72 percent. The overall sample had a confidence interval of no greater than ±4 percent. The confidence interval for the grant review committees was no greater than ±6 percent. The confidence interval for the others, which we refer to as general advisory committees, was no greater than ±5.5 percent. The overall results are generalizable to all federal advisory committee members for whom we had names and addresses. The grant review and general advisory committee member results are generalizable to those types of advisory committees for which we had members’ names and addresses.

Although we did not test the validity of the respondents’ answers or the comments they made, we took several steps to check the quality of our survey data. We reviewed and edited the completed questionnaires, made internal consistency checks on selected items, and checked the accuracy of data entry on a sample of surveys. In addition to sampling errors, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors.
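The stratified design and confidence intervals described above can be illustrated with a short Python sketch. The strata populations, sample sizes, and 67 percent response rate come from the text; the worst-case proportion (p = 0.5), the 95 percent multiplier (z = 1.96), and the per-stratum finite-population correction are standard survey-sampling assumptions for illustration, not GAO's documented computation:

```python
import math

# Stratified sample of federal advisory committee members, as described in
# the text: (stratum population, sample drawn) for each functional type.
strata = {
    "grant review":         (13392, 400),
    "scientific/technical": (6263, 200),
    "nonscientific":        (5586, 180),
    "national policy":      (2393, 80),
    "other":                (865, 40),
}

N = sum(pop for pop, _ in strata.values())  # total members with known addresses
RESPONSE_RATE = 0.67                        # usable questionnaires (from the text)

def margin_of_error(p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from the
    stratified sample, applying a finite-population correction in each
    stratum (illustrative assumptions, as noted above)."""
    var = 0.0
    for pop, n in strata.values():
        respondents = n * RESPONSE_RATE      # expected usable responses
        weight = pop / N                     # stratum share of the population
        fpc = (pop - respondents) / (pop - 1)
        var += weight**2 * fpc * p * (1 - p) / respondents
    return z * math.sqrt(var)

print(f"population represented: {N}")
print(f"overall margin of error: ±{margin_of_error():.1%}")
```

Run as written, the sketch reproduces the 28,499-member population and yields an overall margin of error of roughly ±4 percent, consistent with the confidence interval the text reports for the overall sample.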
For example, differences in how a particular question is interpreted by the survey respondents could introduce unwanted variability in the survey’s results. We took steps in the development of the questionnaire, the data collection, and the data editing and analysis to minimize nonsampling errors. These steps, which we discussed earlier, included pretesting and editing the questionnaires.

The 19 federal departments and independent agencies to whom we sent questionnaires on February 24, 1998, accounted for 902 of the 1,000 (90 percent) advisory committees that existed governmentwide in fiscal year 1996, the latest year for which such data were available at the time we selected the agencies. According to GSA data, the other 98 advisory committees were chartered by 40 federal entities (offices of the Executive Office of the President; independent agencies; and federal boards, commissions, and councils). Table I.2 lists the 19 departments and agencies in our survey and their number of advisory committees during fiscal year 1996. We received completed questionnaires from all 19 agencies. We asked each agency to provide a consolidated response covering all of its various organizational components. Although agency information in this review applies only to the 19 agencies surveyed and cannot be projected governmentwide, this information can be generalized to the 902 advisory committees in the government that we included in our review. We did not verify the accuracy of the data provided by the agencies.

To aid us in meeting our objectives, we also interviewed GSA’s Committee Management Secretariat officials and reviewed applicable laws, regulations, and guidance to agencies regarding advisory committee activities. We also reviewed applicable court decisions and our prior GAO reports related to participation by outside parties on advisory committee issues.

Jessica A. Botsford, Attorney
Pursuant to a congressional request, GAO provided information on the views of federal advisory committees and federal agencies on Federal Advisory Committee Act (FACA) requirements. GAO noted that: (1) overall, the views presented by both the committee members and agencies GAO surveyed provided useful insights into the general operation of FACA as Congress explores possible improvements to FACA; (2) the responses of committee members to a series of questions, when taken together, conveyed a generally shared perception that advisory committees were providing balanced and independent advice and recommendations; (3) although the percentage differed by question, 85 percent to 93 percent of the respondents said their committees were balanced in membership, had access to the information necessary to make informed decisions, and were never asked by agency officials to give advice or make recommendations based on inadequate data or analysis or contrary to the general consensus among committee members; (4) FACA requirements were considered to be more useful than burdensome by 10 of the 19 agencies; (5) for the other nine agencies, the requirements were considered either as burdensome as they were useful or somewhat more burdensome than useful; (6) the ceilings on discretionary advisory committees imposed by Executive Order 12838 did not deter a majority--12 of 19--of the agencies from seeking to establish such committees, according to their responses; (7) agencies identified a total of 26 advisory committees mandated by Congress that they believed should be terminated; (8) this number represented about 6 percent of congressionally mandated advisory committees in existence during fiscal year (FY) 1997; (9) the overall responses GAO received from committee members on the issue of public participation were mixed; (10) about 27 percent of the respondents said that all of their committee meetings were open to the public, and 37 percent said that all of their committee meetings 
were closed to the public; (11) advisory committee meetings can be closed to the public to protect such things as trade secrets or information of a personal nature; (12) most of the agencies--16 of the 19--did not believe that FACA had prohibited them from soliciting or receiving input from the public on issues or concerns of the agency independent of the FACA process; (13) still, some agencies were reluctant to get input from parties that were not chartered as FACA advisory committees because of concern that this could lead to possible litigation over compliance with FACA requirements; and (14) more explicitly, six agencies reported that they decided not to obtain outside input at least eight times during FY 1995 through FY 1997 because of the possibility of future litigation over compliance with FACA.
Our nation’s border security process for controlling the entry and visits of foreign visitors consists of three primary functions: (1) issuing visas; (2) controlling entries through inspection of passports, visas, and other travel documents as well as controlling exits; and (3) managing stays of foreign visitors—that is, monitoring these individuals while they are in the country. As shown in figure 1, the Departments of State, Homeland Security, and Justice play key roles in this process. The border security process begins at the State Department’s overseas consular posts, where consular officers adjudicate visa applications for foreign nationals who wish to temporarily enter the United States for visits related to business, tourism, or other reasons. At the port of entry, an INS inspector determines whether the visa holder is admitted to the United States and, if so, how long he or she may remain in the country. Until recently, after INS successfully screened and admitted foreign visitors, these individuals were generally not monitored unless they came under the scrutiny of INS or a law enforcement agency, such as the FBI, for suspected immigration violations or other illegal activity. On March 1, 2003, the Department of Homeland Security assumed responsibility for many elements of the border security process. For example, the new department incorporated the INS Inspections Unit into its Bureau of Customs and Border Protection, which will focus its operations on the movement of goods and people across U.S. borders. It also folded the INS National Security Unit into its Bureau of Immigration and Customs Enforcement, which is designed to enforce the full range of immigration and customs laws within the United States. 
According to Department of Homeland Security officials, the new department also gained broad authority over the visa process under section 428 of the Homeland Security Act, covering the development of policies, regulations, procedures, and any other guidance that may affect visa issuance or revocation. The State Department remains responsible for managing the consular corps and the function of issuing visas. The FBI’s Counterterrorism Division, within the Justice Department, plays a key role in the border security process. The division includes the Foreign Terrorist Tracking Task Force, which is now part of the FBI’s Office of Intelligence. The mission of the task force, an interagency group, is to (1) deny entry into the United States of aliens associated with, suspected of being engaged in, or supporting terrorist activity and (2) aid in supplying information to locate, detain, prosecute, or deport any such aliens already present in the United States. The National Joint Terrorism Task Force comprises 36 federal agencies co-located in the Strategic Information and Operations Center at FBI headquarters. This task force provides a central fusion point for terrorism information and intelligence to the 66 Joint Terrorism Task Forces, which include state and local law enforcement officers, federal agents, and other federal personnel who work in the field to prevent and investigate acts of terrorism. At each stage of the process, the responsible departments and agencies rely on terrorist or criminal watch list systems—sometimes referred to as tip-off or lookout systems—in fulfilling their respective border security missions. For example, State relies on its Consular Lookout and Support System (CLASS) as the primary basis for identifying potential terrorists among visa applicants. CLASS incorporates information on suspected terrorists from State’s interagency terrorist watch list, known as TIPOFF, as well as from the FBI, INS, and many other agencies.
Further, INS inspectors at ports of entry use the Interagency Border Inspection System (IBIS) to check whether foreign nationals are inadmissible and should be denied entry into the United States. When a person enters the United States by air or by sea, INS inspectors are required to check that person against watch lists before the person is allowed to enter the country. INS inspectors may check persons arriving at land borders against the watch lists, but they are not required to do so. The exception is for males aged 16 or over from certain countries, who are required to be checked. Our analysis indicates that the U.S. government has no specific written policy on the use of visa revocations as an antiterrorism tool and no written procedures to guide State in notifying the relevant agencies of visa revocations on terrorism grounds. State and INS have written procedures that guide some types of visa revocations; however, neither they nor the FBI has written internal procedures for notifying their appropriate personnel to take specific actions on visas revoked by the State Department. State and INS officials could articulate their informal policies and procedures for how, and for what purpose, their agencies have used the process as an antiterrorism tool to keep terrorists out of the United States, but neither they nor FBI officials had policies or procedures that covered investigating, locating, and taking appropriate action in cases where the visa holder had already entered the country. We summarized how information on visa revocations would ideally flow among and within these three agencies on the basis of our interviews with officials from State, Homeland Security, and the FBI and on our analysis of the current visa revocation process. According to State Department officials, the U.S.
government has no specific written policy on how agencies should use visa revocations as an antiterrorism tool and no written procedures to guide the interagency process for revoking visas on terrorism or other grounds. These officials explained that prior to September 11, 2001, State revoked only a small number of visas for terrorism-related reasons. This relatively small number resulted in State and INS operating in an informal manner when cooperating on denying admission to revoked visa holders at ports of entry. State officials said that State and Justice had agreed to informal notification procedures between the two agencies and had crafted language for the visa revocation certificates several years ago; however, the two agencies did not develop formal written procedures. These officials said that State did not coordinate its visa revocations with the FBI. In commenting on a draft of this report, State said that the Visa Office generally worked under the impression that, under long-standing practice, INS was passing relevant information on to the FBI as appropriate. State and INS officials articulated their agencies’ policies on how revocations help them prevent suspected terrorists from entering the United States. State officials told us that they envision the revocation process as taking place before the visa holder enters the country. This would allow State and other agencies more time to investigate and determine whether a suspected terrorist is in fact ineligible for a visa on terrorism grounds before allowing the visa holder to enter the country. As these officials explained, since the September 11 attacks, State’s Bureau of Consular Affairs has been receiving a large volume of information on suspected terrorists from the intelligence community, law enforcement agencies, overseas posts, and other units within State. The department reviews this information to determine if a suspected terrorist has a U.S. visa.
If the identifying information is incomplete, as is often the case, State may have difficulty in determining whether a visa holder with the same or a similar name as a suspected terrorist is in fact the suspected terrorist. The department may also lack sufficient proof of a specific act that would render the suspected terrorist ineligible for a visa, as required by the INA. In these cases, State would revoke the person’s visa under the Secretary of State’s discretionary authority, requiring the person to reapply for a visa if he or she still intended to visit the United States. State would then use the visa issuance process to obtain additional biographic and other data on the visa applicant and make a determination on the person’s eligibility. INS officials viewed the process as a means of notifying INS inspectors to deny suspected terrorists entry into the United States. These officials did not view a visa revocation, even if based on terrorism concerns, as a reason for investigating someone who had already entered the United States. They said the INA does not specify visa revocation as a reason for removing a person from the country. (App. II provides more information on legal issues associated with visa revocations.) According to Justice and FBI officials, the FBI does not yet have a policy on how to use the visa revocation process in its counterterrorism efforts. The FBI has not developed such a policy because the visa revocation information State sends to the bureau does not indicate that the FBI may want to take follow-up action in these cases. For instance, the notice of visa revocation does not explicitly state that the reason for revocation is terrorism-related. State and INS had written policies that covered some aspects of visa revocations. State’s policies and procedures, contained in the Foreign Affairs Manual, specify when and for what reason a consular officer may or may not revoke a visa, including for terrorism-related reasons. 
The manual instructs consular officers to obtain a security advisory opinion from the department before determining that a visa holder is ineligible for a visa on terrorism grounds. In practice, according to State officials, this means that department officials at headquarters acting under the authority of the Secretary of State, not the consular officers at overseas posts, revoke visas on the basis of terrorism concerns. State Department officials told us that they follow specific, but unwritten, operating procedures when the department revokes visas, as described in more detail later in this report. INS has some general policies related to the posting of lookouts for inadmissible aliens and for the revocation of visas by immigration officers at ports of entry. However, these policies do not call for specific actions by appropriate INS personnel with regard to visas revoked by the State Department. Since the September 11, 2001, terrorist attacks, State has constantly received new information on suspected terrorists from the intelligence community, law enforcement agencies, and overseas posts. In some cases, State received this information after it had already issued visas to the individuals in question; the department would then revoke these visas. Under the INA, the Secretary of State has discretionary authority to revoke any visa that a consular officer has issued, including cases in which the Secretary believes that the visa holder may be ineligible for a visa under the INA’s terrorism provision. According to State Department officials and documents, State revoked visas held by 240 individuals from September 11, 2001, through December 31, 2002, on terrorism grounds. All of these visas were revoked as a prudent measure under the Secretary of State’s discretionary authority because, as discussed earlier, State believed more research on the individuals was necessary before they should be allowed to enter the United States.
Appendix III provides more information on these visas and the persons who held them. Figure 2 shows how information should flow if State were to notify the appropriate homeland security agencies, that is, those agencies charged with controlling entry into the United States and investigating potentially dangerous terrorists, that the individual with the revoked visa may attempt to enter, or may have already entered, the United States. The diagram is based on what officials from State, Homeland Security, and the FBI described as the way the process should work, if all of the agencies involved were fulfilling their roles. As the diagram in figure 2 illustrates, State should notify its consular officers at overseas posts, the Department of Homeland Security, and the FBI at the time of visa revocation. State should notify its consular officers so that they would ask for a security advisory opinion before issuing a new visa to the person whose visa had been revoked. In addition, State would have to provide notice of the revocation, along with supporting evidence, to Homeland Security and the FBI. This would allow Homeland Security to notify its inspectors at ports of entry so that they could prevent the individuals from entering the United States. It also would allow Homeland Security and the FBI to determine whether the person had already entered the country and, if so, to investigate, locate, and take appropriate action in each case. Depending on the results of the investigations, appropriate actions could include clearing persons who were wrongly suspected of terrorism, removing suspected terrorists from the country, or prosecuting suspected terrorists on criminal charges. We identified systemic weaknesses in the visa revocation process, many of which resulted from the informal policies and procedures governing actions that State, INS, and the FBI take during the process. 
In our review of the 240 visa revocations, we found that (a) notification of revocations did not always reach the appropriate unit within INS and the FBI; (b) State did not consistently post lookouts on the individuals; (c) 30 individuals whose visas were revoked on terrorism grounds entered the United States either before or after the revocation and may still remain in the country; and (d) INS and the FBI were not consistently taking action to investigate; locate; or, where appropriate, clear, prosecute, or remove any of the people who had entered the country before or after their visas were revoked. There were weaknesses at several junctures of the notification process that caused information on many visa revocations not to be shared among the units that needed the information at State, INS, and the FBI. Some of these weaknesses were due to a breakdown in the notification process from State to INS and the FBI, and some were due to problems in the distribution of notifications within these agencies to the appropriate unit. For 43 of the 240 revocations we reviewed, INS Lookout Unit officials said that they did not receive any notification. Of the notifications that were received, some did not reach the Lookout Unit in a timely manner because of slow intraagency distribution. FBI officials said that the agency’s main communications center received the notifications, but the officials could not confirm whether the notifications were then distributed internally to the appropriate investigative units at the FBI (see fig. 3). State Department officials from the Visa Office described the procedures they use to notify INS, the FBI, and State’s overseas posts of visas that are revoked by the department in Washington.
According to State officials, once the Deputy Assistant Secretary signs a revocation certificate, the department is supposed to take the following actions, as soon as possible after the visa is revoked: (1) notify the INS Lookout Unit via a faxed copy of the revocation certificate so that the unit can enter the individual into the National Automated Immigration Lookout System, which is uploaded into IBIS; (2) notify consular officers at all overseas posts that the individual may be a suspected terrorist by entering a lookout on the person into State’s watch list, CLASS; and (3) notify the issuing post via cable so that the post can attempt to contact the individual to physically cancel his visa. Information-only copies of these cables, which do not explicitly state that the reason for the revocation is terrorism-related, are also sent to INS’s and the FBI’s main communications centers. State officials told us they rely on INS and FBI internal distribution mechanisms to ensure that these cables are routed to the appropriate units within the agencies. According to these officials, they considered faxing the revocation certificate to be the primary notification method for the INS Lookout Unit, with the cable serving as an additional backup method. The cables were the only notification method used to inform the FBI of the revocation. The State Visa Office did not keep a central log of visas it revoked on the basis of terrorism concerns, nor did it monitor whether notifications were sent to other agencies. When we asked for a list of all visas revoked between September 11, 2001, and December 31, 2002, Visa Office officials had to search through the office’s cable database to create such a list. State Department officials said they did not have fax transmission receipts to confirm that they sent revocation certificates for each of the 240 cases we reviewed. They were able to provide us with 238 revocation cables, almost all of which addressed informational copies to INS and the FBI.
In commenting on a draft of our report, State said that the Visa Office now keeps a log of revocation cases and maintains all signed certificates in a central file. Officials from the INS Lookout Unit provided us with documentation indicating that they received notification from the State Department in 197 of the 240 cases but did not receive notification in the other 43 cases (see fig. 4). Lookout Unit officials had documentation to show that 150 faxed revocation certificates were received in the unit. These faxed certificates reached the unit, on average, within 1 to 2 days of State enacting the revocation. For 90 cases, however, the documentation provided to us did not indicate that the Lookout Unit had received a fax. This was mitigated in 47 of these cases by the receipt of a revocation cable, although this backup method of notification was less timely than the fax. In cases where the cable was the only notification received at the Lookout Unit, it took, on average, 12 days for the Lookout Unit to receive the cable, although in 1 case it took 29 days. According to an official from the INS communications center, because the cables were marked “information only,” they were routed through the Inspections Division first, which then was supposed to forward them to the Lookout Unit. He told us that if the cables had been marked as “action” or “urgent,” they would have been sent immediately to the Lookout Unit. See appendix IV for an example of a revocation cable. The Assistant Chief Inspector at the Lookout Unit stressed the importance of timeliness in receiving notification, noting that delays of even a few days could increase the possibility that an individual with a revoked visa would travel to the United States before INS inspectors were aware of the revocation.
The State Department generally included the FBI as an addressee on the visa revocation cables. FBI officials with whom we spoke were able to verify that State’s revocation cables were received electronically in the FBI communications center, but they were not able to tell us whether this information was distributed to appropriate coordinating and investigative units. An FBI official said that after the cables arrived in the communications center, they became part of the FBI’s Automated Case Support database and a hard copy of the cable was sent to analysts in relevant country desk units. The Assistant Director for the Office of Intelligence told us that for the FBI to take action on the cables, they would have to be directed to the bureau’s Counterterrorism Division. FBI officials could not provide evidence that the revocation information reached the Counterterrorism Division. Again, the cables did not specify that the reason for the revocation was related to terrorism. The cables were described by State as information only and did not request or specify any action from the FBI. In our review of 240 revocations, we identified weaknesses in the steps that State, INS, and the FBI took to place these individuals on watch lists as a result of the revocation. The State Department did not consistently post lookouts on individuals in CLASS after revoking their visas. Moreover, State had not started to use a new revocation code created in August 2002 that was designed to allow revocation lookouts to be shared between State’s and INS’s watch lists. The INS Lookout Unit consistently posted lookouts on its watch list but was only able to do so in cases where it received notification of the revocation. Some of the lookouts posted by the Lookout Unit did not contain accurate information due to misinterpretation of State’s revocation certificates. 
As of mid-May 2003, FBI officials could not determine which FBI unit, if any, added lookouts to their watch lists on individuals with revoked visas as a result of receiving the revocation notification from State. We reviewed CLASS records on all 240 individuals whose visas were revoked and found that the State Department did not post lookouts within a 2-week period of the revocation on 64 of these individuals. Many of the 64 individuals had other lookouts posted on them on earlier or later dates, but the department had not followed its informal policy of entering a lookout at the time of the revocation. State officials said that they post lookouts on individuals with revoked visas in CLASS so that, if the individual attempts to get a new visa, consular officers at overseas posts will know that they must request a security advisory opinion on the individual before issuing a visa. Without a lookout, it is possible that a new visa could be issued without additional security screening. According to State Department officials, State and INS agreed to create a specific code for visa revocation lookouts, the VRVK code, which would be picked up automatically by INS’s system, IBIS, in its real-time interface with CLASS. This new code would allow INS inspectors at ports of entry to see revocation lookouts that State had posted. According to Department of Homeland Security officials, this code should be State’s primary method of notifying immigration inspectors at ports of entry that an individual’s visa had been revoked, rather than the faxed revocation certificate. State said that this code was required for all revocation lookouts as of August 15, 2002, yet in our review of CLASS records for the 240 visa revocations, we saw no evidence that the department was using the VRVK code. The department did not enter a lookout using the VRVK code for any of the 27 visas it revoked between August 15, 2002, and December 31, 2002. 
When the INS Lookout Unit received notification from State, it consistently posted lookouts in IBIS to indicate that State had revoked the visa. The Lookout Unit had a policy to post lookouts in IBIS the same day that it received the notification. In the 43 cases for which Lookout Unit officials said they did not receive notification, they did not post a revocation lookout in IBIS because the Lookout Unit did not have an independent basis for posting a revocation absent a notification from State. In 21 of the 240 cases, Lookout Unit officials misread information on State’s revocation certificate and, as a result, entered incorrect information in IBIS on individuals who were born in one country but hold citizenship in another. In 16 of these cases, the revocation certificates clearly listed the individual’s date and place of birth or nationality, but the Lookout Unit entered place of birth or other erroneous information into IBIS’s nationality field. In the remaining 5 cases where the individuals’ place of birth data were entered into the nationality field, the revocation certificate did not clearly state that the country listed was the individuals’ place of birth. A Lookout Unit official confirmed that this error in the lookout could hinder an inspector at the port of entry from detecting the person, since the individual’s passport would indicate a nationality different from his place of birth. Lookout Unit officials said it would be helpful if the State Department included more information on the revocation certificates, including country of citizenship, passport numbers, visa foil numbers, and intended itineraries and addresses in the United States if they were listed in the visa application. See appendix V for a sample revocation certificate. In commenting on a draft of this report, State said that additional information is available to Homeland Security officers at ports of entry through State’s shared Consular Consolidated Database.
FBI officials could not determine which unit, if any, received the revocation cables or whether any unit posted lookouts on these individuals as a result of receiving notification of the revocation from State. In technical comments on a draft of this report, the Department of Justice said that the FBI maintains only one watch list, the Violent Gang and Terrorist Organization File (VGTOF), which is accessed by local and state law enforcement officials via the National Crime Information Center. To add a person to that list, according to the comments, the following information must be provided to the FBI: the person’s full name, complete date of birth, physical descriptors, at least one numeric identifier, a contact person with a telephone number, and VGTOF-specific classification information. In our review of the 240 visa revocations, we found that 30 individuals whose visas were revoked on terrorism grounds entered the United States either before or after the revocation and may still remain in the country. Our analysis of INS arrival and departure information shows that many individuals had traveled to the United States before their visas were revoked and had remained after the revocation. Several have subsequently departed the country, but we determined that 29 of the individuals who entered before the revocation may still remain in the country. INS data also show that INS inspectors admitted at least 4 people after their visas were revoked; 3 of these individuals have since departed, but 1 may still remain in the country. In 1 of these 4 cases, the INS Lookout Unit did not receive any revocation notice from State; thus, it did not post a lookout in IBIS that could have alerted an inspector at a port of entry to deny admission to the individual. In another case, the unit received a notification cable 4 days after State had signed the revocation certificate, but the individual had already entered the country 2 days earlier.
In the third case, the unit had posted a lookout the day after the revocation but had incorrectly entered the individual’s place of birth, which differed from his nationality, in the nationality field. In the last case, INS had received a notification from State and had posted lookouts on the INS watch list right after the revocation, but an INS inspector allowed the individual to enter the United States 1 month later. INS officials could not explain how an inspector could miss the lookout and allow this person into the country. Despite these problems, we noted cases where the visa revocation process prevented possible terrorists from entering the country or cleared individuals whose visas had been revoked. For example, INS inspectors successfully prevented at least 14 of the 240 individuals from entering the country because the INS watch list included information on the revocation action or had other lookouts on them. In addition, State records showed that a small number of people reapplied for a new visa after the revocation. State used the visa issuance process to fully screen these individuals and determined that they did not pose a security threat. In one case, for example, the post took a set of fingerprints from an individual whose name matched a record in an FBI database. The individual’s fingerprints did not match those of the individual in the database, so he was cleared and issued a new visa. The appropriate units in INS and the FBI did not routinely investigate, locate, or take any action on individuals who might have remained in the United States after their visas were revoked. INS and FBI officials cited a variety of legal and procedural challenges to their taking action in these cases. 
In cases where they received the revocation notification from State, INS Lookout Unit officials said that they did not routinely check to see whether these individuals had already entered the United States, nor did they pass information on visa revocations to investigators in the National Security Unit. The National Security Unit, unlike the Lookout Unit, did not receive copies of the faxed revocation certificates or cables from the State Department. Investigators in this unit said that the Lookout Unit occasionally notified them about a revocation for an individual with a hit in TIPOFF, State’s interagency terrorist watch list, but that they were not typically notified of other visa revocations. National Security Unit investigators said that they generally did not investigate or locate individuals whose visas were revoked for terrorism concerns but who may still be in the United States. These investigators said that even if they were to receive a revocation notice, the revocation itself does not make it illegal for individuals with revoked visas to remain in the United States. They said they could investigate the individuals to determine if they were violating the terms of their admission, for example, by overstaying the amount of time they were granted to remain in the United States, but the investigators believed that under the INA, the visa revocation itself does not affect the alien’s legal status in the United States. This issue of whether a visa revocation, after an alien is admitted on that visa, has the effect of rendering the individual out-of-status is unresolved legally, according to officials in the Department of Homeland Security’s Office of the Principal Legal Advisor to the Bureau of Immigration and Customs Enforcement and Bureau of Citizenship and Immigration Services. These officials said that the language that the State Department has been using on visa revocation certificates effectively forecloses the U.S. government from litigating the issue. 
The revocation certificates state that the revocation shall become effective immediately on the date the certificate is signed unless the alien is present in the United States at that time, in which case it will become effective immediately upon the alien’s departure from the United States (see app. V). Homeland Security officials said that if State were to cease using the current language on the revocation certificates, the government would no longer be effectively barred from litigating the issue and, if a policy decision were made to pursue an aggressive litigation strategy, could seek to remove aliens who have been admitted but have subsequently had their visas revoked. Attempting to remove these aliens on the basis of the underlying reason for the revocation may not be possible for various reasons, according to INS officials. First, INS officials stated that the State Department provides very little information or evidence relating to the terrorist activities when it sends the revocation notice to INS. Without sufficient evidence linking the alien to any terrorist-related activities, INS cannot institute removal proceedings on the basis of that charge. Second, even if there is evidence, INS officials said, sometimes the agency that is the source of the information will not authorize the release of that information because it could jeopardize ongoing investigations or reveal sources and methods. Third, INS officials stated that sometimes the evidence that is used to support a discretionary revocation from the Secretary of State is not sufficient to support a charge of removing an alien in immigration proceedings before an immigration judge. (See app. II.) In commenting on a draft of our report, State said that most of the time, the information on which these revocations are based is classified. If an interested agency seeks to review the information for immigration purposes, it is available from State’s Bureau of Intelligence and Research or the source agency. 
National Security Unit investigators told us that, because of congressional interest, they had investigated and attempted to locate 7 individuals whose visas were revoked as a result of delayed security checks and who had entered the country. They found that 4 of the 7 individuals were in the United States and in compliance with the terms of their admission. One individual had departed to Canada; the remaining 2 individuals were not located. Although the FBI’s Foreign Terrorist Tracking Task Force followed up on many cases in response to congressional interest, FBI officials told us that the bureau was not routinely opening investigations as the result of visa revocations on terrorism grounds. They said that State’s method of notifying the FBI did not clearly indicate that visas had been revoked because the visa holder was a possible terrorist. Further, the cables were sent as “information only” and did not request specific follow-up action from the FBI. State did not attempt to make other contact with the FBI that would indicate any urgency in the matter. Moreover, the Department of Homeland Security has not yet requested that the FBI take any action with regard to visa revocations on terrorism grounds. In response to congressional interest, the Foreign Terrorist Tracking Task Force in late 2002 and early 2003 followed up on the 105 cases of visas that were revoked as a result of the Visas Condor name check procedures. In February 2003, we asked the task force for information on these 105 cases. The task force provided us with some information in a written response on May 21, 2003. We did not have time to fully evaluate the response before publication of this report because of the nature and volume of additional information needed to do so. The visa process can be an important tool to keep potential terrorists from entering the United States. Ideally, information on suspected terrorists would reach the State Department before it decides to issue a visa. 
However, there will always be some cases when the information arrives too late and State has already issued a visa. Revoking a visa can mitigate this problem, but only if State promptly notifies the appropriate border control and law enforcement agencies and if these agencies act quickly to (1) notify border patrol agents and immigration inspectors to deny entry to persons with a revoked visa and (2) investigate persons with revoked visas who have entered the country. Currently there are major gaps in the notification and investigation processes. One reason for this is that there are no comprehensive written policies and procedures on how notification of a visa revocation should take place and what agencies should do when they are notified. As a result, there is heightened risk that suspected terrorists could enter the country with revoked visas or be allowed to remain after their visas are revoked without undergoing investigation or monitoring. To strengthen the visa revocation process as an antiterrorism tool, we recommend that the Secretary of Homeland Security, in conjunction with the Secretary of State and the Attorney General:

- develop specific policies and procedures for the interagency visa revocation process to ensure that notification of visa revocations for suspected terrorists and relevant supporting information is transmitted from State to immigration and law enforcement agencies, and their respective inspection and investigation units, in a timely manner;

- develop a specific policy on actions that immigration and law enforcement agencies should take to investigate and locate individuals whose visas have been revoked for terrorism concerns and who remain in the United States after revocation; and

- determine if persons with visas revoked on terrorism grounds are in the United States and, if so, whether they pose a security threat.

We provided a draft of this report to the Departments of Homeland Security, State, and Justice for their comment. 
The Department of Homeland Security agreed that the visa revocation process should be strengthened as an antiterrorism tool. It indicated that it looked forward to working with State and Justice to develop and revise current policies and procedures that affect the interagency visa revocation process. Its written comments are in appendix VI. In addition, Homeland Security provided technical comments, which we have incorporated in the report where appropriate. The Department of State did not comment on our recommendations. Instead, State said that the persons who hold visas that the department revoked on terrorism grounds were not necessarily terrorists or suspected terrorists. State noted that it had revoked the visas because some information had surfaced that may disqualify the individual from a visa or from admission to the United States, or that in any event warrants reconsideration of the individual’s visa status. State cited the uncertain nature of the information it receives from the intelligence and law enforcement communities on which it must base its decision to revoke an individual’s visa. State said that it revoked these visas as a precautionary measure to preclude a person from gaining admission to this country until his or her entitlement to a visa can be reestablished. Our report recognizes that the visas were revoked as a precautionary measure and that the persons whose visas were revoked may not be terrorists. Although we have not reviewed the intelligence or law enforcement data provided to State or reviewed by various agencies as part of the security check process, there was enough concern that these 240 persons could pose a terrorism threat to cause State to revoke their visas. Our recommendations are designed to ensure that persons whose visas have been revoked because of potential terrorism concerns be denied entry to the United States and those who may already be in the United States be investigated to determine if they pose a security threat. 
State’s comments are reprinted in appendix VII. The State Department also provided technical comments that we have incorporated in the report where appropriate. The Department of Justice did not provide official comments on the report. However, it did make technical comments that we incorporated in the report where appropriate. We are sending copies of this report to other interested Members of Congress. We are also sending copies to the Secretary of Homeland Security, the Secretary of State, and the Attorney General. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Key contributors to this report were John Brummet, Judy McCloskey, Kate Brentzel, Mary Moutsos, and Janey Cohen. The scope of our work covered the interagency process in place for visas revoked by the Department of State headquarters and overseas consular officers on the basis of terrorism concerns between September 11, 2001, and December 31, 2002. To assess the policies and procedures governing the visa revocation process, we interviewed officials from State, the Immigration and Naturalization Service (INS), and the Federal Bureau of Investigation (FBI) and reviewed relevant documents. To evaluate the effectiveness of the actual visa revocation process, we relied on data provided by State’s Visa Office to determine the total number of visa revocations from September 11, 2001, through December 31, 2002. Visa Office officials provided us with the names of 240 individuals whose visas were revoked during that time. These officials were able to provide documentation on the revocation for 238 of the 240 individuals. 
They gave us database sheets from the Consular Consolidated Database, which provided us with the individuals’ names, biographic data such as dates and places of birth, passport numbers, and visa information such as issuing posts and types of visa. In 5 cases, the database sheets did not indicate that the person held a valid visa at the time of revocation. We kept these cases in our scope because State provided us with revocation cables for these individuals, indicating that it had revoked at least one visa for them. State’s Visa Office also provided us with 238 revocation cables. We also compared information in the revocation cable with information contained in revocation certificates. To determine if, and when, State notified INS of the revocations, we asked the Visa Office to provide us with documentation to show that either the visa revocation was faxed to the INS Lookout Unit or that the revocation cables were sent to INS. State did not have documentation that it had faxed any of the certificates. Through examining the cables, we determined which ones were addressed to INS and when they were sent. To determine if, and when, INS received these notifications, we asked the INS Lookout Unit for copies of the revocation certificates and cables it received for each of the 240 cases. In cases where the Lookout Unit had received a faxed copy of the revocation certificate, we collected copies of the certificates and examined the time/date stamp on these documents to determine when State faxed them to INS. In cases where the Lookout Unit had received a copy of the revocation cable, we collected copies of these cables and examined handwritten notations on the cables that reflected when they were received at the unit. To determine if, and when, State notified the FBI of the revocations, we examined copies of the revocation cables we received from State to determine (1) if the FBI was included as an addressee on the cable and (2) the date that the cable was sent. 
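The date comparisons underlying this methodology (when a revocation was signed, when notification reached INS, and when the individual entered or departed) reduce to a simple per-case classification. The sketch below is illustrative only, not GAO's actual analysis; the function name, parameters, and category labels are invented for this example.

```python
from datetime import date
from typing import Optional

def classify_case(revoked: date, ins_notified: Optional[date],
                  entered: Optional[date], departed: Optional[date]) -> str:
    """Classify one revocation case using the date comparisons described above.

    All names and labels here are hypothetical; real NIIS/IBIS records carry
    far more detail than this sketch assumes.
    """
    if entered is None:
        return "never entered"
    if departed is not None:
        return "entered but has departed"
    if entered < revoked:
        return "entered before revocation; may still remain"
    if ins_notified is None or entered < ins_notified:
        # No lookout could have been posted by the time of entry.
        return "admitted after revocation before any lookout; may still remain"
    return "admitted after revocation despite notification; may still remain"
```

For example, a case in which State's cable reached INS 4 days after the certificate was signed, while the individual entered 2 days after signing, falls into the "admitted after revocation before any lookout" category.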
To determine whether the FBI had received these cables, we interviewed FBI officials from the Office of Intelligence, the National Namecheck Program, and the Counterterrorism Division. We obtained information from State, INS, and the FBI to determine if, and when, they posted lookouts on the individuals with revoked visas on their agencies’ terrorist watch lists. We asked State to provide us with the lookouts they posted for each individual in the Consular Lookout and Support System (CLASS). A CLASS operator entered the individual’s name, date and place of birth, and nationality in the same way that these data were listed on the revocation cable or certificate and gave us the printouts reflecting all of the CLASS records for that entry. We examined the records to ascertain whether, and when, the department entered the individual into CLASS and what refusal code was used. To determine what steps INS took to post lookouts on the individuals with revocations, we provided the Lookout Unit with the list of 240 individuals and requested copies of the revocation lookouts from the Interagency Border Inspection System (IBIS). We examined these records to assess whether, and when, the INS Lookout Unit posted a lookout on the individuals. To assess the FBI’s action to post lookouts on these individuals, we interviewed officials from the Office of Intelligence to determine whether any units posted lookouts as a result of receiving notification of the revocations. To assess INS’s and the FBI’s actions to investigate; locate; and, where appropriate, clear, remove, or prosecute the individuals who may have entered the United States, we first reviewed INS entry/exit data to determine how many individuals entered the country, either before or after revocation, and how many may still remain in the country. The INS Lookout Unit provided us with all records available from the Nonimmigrant Information System (NIIS) on each of the 240 individuals. 
This system records arrivals of foreign citizens through the collection of an I-94 form. Some aliens are required to fill out and turn in these forms to inspectors at air and sea ports of entry, as well as at land borders. Canadians and U.S. permanent residents are not required to fill out I-94 forms when they enter the United States. Aliens keep one section of the I-94 with them during their stay in the United States and are required to turn this in when they depart the country. If aliens fail to turn in the bottom portion of their I-94s when they depart, NIIS will not have departure information for them. Where available, we supplemented NIIS data with information regarding certain cases from INS’s National Security Unit and from the State Department’s CLASS records. We received additional arrival data on the individuals in late May 2003 but have not been able to fully evaluate them for this report. We also interviewed INS and FBI officials to discuss what actions they had taken to investigate; locate; and, where appropriate, clear, remove, or prosecute those individuals who may remain in the United States. We attempted to review the evidence on which State based the revocations for a subset of the 240 visa revocations. We could not do so, however, because the sources of the information—the Central Intelligence Agency and the FBI—did not grant us access to this information. We conducted our work from December 2002 through May 2003, in accordance with generally accepted government auditing standards. The legal process for revocations can begin with the Secretary of State, a consular officer, or an immigration officer. Under the Immigration and Nationality Act (INA), the Secretary of State has the discretionary authority to revoke a visa previously issued to an alien. The Secretary of State has delegated this discretionary authority to the Deputy Assistant Secretary for Visa Services. 
According to State officials, the department’s discretionary revocation authority is an important and useful tool for State to use to send questionable aliens back to the consulates to undergo more scrutiny as they reapply for new visas. Consular officers may revoke a visa in instances prescribed by regulation (22 CFR § 41.122). Such instances include the following: (1) the consular officer finds that the alien is no longer entitled to nonimmigrant status specified in the visa; (2) the alien has, since the time that the visa was issued, become ineligible to receive a visa under the INA; or (3) the visa has been physically removed from the passport in which it was issued. Moreover, regulations also allow immigration officers to revoke visas under certain circumstances (22 CFR § 41.122). For example, an immigration officer at a port of entry may revoke a visa if the officer notifies the alien that he or she appears to be inadmissible to the United States and the alien requests and is granted permission to withdraw the application for admission. If an alien arrives at a port of entry in the United States and learns that his visa has already been revoked, as was the case with some of the revocations that we reviewed, then the alien is deemed inadmissible and the INS agent can deny the alien admission into the United States. Refusal of admission to such aliens is carried out under the expedited removal process allowed under section 235 of the INA. Under section 212(a)(7)(B) of the INA, an alien is inadmissible if he does not have a valid passport, nonimmigrant visa, or border crossing identification card at the time of application for admission. Under the INA’s expedited removal process, if an alien is inadmissible under section 212(a)(7), the inspection officer may order the alien removed from the United States, without further hearing or review, unless the alien can demonstrate a credible fear of returning to his home country. 
If, however, the alien is already in the country when his visa is revoked, then INS is not authorized to simply send the alien home, as it could have done had the alien arrived at the port of entry with the revoked visa. Rather, if INS determines that the alien falls within the class of aliens who are removable on the grounds specified in the INA, INS may institute removal proceedings against the alien. Such proceedings could be based either on an immigration violation after admission or on the evidence relating to the reason for the visa revocation, such as terrorist-related activities. However, INS officials said that in many of these cases, INS does not receive much evidence in support of the terrorist charge when it receives a revocation from State. Without sufficient evidence, INS cannot institute removal proceedings against these aliens. Revocation of a visa is not a stated ground for removal under the INA. However, the issue of whether a visa revocation, after an alien is admitted on that visa, has the effect of rendering the alien out-of-status is unresolved legally, according to officials in the Department of Homeland Security’s Office of the Principal Legal Advisor to the Bureau of Immigration and Customs Enforcement and the Bureau of Citizenship and Immigration Services. These officials said that the language that the State Department has been using on visa revocation certificates effectively forecloses the U.S. government from litigating the issue. The revocation certificates state that the revocation shall become effective immediately on the date the certificate is signed. However, if the alien is present in the United States at that time, it will become effective immediately upon the alien’s departure from the United States. 
Homeland Security officials said that if State were to cease using this language on the revocation certificates, the government would no longer be effectively barred from litigating the issue, and, if a policy decision were made to pursue an aggressive litigation strategy, the government could seek to remove aliens who have been admitted but have subsequently had their visas revoked. If INS does receive sufficient evidence to support a removal charge against an alien and chooses to initiate removal proceedings, then the alien is afforded certain due process rights under the INA. For example, section 240 of the INA states that an immigration judge shall conduct proceedings to determine if an alien is removable. During such proceedings, the alien is afforded rights that include being apprised of the charges against him and the basis for them, having a reasonable opportunity to examine the evidence against him, presenting evidence on his behalf, having the opportunity to cross-examine witnesses presented by the government, and filing administrative and judicial appeals. Moreover, during such removal proceedings, once an alien establishes that he was admitted to the United States as a nonimmigrant, the government has the burden of proof to establish by clear and convincing evidence that the alien is removable. Initiating such proceedings against an alien whose visa has been revoked on the basis of terrorist-related activities can be challenging, according to INS attorneys. At some point in the proceedings, either in establishing that the alien is removable or at the time the alien requests to be released on bond, the government could be called on to disclose any classified or law enforcement sensitive information that serves as the basis of the charges against the alien. 
According to INS attorneys, this can be challenging because the law enforcement or intelligence agencies that are the source of the information often will not authorize its release, since disclosure could jeopardize ongoing investigations or reveal sources and methods. In addition to the general removal proceedings, the INA also contains special removal proceedings for alien terrorists. These proceedings are reserved for alien terrorists as described in section 237(a)(4)(B) of the INA and take place before a special removal court composed of federal court judges. Such proceedings are triggered when the Attorney General certifies to the removal court that the alien is a terrorist, that he is physically present in the United States, and that using the normal removal procedures of the INA would pose a risk to the national security of the United States. If the court agrees to invoke the special removal procedures, then a hearing is held before the removal court. Special provisions are made for the use of classified information in such proceedings to minimize the risk of its disclosure. However, similar to the removal proceedings under section 240, the alien has the right to appeal a decision by the removal court. According to INS officials, this court has never been used since its inception in 1996. This appendix provides information on nonimmigrant visas that the State Department revoked on terrorism grounds from September 11, 2001, through December 31, 2002—specifically, the nationality of the individuals whose visas were revoked and the types of visas that were revoked. As shown in table 1, the individuals holding visas that the State Department revoked on terrorism grounds came from at least 39 countries. Five countries—Saudi Arabia, Iran, Egypt, Pakistan, and Lebanon—accounted for 53 percent of these individuals. Overall, most of the 240 people were citizens of countries in the Near East and North Africa region. 
Table 2 provides information on the types of visas that the State Department revoked on terrorism grounds. About 70 percent of the visas were for temporary visits for business, pleasure, or both. Seven of these visas were in the form of border crossing cards for Canada and Mexico. The following are GAO's comments on the Department of State’s letter dated June 10, 2003.
1. The scope of our review covered all visas revoked on terrorism concerns by the State Department, including headquarters officials and State’s overseas consular officers, from September 11, 2001, through December 31, 2002. State Department officials determined that the total universe of such revocations consisted of 240 cases during that period and provided documentation for almost all of them. Headquarters officials, acting under the authority of the Secretary of State, revoked the visas in all of the cases. As noted in State’s comments, in none of the cases did State believe that it had sufficient evidence to support a formal finding of inadmissibility; thus, all of the revocations were done as a precautionary measure.
2. Pages 10 and 11 of our report include information on this matter.
3. We agree that these individuals may not be terrorists. However, the State Department has revoked their visas because of terrorism concerns. Our recommendations are designed to ensure that persons whose visas have been revoked because of potential terrorism concerns be denied entry to the United States and those who may already be in the United States be investigated to determine if they pose a security threat.
4. The Departments of State and Homeland Security have different views on this issue. Homeland Security believes that the language that the State Department has been using on visa revocation certificates effectively forecloses the U.S. government from litigating the issue of whether a visa revocation has the effect of rendering the individual out-of-status (see p. 
25 of our report). Our recommendations, if implemented, would help resolve these conflicting views. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
The National Strategy for Homeland Security calls for preventing the entry of foreign terrorists into our country and using all legal means to identify; halt; and, where appropriate, prosecute or bring immigration or other civil charges against terrorists in the United States. GAO reported in October 2002 that the Department of State had revoked visas of certain persons after it learned they might be suspected terrorists, raising concerns that some of these individuals may have entered the United States before or after State's action. Congressional requesters asked GAO to (1) identify the policies and procedures of State, the Immigration and Naturalization Service (INS), and the Federal Bureau of Investigation (FBI) that govern their respective visa revocation actions and (2) determine the effectiveness of the process. The U.S. government has no specific written policy on the use of visa revocations as an antiterrorism tool and no written procedures to guide State in notifying the relevant agencies of visa revocations on terrorism grounds. Further, State, INS, and the FBI do not have written internal procedures for notifying their appropriate personnel to take specific actions on visas revoked by the State Department. State and INS officials said they use the revocation process to prevent suspected terrorists from entering the country, but none of the agencies has a policy that covers investigating, locating, and taking action when a visa holder has already entered. This lack of formal written policies and procedures has contributed to systemic weaknesses in the visa revocation process that increase the possibility of a suspected terrorist entering or remaining in the United States. 
In our review of 240 visa revocations, we found that appropriate units within INS and the FBI did not always receive notifications of all the revocations; names were not consistently posted to the agencies' watch lists of suspected terrorists; 30 individuals whose visas were revoked on terrorism grounds had entered the United States either before or after revocation and may still remain; and INS and the FBI were not routinely taking actions to investigate, locate, or resolve the cases of individuals who remained in the United States after their visas were revoked.
DOD’s health system, TRICARE, currently offers health care coverage to approximately 6.6 million active duty and retired military personnel under age 65 and their dependents and survivors. An additional 1.5 million retirees aged 65 and over can obtain care when space is available. TRICARE offers three health plans: TRICARE Standard, a fee-for-service plan; TRICARE Extra, a preferred provider plan; and TRICARE Prime, a managed care plan. In addition, TRICARE offers prescription drugs at no cost from MTF pharmacies and, with co-payments, from retail pharmacies and DOD’s National Mail Order Pharmacy. Retirees have access to all of TRICARE’s health plans and benefits until they turn 65 and become eligible for Medicare. Subsequently, they can only use military health care on a space-available basis, that is, when MTFs have unused capacity after caring for higher priority beneficiaries. However, MTF capacity varies from a full range of services at major medical centers to limited outpatient care at small clinics. Moreover, the amount of space available in the military health system has decreased during the last decade with the end of the Cold War and subsequent downsizing of military bases and MTFs. Recent moves to contain costs by relying more on military care and less on civilian providers under contract to DOD have also contributed to the decrease in space-available care. Although some retirees age 65 and over rely heavily on military facilities for their health care, most do not, and over 60 percent do not use military health care facilities at all. In addition to using DOD resources, retirees may receive care paid for by Medicare and other public or private insurance for which they are eligible. However, they cannot use their Medicare benefits at MTFs, and Medicare is generally prohibited by law from paying DOD for health care. 
Medicare is a federally financed health insurance program for persons age 65 and over, some people with disabilities, and people with end-stage kidney disease. Eligible beneficiaries are automatically covered by part A, which covers inpatient hospital, skilled nursing facility, and hospice care, as well as home health care that follows a stay in a hospital or skilled nursing facility. They also can pay a monthly premium to join part B, which covers physician and outpatient services as well as those home health services not covered under part A. Traditional Medicare allows beneficiaries to choose any provider that accepts Medicare payment and requires beneficiaries to pay for part of their care. Most beneficiaries have supplemental coverage that reimburses them for many costs not covered by Medicare. Major sources of this coverage include employer-sponsored health insurance; “Medigap” policies, sold by private insurers to individuals; and Medicaid, a joint federal-state program that finances health care for low-income people. The alternative to traditional Medicare, Medicare+Choice, offers beneficiaries the option of enrolling in managed care or other private health plans. All Medicare+Choice plans cover basic Medicare benefits, and many also cover additional benefits such as prescription drugs. Typically, these plans have limited cost sharing but restrict members’ choice of providers and may require an additional monthly premium. Under the Medicare subvention demonstration, DOD established and operated Medicare+Choice managed care plans, called TRICARE Senior Prime, at six sites. Enrollment in Senior Prime was open to military retirees enrolled in Medicare part A and part B who resided within the plan’s service area. About 125,000 dual eligibles (military retirees who were also eligible for Medicare) lived in the 40-mile service areas of the six sites—about one-fifth of all dual eligibles nationwide living within an MTF’s service area. 
DOD capped enrollment at about 28,000 for the demonstration as a whole. Over 26,000 enrolled—about 94 percent of the cap. In addition, retirees enrolled in TRICARE Prime could “age in” to Senior Prime upon reaching age 65, even if the cap had been reached, and about 6,800 did so. Beneficiaries enrolled in the program paid the Medicare part B premium, but no additional premium to DOD. Under Senior Prime, all primary care was provided at MTFs, although DOD purchased some hospital and specialty care from its network of civilian providers. Senior Prime enrollees received the same priority for care at the MTFs as younger retirees enrolled in TRICARE Prime. Care at the MTFs was free of charge for enrollees, but they had to pay any applicable cost-sharing amounts for care in the civilian network (for example, $12 for an office visit). The demonstration authorized Medicare to pay DOD for Medicare-covered health care services provided to retirees at an MTF or through private providers under contract to DOD. As established in the BBA, capitation rates—fixed monthly payments for each enrollee—for the demonstration were discounted from what Medicare would pay private managed care plans in the same areas. However, to receive payment, DOD had to spend at least as much of its own funds in serving this dual-eligible population as it had in the recent past. The six demonstration sites are each in a different TRICARE region and include 10 MTFs that vary in size and types of services offered. (See table 1.) The five MTFs that are medical centers offer a wide range of inpatient services and specialty care as well as primary care. They accounted for over 75 percent of all enrollees in the demonstration, and the two San Antonio medical centers had 38 percent of all enrollees. MTFs that are community hospitals are smaller, have more limited capabilities, and could accommodate fewer Senior Prime enrollees. 
At these smaller facilities, the civilian network provides much of the specialty care. At Dover, the MTF is a clinic that offers only outpatient services, thus requiring all inpatient and specialty care to be obtained at another MTF or purchased from the civilian network. Compared with their access to care before the demonstration, many enrollees reported that their access to care overall—their ability to get care when they needed it—had improved. They reported better access to MTFs as well as to doctors. Although at the start of the demonstration enrollees had reported poorer access to care than nonenrollees, by the end of the demonstration about 90 percent of both groups said that they could get care when they needed it. Enrollees’ own views are supported by administrative data: they got more care than they had received from Medicare and DOD combined before the demonstration. However, most nonenrollees who had relied on MTFs before the demonstration were no longer able to rely on military health care. Most enrollees reported that their ability to get care when they needed it was not changed by the demonstration, but those who did report a change were more likely to say that their access to care—whether at MTFs or from the civilian network—had improved. (See table 2.) When asked specifically about their access to MTF care, those who had not used MTFs in the past reported the greatest improvement. (See figure 1.) About one-third of all enrollees said that their access to physicians had improved, and a significantly smaller fraction said that it had declined. For example, 32 percent of enrollees said that, under the demonstration, their primary care doctor’s office hours were more convenient, while 20 percent said they were less so. Similarly, enrollees said that they did not have to wait too long to get an appointment with a doctor and, once they reached the office, their doctor saw them more promptly. (See table 3.) 
For two aspects of access, however, Senior Prime enrollees’ experience was mixed. TRICARE has established standards for the maximum amount of time that should elapse in different situations between making an appointment and seeing a doctor: 1 month for a well-patient visit, 1 day for an urgent care visit, and 1 week for routine visits. According to TRICARE policy, MTFs should meet these standards 90 percent of the time. While Senior Prime met the standards for the time it took to get an appointment and see a doctor for well-patient visits (like a physical), it fell slightly short of the standard for urgent care visits (such as for an acute injury or illness like a broken arm or shortness of breath) and, more markedly, for routine visits (such as for minor injuries or illnesses like a cold or sore throat). (See table 4.) When asked about their ability to choose their own primary care doctors, enrollees were somewhat more likely to say that it was more difficult than before the demonstration. This is not surprising, in view of the fact that Senior Prime assigned a primary care doctor (or nurse) to each enrollee. However, regarding specialists, enrollees said that their choice of doctors had improved. Enrollees reported fewer financial barriers to access under Senior Prime. They said that their out-of-pocket spending decreased and was more reasonable than before. By the demonstration’s end, nearly two-thirds said that they had no out-of-pocket costs. Even at the smaller demonstration sites, where care from the civilian network, which required co-payments, was more common, about half of enrollees said they had no out-of-pocket costs. These enrollee reports of better access under Senior Prime are largely supported by DOD and Medicare administrative data. Enrollees received more services from Senior Prime than they had obtained before the demonstration from MTFs and Medicare combined. 
Specifically, their use of physicians increased from an average 12 physician visits per year before enrolling in Senior Prime to 16 visits per year after enrollment, and the number of hospital stays per person also increased by 19 percent. Enrollees’ use of services not only increased under Senior Prime—as did other measures of access to care—but exceeded the average level in the broader community. Enrollees used significantly more care than their Medicare fee-for-service counterparts. These differences cannot be explained by either age or health—enrollees were generally younger and healthier. Adjusted for demographics and health conditions, physician visits were 58 percent more frequent for Senior Prime enrollees than for their Medicare counterparts, and hospital stays were 41 percent more frequent. Nonetheless, enrollees’ hospital stays—adjusted for demographics and health conditions—were about 4 percent shorter. We found three probable explanations for enrollees’ greater use of hospital and outpatient care: Lower cost-sharing. Research confirms the commonsense view that patients use more care if it is free. Whereas in traditional Medicare the beneficiary must pay part of the cost of care—for example, 20 percent of the cost of an outpatient visit—in Senior Prime all primary care and most specialty care is free. Lack of strong incentives to limit utilization. Although MTFs generally tried to restrain inappropriate utilization, they did not have strong financial incentives to do so. MTFs cannot spend more than their budget, but space-available care acts as a safety valve: that is, when costs appear likely to exceed funding, space-available care can be reduced while care to Senior Prime enrollees remains unaffected. MTFs also had no direct incentive to limit the use of purchased care, which is funded centrally, and the managed care contractors also lacked an incentive, since they were not at financial risk for Senior Prime. Practice styles. 
Military physicians’ training and experience, as well as the practice styles of their colleagues, also affect their readiness to hospitalize patients as well as their recommendations to patients about follow-up visits and referrals to specialists. Studies have shown that the military health system has higher utilization than the private sector. Given that military physicians tend to spend their careers in the military with relatively little exposure to civilian health care’s incentives and practices, it is not surprising that these patterns of high use would persist. Although nonenrollees generally were not affected by the demonstration, the minority who had been using space-available MTF care were affected because space-available care declined. This decline is shown in our survey results, and is confirmed by DOD’s estimate of the cost of space-available care, which decreased from $183 million in 1996 to $72 million in 1999, the first full year of the demonstration. However, for most nonenrollees, this decline was not an issue, because they did not use MTFs either before or during the demonstration. Furthermore, of those who depended on MTFs for all or most of their care before the demonstration, most enrolled in Senior Prime, thereby assuring their continued access to care. (See figure 2.) Since there was less space-available care than in the past, many of those who had previously used MTFs and did not enroll in Senior Prime were “crowded-out.” Crowd-out varied considerably, depending both on the types of services that nonenrollees needed and the types of physicians and space available at MTFs. Nonenrollees who required certain services were crowded out while others at the same MTF continued to receive care. We focus on nonenrollees who experienced a sharp decline in MTF care: those who said they had received most or all of their care at MTFs before the demonstration but got no care or only some care at MTFs during the demonstration. 
Of those nonenrollees who had previously depended on MTFs for their care, over 60 percent (about 4,600 people) were crowded out. (See figure 3.) The small number of nonenrollees—10 percent of the total—who had depended on MTFs for their care before the demonstration limited crowd-out. (See figure 4.) Consequently, only a small proportion of all nonenrollees—about 6 percent—was crowded out. Somewhat surprisingly, a small number of nonenrollees who had not previously used MTFs began obtaining all or most of their care at MTFs. Although Medicare fee-for-service care increased for those who were crowded out of MTF care, the increase in Medicare outpatient care was not nearly large enough to compensate for the loss of MTF care. (See figure 5.) Retirees who were crowded out had somewhat lower incomes than other nonenrollees and were also less likely to have supplemental insurance, suggesting that some of them may have found it difficult to cover Medicare out-of-pocket costs. By the end of the initial demonstration period, less than half of all nonenrollees said they were able to get care at MTFs when they needed it, a modest decline from before the demonstration. Enrollees’ improved access to care had both positive and negative consequences. Many enrollees in Senior Prime reported that they were more satisfied with nearly all aspects of their care. Some results were neutral: enrollees’ self-reported health status did not change and health outcomes, such as mortality and preventable hospitalizations, were no better than those achieved by nonenrolled military retirees. However, enrollees’ heavy use of health services resulted in high per-person costs for DOD compared to costs of other Medicare beneficiaries. Satisfaction with almost all aspects of care increased for enrollees. Moreover, by the end of the demonstration, their satisfaction was generally as high as that of nonenrollees. 
Patients’ sense of satisfaction or dissatisfaction with their physicians reflects in part their perceptions of their physicians’ clinical and communication skills. Under Senior Prime, many enrollees reported greater satisfaction with both their primary care physicians and specialists. Specifically, enrollees reported greater satisfaction with their physicians’ competence and ability to communicate—to listen, explain, and answer questions, and to coordinate with other physicians about patients’ care. (See table 5.) Senior Prime did not appear to influence three key measures of health outcomes—the mortality rate, self-reported health status, and preventable hospitalizations. Mortality rate. Although there were slightly more deaths among nonenrollees, the difference between enrollees and nonenrollees disappears when we adjust for retirees’ age and their health conditions at the start of the demonstration. Health status. We also found that Senior Prime did not produce any improvement in enrollees’ self-reported health status. We base this on enrollees’ answers to our questions about different aspects of their health, including their ratings of their health in general and of specific areas, such as their ability to climb several flights of stairs. This finding is not surprising, given the relatively short time interval—an average of 19 months—between our two surveys. We also found that, like enrollees, nonenrollees did not experience a significant change in health status. Preventable hospitalizations. The demonstration did not have a clear effect on preventable hospitalizations—those hospitalizations that experts say can often be avoided by appropriate outpatient care. Among patients who had been hospitalized for any reason, the rate of preventable hospitalizations was slightly higher for Senior Prime enrollees than for their Medicare fee-for-service counterparts. 
However, when all those with chronic diseases—whether hospitalized or not— were examined, the rate among Senior Prime enrollees was lower. A less desirable consequence of enrollees’ access to care was its high cost for DOD. Under Senior Prime, DOD’s costs were significantly higher than Medicare fee-for-service costs for comparable patients and comparable benefits. These higher costs did not result from Senior Prime enrollees being sicker or older than Medicare beneficiaries. Instead, they resulted from heavier use of hospitals and, especially, greater use of doctors and other outpatient services. In other words, the increased ability of Senior Prime enrollees to see physicians and receive care translated directly into high DOD costs for the demonstration. From the perspective of enrollees, Senior Prime was highly successful. Their satisfaction with nearly all aspects of their care increased, and by the end of the demonstration enrollees were in general as satisfied as nonenrollees, who largely used civilian care. However, enrollees’ utilization and the cost of their care to DOD were both higher. Although subvention is not expected to continue, the demonstration raises a larger issue for DOD: can it achieve the same high levels of patient satisfaction that it reached in Senior Prime while bringing its utilization and costs closer to the private sector’s? We provided DOD and CMS an opportunity to comment on a draft of this report, and both agencies provided written comments. DOD said that the report was accurate. It noted that the report did not compare Senior Prime enrollees’ utilization rates with those of Medicare+Choice plans and suggested that our comparison with fee-for-service might be misleading, because it did not take account of the richer benefit package offered by Senior Prime. 
DOD further stated that the utilization data should cover the full 3 years of the demonstration experience and that utilization might be higher during the initial phase of a new plan. Finally, DOD stated that access and satisfaction for TRICARE Prime enrollees were adversely affected by the demonstration. CMS agreed with the report’s findings and suggested that higher quality of care might be an explanation for Senior Prime enrollees’ higher use of services. (DOD and CMS comments appear in appendixes VI and VII.) In comparing utilization rates with Medicare fee-for-service in the same areas, we chose a comparison group that would be expected to have higher utilization than Senior Prime or any other managed care plan. Fee-for-service beneficiaries can obtain care from any provider without restriction, whereas Medicare+Choice plans typically have some limitations on access. Consequently, the fact that Senior Prime utilization was substantially higher than fee-for-service utilization is striking. As mandated by law, our evaluation covers the initial demonstration period (through December 2000). We therefore did not attempt to obtain information on utilization during 2001 and, in any case, the lag in data reporting would have prevented our doing so. However, during the first 2 full years of the demonstration utilization declined slightly: outpatient visits in 2000 were 2 percent lower than in 1999. As we have reported elsewhere, site officials found little evidence that the demonstration affected TRICARE Prime enrollees’ satisfaction or access to care. Regarding the possible impact of quality of care on use of services, we examined several health outcome indicators and found no evidence of such an effect. We are sending copies of this report to the Secretary of Defense and the Administrator of the Centers for Medicare and Medicaid Services. We will make copies available to others upon request. 
If you or your staffs have questions about this report, please contact me at (202) 512-7114. Other GAO contacts and staff acknowledgments are listed in appendix VIII. To address the questions Congress asked about Medicare subvention, we fielded a mail survey of military retirees and their family members who were eligible for the subvention demonstration. The survey had two interlocking components: a panel of enrollees and nonenrollees, who were surveyed both at the beginning and the end of the demonstration, and two cross sections or snapshots of enrollees and nonenrollees—one taken at the beginning of the demonstration and the other at the end. To assess those questions that involved change over time, we sampled and surveyed by mail enrollees and nonenrollees, stratified by site, at the beginning of the demonstration. These same respondents were resurveyed from September through December 2000, shortly before the demonstration’s initial period ended. Because a prior report describes our initial survey, this appendix focuses on our second survey. To conduct the second round of data collection, we began with 15,223 respondents from the first round of surveys. To be included in the panel, three criteria had to be met: (1) the person must still be alive, (2) the person must still reside in an official demonstration area, and (3) the person must have maintained the same enrollment status, that is, enrolled or not enrolled. Based on these criteria we mailed 13,332 surveys to our panel sample of enrollees and nonenrollees. Starting with a sample of 13,332 retirees and their family members, we obtained usable questionnaires from 11,986 people, an overall response rate of 91 percent. (See table 6, which also shows the adjustments to the initial sample and to the estimated population size. See table 7 for the reasons for nonresponse.) 
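The response-rate arithmetic above can be sketched as follows. The adjustment for ineligible sample members (e.g., deceased or relocated retirees) is the kind of correction the report's table 6 describes, but the ineligible count used here is hypothetical, since the table itself is not reproduced.

```python
def response_rate(usable, mailed, ineligible=0):
    """Usable questionnaires divided by the eligible sample.
    Surveys later found ineligible are removed from the denominator --
    an adjustment of the kind the report's table 6 details."""
    return usable / (mailed - ineligible)

unadjusted = response_rate(11986, 13332)                 # about 0.90
adjusted = response_rate(11986, 13332, ineligible=160)   # hypothetical adjustment
```

With a modest downward adjustment to the denominator, the rate moves from roughly 90 percent to the reported 91 percent.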
To enable comparisons between enrollees and nonenrollees at the end of the demonstration, the second survey was augmented to include persons who had enrolled since the first survey as well as additional nonenrollees. The overall composition of the Senior Prime enrollee population had changed from the time of our first survey. When we drew our second sample in July 2000, 36 percent of all enrollees were new—that is, they had enrolled since our first survey—and over two-fifths of them were age-ins who had turned 65 since the demonstration started. From the time of our first survey to the time of our second survey, only 861 people had disenrolled from Senior Prime. Therefore, we surveyed all voluntary disenrollees. Data from all respondents—those we surveyed for the first time as well as those in the panel—were weighted, to yield a representative sample of the demonstration population at the end of the program. The sample for the cross section study included the panel sample as well as the augmented populations. We defined our population as all Medicare-eligible military retirees living in the demonstration sites and eligible for Senior Prime. The sample of new enrollees was drawn from all those enrolled in the demonstration according to the Iowa Foundation’s enrollment files. The supplemental sample of nonenrollees was drawn from all retirees age 65 and over in the Defense Enrollment Eligibility Reporting System who (1) had both Medicare part A and part B coverage, (2) lived within the official demonstration zip codes, (3) were not enrolled in Senior Prime, and (4) were not part of our first sample. We stratified our sample of new enrollees and new nonenrollees by site and by whether they aged in. We oversampled each stratum to have a large enough number to conduct analyses of subpopulations. The total sample for all sites was 23,967, drawn from a population of 117,618. 
Starting with a sample of 23,967 retirees and their family members, we obtained complete and usable questionnaires from 20,870 people, an overall response rate of 88 percent. (See table 8, which also shows the adjustments to the initial sample and to the estimated population size. See table 9, which shows the reasons for nonresponse.) Response rates varied across sites and subpopulations. Rates ranged from 95.3 percent among aged-in new enrollees to 66.7 percent among disenrollees. The original questionnaire that was sent to our panel sample was created based on a review of the literature and five existing survey instruments. In addition, we pretested the instrument with several retiree groups. For the second round of data collection, we created four different versions of the questionnaire, based on the original questionnaire. The four versions were nearly the same, with some differences in the sections on Senior Prime and health insurance coverage. (See table 10 for a complete list of all the survey questions used in our analyses.) For the panel sample, our objective was to collect the same data at two points in time. Therefore, in constructing the questionnaires for the panel enrollees and panel nonenrollees we essentially used the same instrument as the original survey to answer questions about the effect of the demonstration on access to care, quality of care, health care use, and out-of-pocket costs. However, we modified our questions about plan satisfaction and health insurance coverage. In constructing the questionnaires for the new enrollees, we generally adopted the same questions in the panel enrollee instrument to measure access to care, quality of care, health care use, and out-of-pocket costs. However, we also asked the new enrollees about their health care experiences in the 12 months before they joined Senior Prime. 
For new nonenrollees, we were able to use the same instrument as we had used for the panel nonenrollees, because their health care experiences were not related to tenure in Senior Prime. Finally, the disenrollee questionnaire, like the other versions, did not change from the original instrument in the measures on access to care, quality of care, health care use, and out-of-pocket costs. However, we added questions on the reasons for disenrollment. To detect the effects the demonstration had on both enrollees’ and nonenrollees’ access to care and satisfaction with care, we compared the differences between survey responses at both points in time and among each demonstration site. For most questions, retirees were asked both before the demonstration and at the end of the demonstration how much they agreed or disagreed with each statement. They were given five possible answers: strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree. To calculate change, responses were assigned a numeric value on a five-point scale, with five being the highest and one being the lowest. To properly quantify the response, some scales had to be reversed. Where necessary, questions were rescaled so that “agree” represents a positive answer and “disagree” a negative answer. To obtain a measure of change, the value of the response from the first survey was subtracted from the value of the response from the second survey. A positive value indicates improvement; a negative value indicates decline. The net improvement is calculated as the difference between the proportion of respondents within each sample population who improved and the proportion of those who declined. Four separate significance tests were performed. (See table 11.) The first test was for net improvement (the difference between improved and declined) among enrollees. The second test was for net improvement among nonenrollees. 
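As a minimal sketch of the change-score and net-improvement calculation just described (the responses below are illustrative, not actual survey data):

```python
# Five-point scale, already oriented so that "agree" is positive.
SCALE = {"strongly disagree": 1, "disagree": 2,
         "neither agree nor disagree": 3,
         "agree": 4, "strongly agree": 5}

def net_improvement(first, second):
    """Share of respondents whose score rose between the two surveys
    minus the share whose score fell; unchanged answers count as zero."""
    changes = [SCALE[s] - SCALE[f] for f, s in zip(first, second)]
    improved = sum(c > 0 for c in changes) / len(changes)
    declined = sum(c < 0 for c in changes) / len(changes)
    return improved - declined

first = ["disagree", "agree", "neither agree nor disagree", "strongly agree"]
second = ["agree", "agree", "disagree", "strongly agree"]
net = net_improvement(first, second)  # 1/4 improved, 1/4 declined -> 0.0
```

A positive result means more respondents improved than declined; zero means the two proportions offset exactly.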
The third test was for the difference of net improvement between enrollees and nonenrollees. Finally, we tested whether the net improvement for each site is significantly different from the net improvement of the other sites. (See tables 11 and 12.) In addition to the change in access and quality among enrollees and nonenrollees, we also examined the level of access and quality at the time of the second survey among the cross section sample. (See table 12.) Three separate significance tests were performed. The first test of significance was between enrollees and nonenrollees who said they strongly agreed with each statement. The second test of significance was between enrollees and nonenrollees who said they either strongly agreed or agreed with each statement. The final test was whether the site percentage differs significantly from the overall percentage. In this appendix, we describe the DOD and Medicare data that we used to analyze utilization. We also summarize the models that we developed to risk adjust acute inpatient care and outpatient care and give results both demonstration-wide and by site. For these analyses, we defined the Senior Prime enrollee population as those who had enrolled as of December 31, 1999. We used DOD data for 1999 as the source of our counts of hospital stays and outpatient visits to both MTF and civilian network providers. We limited our analysis to hospital stays of 1 day or more to eliminate inconsistencies between Medicare and TRICARE in the use of same-day discharges. Our counts of outpatient utilization include (1) visits and ambulatory surgeries in MTF outpatient clinics and (2) visits to network providers—doctors’ offices, ambulatory surgeries, hospital emergency rooms, and hospital outpatient clinics. To identify our comparison group of fee-for-service beneficiaries in the demonstration areas, we used CMS’ 20-percent Medicare sample, and extracted those beneficiaries residing in the subvention areas. 
We excluded anyone who had been in a Medicare+Choice plan for any part of the year. To make the comparison fair, we also excluded certain groups not represented or only minimally represented in Senior Prime: persons with end-stage renal disease (ESRD), Medicaid beneficiaries, persons with disabilities (under age 65), and people who lost Medicare part A or part B entitlement for reasons other than death. We derived our counts of Medicare fee-for-service utilization for the sample from Medicare claims files. For those who were in either Senior Prime or fee-for-service for less than a full year, we estimated full-year utilization counts. We identified a separate comparison group of persons eligible for the demonstration who did not enroll. We collected both Medicare fee-for-service claims and DOD encounter data for the sample of enrollees and nonenrollees who answered both our first and second surveys. In order to compare the utilization of Senior Prime enrollees to Medicare fee-for-service beneficiaries in the demonstration areas, we developed several models of fee-for-service utilization (for hospitalization, length of stay, and outpatient care). We then applied each model to Senior Prime enrollees—taking account of their demographic characteristics and health status—to predict what their utilization would have been in Medicare fee-for-service. The ratio of their predicted utilization to their actual Senior Prime utilization gives a measure of the amount by which Senior Prime utilization exceeded or fell short of fee-for-service utilization for people with the enrollees’ characteristics. Table 13 compares the characteristics of Senior Prime enrollees with Medicare fee-for-service beneficiaries in the demonstration area. 
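The predicted-versus-actual comparison can be sketched as an observed-to-expected ratio; the per-person counts below are hypothetical illustrations, not demonstration data:

```python
def excess_utilization(actual, predicted_ffs):
    """Total actual Senior Prime utilization divided by the total a
    fee-for-service model predicts for people with the same demographics
    and health status.  A ratio of 1.58 would correspond to the report's
    finding of 58 percent more physician visits than fee-for-service."""
    return sum(actual) / sum(predicted_ffs)

# Hypothetical annual physician-visit counts for three enrollees.
actual_visits = [16, 20, 12]      # observed under Senior Prime
predicted_visits = [10, 12, 8]    # predicted by the fee-for-service model
ratio = excess_utilization(actual_visits, predicted_visits)  # 48/30 = 1.6
```

A ratio above 1 indicates heavier use than the fee-for-service expectation; below 1, lighter use.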
Acute hospitalization is a relatively rare event: only one out of five Medicare beneficiaries (in the counterpart 20-percent fee-for-service sample) is hospitalized during the year, and about half of those who are hospitalized are admitted again during the same year. We therefore used Poisson regression, which is designed to predict the number of occurrences (counts) of a rare event during a fixed time frame, to estimate the number of acute hospitalizations. Positive coefficients are interpreted as reflecting factors that increase the hospitalization rate while negative coefficients indicate a decrease in that rate. The strongest factor affecting the number of hospitalizations is the HCC score, which measures how ill and how costly a person is. Its effect is not linear—both squared and cubed terms enter the model. (See table 14.) Diagnostic groupings are based on the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM); they include, for example, endocrine, nutritional, and metabolic diseases and immunity disorders; diseases of the nervous system and sense organs; diseases of the musculoskeletal system and connective tissue; and the supplementary classification (V01-V82). Using the same approach and models, we examined utilization at each site. (See table 16.) Adjusting for risk, both hospital stays and outpatient visits were substantially greater in Senior Prime than in fee-for-service at all sites. However, the differences in length of stay were small, with lengths of stay generally higher in fee-for-service. “Crowd-outs” were nonenrollees who had used MTF care before the demonstration but were unable to do so after the demonstration started. In this report, we define crowd-outs as those 4,594 nonenrollees (6 percent of all nonenrollees) who had, according to their survey answers, received all or most of their care at an MTF before the demonstration but received none or only some of their care at an MTF after the demonstration started. 
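A sketch of the crowd-out classification, including the narrower and broader alternative definitions the report considers; the encoding of the survey answers as four ordered levels is an assumption for illustration:

```python
# Ordered survey answers about how much of a retiree's care came from MTFs.
LEVELS = ["none", "some", "most", "all"]

def crowded_out(before, after, definition="report"):
    """Classify a nonenrollee under three alternative definitions:
    "narrow" -- all MTF care before the demonstration, none after;
    "report" -- all or most before, none or only some after;
    "broad"  -- any decline in MTF reliance."""
    b, a = LEVELS.index(before), LEVELS.index(after)
    if definition == "narrow":
        return before == "all" and after == "none"
    if definition == "report":
        return before in ("all", "most") and after in ("none", "some")
    if definition == "broad":
        return a < b
    raise ValueError(f"unknown definition: {definition}")
```

The broader the definition, the larger the count: in the report, 1,498 nonenrollees under the narrow definition, 4,594 under the main one, and 12,133 when any decline counts.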
However, as table 17 shows, crowd-out can be defined either more narrowly or more broadly. By the narrowest definition of crowd-out—those nonenrollees who received all of their care at an MTF before the demonstration but none of their care at an MTF after the demonstration started—only 1,498 persons (2 percent of all nonenrollees) were crowded out. However, if we count all those who received less care than before, 12,133 nonenrollees (16 percent) were crowded out. As expected, many of the 4,594 nonenrollees whom we characterized as crowd-outs changed their attitudes toward military care during the demonstration. As shown in table 18, they reported a decline in access to MTF care as well as lower satisfaction with care in MTFs. However, they did not report significant changes in satisfaction on issues not explicitly connected to MTFs. DOD’s MTF encounter data and network claims data confirmed the self-reports of crowd-outs. The crowd-outs’ MTF outpatient care dropped dramatically during the demonstration, and the increase in fee-for-service (FFS) outpatient visits was not sufficient to offset this decline. However, as shown in table 19, there was no decline in acute hospitalizations. In this appendix, we describe our methods for analyzing the effects of the subvention demonstration on three indicators of health outcomes—mortality, health status, and preventable hospitalization. Using our first survey, we calculated the mortality rate from the date of the survey response to January 31, 2001. The source of death information was the Medicare Enrollment Database. We excluded Medicare+Choice members because we could not obtain their diagnoses, which we needed to calculate risk factors. The unadjusted 2-year mortality rate was 0.06 for Senior Prime enrollees and 0.08 for nonenrollees. Although the difference is significant, it disappears when we adjust for individual risk. The adjusted 2-year mortality rate is 0.06 for both enrollees and nonenrollees. 
(See table 20.) We used the Cox proportional hazard model to calculate individuals’ risk-adjusted mortality rate. A hazard ratio greater than 1 indicates a higher risk of death, while a hazard ratio less than 1 indicates a lower risk. For example, a hazard ratio for males of 1.5 means that males are 50 percent more likely to die than females, holding other factors constant. Similarly, a hazard ratio of 0.5 for retirees with HCC scores in the lowest quartile means that they are 50 percent less likely to die than those with HCC scores in the middle two quartiles, holding other factors constant. Enrollment in Senior Prime did not have a significant effect on mortality. (See table 21 for a description of the factors that entered our model and of their estimated effects.) We measured self-reported health status with the SF-12 physical and mental health summary scales (see Ware, J. E., Kosinski, M., and Keller, S. D., SF-12: How to Score the SF-12 Physical and Mental Health Summary Scales, The Health Institute, New England Medical Center, Second Edition, pp. 12-13). The change in the score between the two surveys was also insignificant. We examined both the unadjusted score and the adjusted score, using a linear regression model (see table 23), but neither was significant, and enrollment in Senior Prime was not a significant factor in the model. We analyzed preventable hospitalizations—hospital stays that can often be avoided by appropriate outpatient care—using several alternate models. Specifically, we estimated the effect of Senior Prime enrollment on the likelihood of having a preventable hospitalization, adjusting for age, sex, and health conditions. Measures of a person’s health conditions included the HCC score, an index of comorbidities, and the number of recent hospitalizations. In addition, we controlled for the number of outpatient clinic and physician visits, since outpatient care is considered a means of preventing hospitalization. 
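As a numerical check on the hazard-ratio interpretation used for the Cox model above: a hazard ratio is simply the exponentiated model coefficient. The coefficient values below are hypothetical, chosen only to reproduce the 1.5 and 0.5 examples in the text, not taken from the report's table 21.

```python
import math

# Hypothetical Cox coefficients (illustrative, not the report's estimates)
coefs = {"male": 0.405, "lowest_hcc_quartile": -0.693}

# Exponentiating a coefficient gives the hazard ratio: about 1.5 for males
# (50 percent higher risk of death) and about 0.5 for the lowest HCC
# quartile (50 percent lower risk), holding other factors constant.
hazard_ratios = {k: math.exp(v) for k, v in coefs.items()}
```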
We analyzed data on Senior Prime enrollees and on Medicare fee-for-service beneficiaries who were not military retirees and who lived in the demonstration areas. Within this combined group of enrollees and fee-for-service beneficiaries, we modeled preventable hospitalizations for two populations: (1) those who had been hospitalized in 1999 and (2) those who had at least one chronic disease in 1999—whether they had been hospitalized or not. Our analysis of the demonstration’s effect on preventable hospitalizations yielded inconsistent results. For the first population (hospitalizations), we found that Senior Prime enrollment was associated with more preventable hospitalizations. By contrast, for the second population (the chronically ill), Senior Prime enrollment was associated with fewer preventable hospitalizations. Other GAO staff who made significant contributions to this work included Jessica Farb, Maria Kronenburg, and Dae Park. Robin Burke provided technical advice and Martha Wood provided technical advice and assistance.

Medicare Subvention Demonstration: DOD Costs and Medicare Spending (GAO-02-67, Oct. 31, 2001).

Medicare Subvention Demonstration: DOD’s Pilot Appealed to Seniors, Underscored Management Complexities (GAO-01-671, June 14, 2001).

Medicare Subvention Demonstration: Enrollment in DOD Pilot Reflects Retiree Experiences and Local Markets (GAO/HEHS-00-35, Jan. 31, 2000).

Defense Health Care: Appointment Timeliness Goals Not Met; Measurement Tools Need Improvement (GAO/HEHS-99-168, Sept. 30, 1999).

Medicare Subvention Demonstration: DOD Start-up Overcame Obstacles, Yields Lessons, and Raises Issues (GAO/GGD/HEHS-99-161, Sept. 28, 1999).

Medicare Subvention Demonstration: DOD Data Limitations May Require Adjustments and Raise Broader Concerns (GAO/HEHS-99-39, May 28, 1999). 
In the Balanced Budget Act of 1997, Congress established a three-year demonstration, called Medicare subvention, to improve the access of Medicare-eligible military retirees to care at military treatment facilities (MTF). The demonstration allowed Medicare-eligible retirees to get their health care largely at MTFs by enrolling in a Department of Defense (DOD) Medicare managed care organization known as TRICARE Senior Prime. During the subvention demonstration, access to health care for many retirees who enrolled in Senior Prime improved, while access to MTF care for some of those who did not enroll declined. Many enrollees in Senior Prime said they were better able to get care when they needed it. They also reported better access to doctors in general as well as care at MTFs. Enrollees generally were more satisfied with their care than before the demonstration. However, the demonstration did not improve enrollees' self-reported health status. In addition, compared to nonenrollees, enrollees did not have better health outcomes, as measured by their mortality rates and rates of "preventable" hospitalizations. Moreover, DOD's costs were high, reflecting enrollees' heavy use of hospitals and doctors.
To apply for disability benefits through either of SSA’s disability programs—DI or SSI—individuals submit a claim in person, by telephone, by mail, or online. The application and related forms ask for a description of the claimant's impairment (or impairments); sources of the claimant’s treatment, such as doctors, hospitals, clinics, and other institutions; and other information related to the disability claim. SSA assesses the claimant’s non-medical eligibility for benefits and sends the claim to a state DDS office for a review of the claimant’s medical eligibility. Although SSA is responsible for the programs, the law generally calls for initial determinations of disability to be made by state agencies. An individual meets the definition of disability for these programs if the individual has a medically determinable physical or mental impairment that (1) prevents the individual from engaging in any substantial gainful activity, and (2) has lasted or is expected to last at least 1 year or is expected to result in death. As part of the medical determination process, DDS examiners assemble medical and vocational information for the claim, including medical evidence from the claimant’s medical providers. If that evidence is unavailable or insufficient to make a determination regarding the claimant’s eligibility for benefits, the DDS office will arrange for a consultative exam to obtain additional information. DDS examiners assess the applicant’s medical condition against SSA’s Listings of Impairments (medical listings), which contain medical conditions that have been determined by the agency to be severe enough to qualify an applicant for disability benefits. Based on this assessment, a DDS examiner decides whether to medically allow or deny a claim for DI or SSI benefits. SSA began CAL in October 2008 with the stated goal of providing expedited benefit processing to those with certain medical conditions whose claims are likely to be approved. 
According to SSA documents, expedited benefit processing through CAL helps to lessen the emotional and financial hardship that claimants might otherwise experience as a result of delays in SSA’s disability process. At the time of its inception, the initiative was also considered a way to help reduce disability claim backlogs. SSA has expanded the number of conditions—and thus the number of claimants—which qualify for CAL over time. When the CAL list debuted, it contained 50 conditions—25 rare diseases and 25 cancers. In the years that followed, SSA added more conditions to the list in batches, eventually expanding it to 225 conditions as of April 2017. (See appendix II for the current list of CAL conditions.) CAL claims may be processed more quickly than other claims, in part because they are given priority status and requests for medical evidence to substantiate these claims can be expedited. When a claimant submits a claim for disability benefits, it is flagged as CAL if the claimant’s description of his or her impairment includes certain key words or phrases signifying the claimant has a CAL condition. Certain expedited processing rules apply to claims that are flagged for CAL. These claims are given priority in disability examiners’ and medical consultants’ queues of incoming claims, and SSA guidance directs DDS offices to initiate development within one working day of receiving a CAL claim. Examiners also use expedited procedures for requesting and following up on requests for medical evidence for CAL claims, and may only require a minimal amount of medical evidence, for example, a biopsy report, to confirm the claimant’s diagnosis of a CAL condition. To assist examiners in deciding these claims, SSA has developed detailed descriptions of each of the CAL conditions, known as impairment summaries. (See appendix III for an example of an impairment summary.) 
Among other things, these summaries suggest specific medical evidence for the examiner to obtain to verify the CAL condition and indicate relevant medical listings. DDS examiners assess a CAL claimant’s medical condition against SSA’s medical listings and allow or deny the claim, per the general disability determination process previously noted. CAL is one of several expedited processing initiatives SSA has implemented, consistent with SSA’s focus on the timely processing of disability claims. For example, whereas CAL applies to claims of certain medical conditions, SSA’s Terminal Illness (TERI) initiative focuses on claims involving terminal illnesses, and its Quick Disability Determination (QDD) initiative focuses on various characteristics of the case file, such as whether evidence of the claimant’s allegation(s) is expected to be readily available. Claims flagged for CAL may also be flagged as TERI or QDD. SSA’s annual performance report for fiscal years 2015 through 2017 states that the agency aims to improve the quality, consistency, and timeliness of its disability decisions to help achieve its strategic goal of serving the public through a stronger, more responsive disability program. SSA, in consultation with the Office of Management and Budget, has highlighted this objective as a focus area for improvement. From 2007, the year prior to the initiative’s inception, through 2011, SSA used public hearings to convene stakeholders and obtain information on categories of conditions identified by the agency for potential CAL consideration (see fig. 1). For example, SSA officials said that they decided to add 12 cardiac-related conditions to the CAL list on the basis of testimony received during their November 2010 hearing on cardiovascular disease and multiple organ transplants. 
SSA officials said that because of resource limitations, they have not convened a CAL hearing since March 2011, although they said that since that time, they have researched and added conditions to the CAL list that were suggested at the earlier hearings. Since 2011, SSA has also relied on advocates for individuals with certain diseases and disorders to bring conditions to the agency’s attention, rather than proactively and systematically reviewing conditions to identify potential additions to the CAL list. Of the 137 conditions added to the CAL list since the agency stopped holding CAL hearings in 2011, 55 conditions were based on suggestions from the hearings; suggestions from advocates, including members of the public, account for 51 conditions; and the remainder resulted from suggestions made by SSA and DDS staff as well as other researchers. Although it has relied on advocate suggestions to identify potential conditions to add to the CAL list in recent years, SSA has not clearly communicated this or provided guidance on how to make suggestions through its CAL webpage, which communicates information to the public. Advocates who are interested in having a disease or disorder considered for inclusion on the CAL list may contact SSA through a general purpose CAL email address included on the CAL webpage. While the webpage acknowledges advocates have previously recommended potential CAL conditions to SSA, it does not explicitly invite advocates to propose new conditions. Of representatives from the five advocacy organizations we interviewed that successfully had conditions added to the CAL list, representatives from four of these organizations said they had first learned about CAL through contact with SSA officials and others aware of the initiative, rather than through SSA’s website. The website also does not describe what information advocates could present to the agency that would assist SSA’s consideration of a condition. 
Without more explicit instructions, advocates may not provide information that is relevant for SSA’s decision-making or that most strongly makes their case. One representative from an advocacy organization, for example, described meeting with agency officials and being surprised by SSA’s focus on cancer grades—an indicator of how quickly cancer is likely to grow and spread—as she was not accustomed to discussing the condition she represents in these terms. Federal internal control standards state that agencies should use quality information to achieve their objectives. Absent clear guidance to advocates on how to make suggestions through its CAL webpage, SSA is missing an opportunity to gather quality information to inform its selection of CAL conditions. Further, SSA has also not consistently communicated with advocates who have suggested conditions to add to the CAL list about the status of their recommendations, leading to uncertainty for some. SSA officials told us that they provide a written or oral response to advocacy organizations that have suggested a condition for inclusion on the CAL list to inform them whether the condition is approved. However, we spoke with advocates who had not received such a response from SSA and who found it challenging to connect with SSA officials to obtain information about the status of their suggestion. One representative from an advocacy organization told us that she was unable to reach SSA officials to obtain any information on the status of her suggestion despite repeated attempts. In the absence of a response from SSA, she had resubmitted her condition and supporting documents to SSA every six months for three years since her initial submission in 2014. Representatives of the three other advocacy organizations we interviewed who had unsuccessfully attempted to get conditions added to the CAL list told us that they did not know if SSA’s decision was final. 
Federal internal control standards state that agencies should communicate quality information externally so that external parties can help the agency achieve its objectives. Without two-way communication between SSA and advocates, advocates are unclear on the status of their proposed CAL conditions and SSA may miss an opportunity to improve the quality of the information it obtains from advocates. SSA has met with advocates to share information at the advocates’ request, but has not conducted outreach efforts that are structured to reach all advocates. Since the last CAL public hearing in 2011, SSA has hosted teleconferences and webinars on CAL for 14 advocacy organizations, but because these are provided at the request of advocacy organizations, advocates need to be already aware of CAL in order to request them. Since 2007, SSA has also had a partnership with the National Organization for Rare Disorders (NORD), which serves as a liaison between SSA and the more than 260 rare disease organizations NORD represents, as well as affected patients, families, and medical professionals. NORD officials told us that they have advised their member organizations on how to approach SSA regarding the potential addition of conditions to the CAL list and how to most effectively make their case. For example, one advocate told us that when she was unable to find information on SSA’s CAL webpage about what information to include when submitting a condition to SSA for CAL consideration, she contacted NORD to learn what other member organizations had provided SSA. However, NORD membership is limited to patient groups that represent a rare condition and have medical advisors on their board, so not all advocates that may want to submit a condition to SSA for consideration have access to this resource. Relying on advocates to bring conditions to SSA’s attention introduces potential bias toward certain conditions and the possibility of missing others. 
Federal internal control standards state that agencies should collect complete and unbiased information and consider the reliability of their information sources. All conditions that are potentially relevant for CAL consideration may not have advocacy organizations affiliated with them, and some advocates may be unaware of CAL, potentially resulting in SSA missing some conditions that are appropriate for CAL. As a result, some conditions may have a better chance of being considered than other, equally deserving ones that are not proposed, and individuals with those conditions may have to wait longer to receive approval for disability benefits. According to some external researchers who work with SSA, an approach leveraging SSA’s administrative data may help address the bias that is introduced by only using advocates. SSA has contracted with NIH and the National Academies for research using SSA administrative data on aspects of CAL, including the identification of potential CAL conditions, and the disability determination process generally. However, to date, SSA has not contracted for research that is sufficiently targeted to generate more than a small number of additions to the CAL list. For example, as part of an interagency agreement with SSA, NIH identified 27 potential CAL conditions—25 in 2011 and 2 in 2016—but of these, SSA has only added 4 to the CAL list. NIH identified the potential CAL conditions by comparing the likelihood of death during the adjudication process for claimants with non-CAL conditions to those with CAL conditions. Although likelihood of death relates to the definition of disability, SSA officials said it is not a factor specifically considered when designating conditions as CAL, and most of the conditions identified by NIH were not approved for CAL for various reasons. 
For example, SSA officials told us that they did not add some of the recommended conditions to the CAL list because claims with some of these conditions could not be identified in an accurate and consistent manner based on the claimant’s description of his or her condition provided at the time the claim is submitted. In addition to the NIH research efforts, SSA has a multiyear contract with the National Academies that is focused on the disability programs’ adjudication process, rather than CAL in particular. In response to recommendations in a 2010 report from the National Academies, SSA added 4 conditions to the CAL list. SSA has generally described CAL conditions as those that “invariably qualify as allowances under the Listing of Impairments based on minimal objective medical evidence,” according to SSA officials. However, SSA has not developed or communicated clear, consistent criteria for designating conditions as CAL conditions. As previously mentioned, SSA’s website has limited information on CAL, and the agency does not include information about specific CAL condition criteria. Officials told us that they have informally considered allowance rates—the percentage of claimants asserting a certain condition who are approved for benefits—when identifying potential CAL conditions. However, SSA officials could not provide any documentation that shows that they have established an allowance rate minimum for CAL, or that they track data on allowance rates when assessing potential CAL conditions. Further, SSA officials and documents we reviewed refer to certain conditions being a good candidate for CAL if they have a high probability of being allowed, but cite inconsistent allowance rate cut-offs. For example, SSA officials told us that they aim to identify conditions for CAL in which approximately 92 percent of claimants asserting those conditions are allowed for disability benefits. 
However, a CAL process document states that conditions with over a 95 percent allowance rate, as well as those with 85 to 95 percent allowance rates, are considered for CAL. SSA officials also cited their ability to program the selection software used to identify claims with a CAL condition as a secondary criterion for including it on the CAL list, although they did not indicate they have clearly or consistently defined this criterion. SSA officials said this criterion is important because they aim to reduce the number of false positives—claims that are erroneously flagged by the software for CAL processing. However, neither SSA officials nor SSA’s documentation of the steps taken to evaluate CAL conditions indicated a maximum threshold for false positives. SSA also lacks a formal process for documenting its decisions on CAL conditions, as it does not have a template, checklist, or guidance—other than the medical listings—that its staff consult when preparing reports on potential CAL conditions. We reviewed 31 assessments of potential CAL conditions prepared by SSA medical consultants and found that they commented on various aspects of the conditions, including ease of identification through diagnostic testing as well as severity and rarity. However, there was no standard format used for these reports, and we were not able to determine the weight given to each of these factors nor whether all relevant information had been considered. Moreover, the reports did not cite allowance rates or the ability to program the selection software to identify these conditions as factors that were considered by the medical consultants. Because SSA does not have consistent, clear criteria or clear documentation of its decision-making, those who have proposed conditions for CAL are sometimes confused as to why these conditions are not included on the CAL list. 
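The allowance-rate screen described in the process document can be expressed as a simple decision rule. The sketch below applies only the over-95 percent and 85-to-95 percent bands cited in that document (not the roughly 92 percent figure officials mentioned), and the condition names and rates are hypothetical.

```python
# Sketch of the allowance-rate bands described in SSA's CAL process
# document; condition names and rates are hypothetical.
def allowance_band(rate):
    """Classify a condition's allowance rate against the documented bands."""
    if rate > 0.95:
        return "consider (>95%)"
    if rate >= 0.85:
        return "consider (85-95%)"
    return "below cut-off"

candidates = {"condition A": 0.97, "condition B": 0.90, "condition C": 0.70}
screened = {name: allowance_band(r) for name, r in candidates.items()}
```

Making the bands explicit in this way also makes the inconsistency visible: a single documented threshold (or a documented rationale for multiple bands) would remove the ambiguity advocates encounter.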
Although SSA officials told us the agency uses allowance rates and the ability to program the selection software to identify CAL conditions, SSA officials cited different reasons for not designating conditions as CAL in communications with those who proposed the conditions. For example, in an email provided to an advocacy organization that had attempted to get a condition added to the CAL list, SSA officials wrote that the condition was not being added because its symptoms, progression, and severity were variable and individual in nature. Another advocate told us that based on conversations with SSA officials, her understanding was that there was limited space on the CAL list for conditions that were not cancer related, and that SSA considered the number of people impacted by a condition as a criterion for CAL. Further, two of the four advocates we spoke with who unsuccessfully proposed conditions for the CAL list also said that they did not understand why these conditions were not added to the list while others were. Unclear criteria and a lack of formal procedures for documenting decisions on potential CAL conditions can lead to confusion among advocates and other stakeholders and also may result in SSA missing conditions that could qualify for CAL or adding conditions for which claims are less likely to qualify as allowances or be expedited. Federal internal control standards state that agencies should define objectives in specific and measurable terms so that they are understood at all levels of the agency and performance toward achieving these objectives can be assessed. To help achieve these objectives, the standards state that agencies should also communicate key information to their internal and external stakeholders. SSA relies primarily on selection software to identify CAL disability claims based on a word-search of the impairment description included in a claim for benefits. 
However, the software cannot identify all claimants asserting CAL conditions in part because text provided by claimants may be ambiguous, incomplete, or inaccurate. This is because the same medical condition might be abbreviated or described in different ways by different claimants. For example, in our review of 74 claim files, we found one claim with the description “Stage 4 breast cancer/brain/lung/liver/ kidney cancer” that was identified by the selection software for an advanced stage of lung cancer, among other CAL conditions, whereas claims describing “lung mass large mediastinal mass with suspected…brain metastis,” “lung cancer stage 3-4,” and “Lung Cancer terminal” were not identified by the software for CAL. DDS officials we interviewed in 4 of 6 offices similarly said that some claims may not include a complete description of the condition or use the correct medical terms. A related challenge to identifying CAL conditions with the software is that some CAL conditions specify a certain disease stage or severity, but the text in the claim may not provide that information. For example, claimants with non-small cell lung cancer must be at a stage IIIB or IV level of severity to qualify for CAL. SSA officials said that some applicants may not indicate the full extent of their impairments on their disability claim because they may not have come to terms with the gravity of their condition. Because some claimants misspell words describing their conditions, the selection software may also omit a CAL flag on claims that should be flagged. For example, in its work with SSA on CAL, NIH found more than 170 misspellings of “adenocarcinoma,” a type of cancerous tumor that is present in some CAL conditions. 
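A misspelling-tolerant matcher illustrates one way such false negatives can be reduced. This is a minimal stand-in, not SSA's actual selection software: the keyword list is an illustrative subset, and Python's standard-library difflib supplies the approximate string matching.

```python
import difflib

# Illustrative subset of CAL key words and phrases (not SSA's actual list)
CAL_KEYWORDS = ["adenocarcinoma", "leiomyosarcoma", "liver cancer"]

def flag_cal(impairment_text, cutoff=0.85):
    """Return True if any unigram or bigram in the claimant's impairment
    description approximately matches a CAL keyword, tolerating
    misspellings via difflib's similarity ratio."""
    words = impairment_text.lower().replace("/", " ").split()
    tokens = words + [" ".join(pair) for pair in zip(words, words[1:])]
    return any(
        difflib.get_close_matches(token, CAL_KEYWORDS, n=1, cutoff=cutoff)
        for token in tokens
    )
```

With this tolerant rule, a claim reading "leiomysarcoma of the uterus" is still flagged despite the misspelling, whereas an exact-match rule would miss it; raising the cutoff trades fewer false positives for more false negatives, the same tension NIH officials described.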
In our claim file review, we found a claimant asserting a leiomyosarcoma, a soft tissue tumor that may be found in organs including the liver, lungs, and uterus, who misspelled the term as “leiomysarcoma” on the disability claim, which resulted in the software not flagging the claim as CAL, although liver and lung cancers are CAL conditions. SSA’s Office of the Inspector General found this same issue in its 2010 report on CAL, as 60 percent of sampled claims appeared to assert a CAL condition but did not use the correct spelling or provide enough detail for SSA’s systems to automatically identify the claims as CAL. After the Office of the Inspector General’s report, SSA took steps to try to address the software’s limitations related to misspellings. (See fig. 2.) SSA, through an interagency agreement with NIH, initiated an effort in 2016 to improve the current selection software, specifically to reduce the number of false CAL positives and negatives, among other goals. According to NIH officials we spoke with, there are necessary tradeoffs between aiming for precision in the selection software, which could exclude eligible claimants, and inclusiveness, which could flag claims that are not, in fact, CAL. As part of their ongoing analysis, NIH officials have identified strengths and limitations of some of the existing rules for the selection software, such as the key words and phrases that prompt the software to add a CAL flag, and recommended improvements to enhance the accuracy of the selection software. In March 2017, SSA officials told us that they had not yet determined if suggestions NIH had made would be included in updates to the selection software. DDS officials we interviewed also indicated that they have noted instances where some claims are inaccurately flagged for CAL due to claimants’ descriptions of their conditions in their claims. 
Officials we interviewed at 5 of 6 DDS offices said that they have seen claims inaccurately flagged for CAL when the claim text included words like “family history of ” though the CAL condition was not the claimant’s current asserted condition. Further, an official at one DDS office stated that some claims with “pancreatitis” or “pancreatic pain” have been incorrectly flagged for the CAL condition “pancreatic cancer.” The official noted that the software appeared to identify CAL conditions using words from the claim text out of order or without regard to specific phrases. In addition, officials at 4 of 6 DDS offices we spoke with said that they had processed claims in which they believe representatives or claimants coached by representatives added “please consider this case as CAL,” or certain key words, to the claim in an attempt to get the claim flagged as CAL. While some of the key terms may have been added appropriately, others may have been added with the intent of having the software flag a claim as CAL though the claimant was not asserting a CAL condition. For example, officials with one DDS office said that they had seen evidence that representatives had coached claimants to include key words, such as “liver” and “cancer” in their claims in the hopes of getting them flagged for CAL and allowed for benefits quickly, though the claimants may not have had “liver cancer,” which is a CAL condition. Although DDS officials’ observations about weaknesses in the software could assist SSA in improving the software’s accuracy in identifying CAL claims, SSA officials told us they have not asked DDS offices for input on the software, as they have not established a feedback loop to capture observations from DDS officials on weaknesses in the software. 
According to federal internal control standards, quality information about the agency’s operational processes should flow up the reporting lines from personnel to management to help management achieve the agency’s objectives. Absent a mechanism to gather feedback from DDS offices nationwide, the agency may be missing an opportunity to gather important information that could help improve the software. DDS offices play an important role in helping to ensure that claims are correctly flagged for CAL since the selection software’s effectiveness in identifying claims is impacted by the imprecise information submitted by some claimants. Further, ensuring claims are correctly flagged for CAL is important because the CAL flag reduces DDS processing time by about 10 weeks on average compared to the processing time for all claims, according to SSA data. SSA guidance directs DDS examiners to take steps to manually correct the CAL flag if they notice it has been incorrectly applied or omitted. For example, at one DDS office, examiners we interviewed said that a case asserting stage 4 cancer was not flagged for CAL by the software, but after reviewing the medical evidence, the examiners determined that the claimant had breast cancer—a CAL condition—and notified the supervisor to manually add a CAL flag to the claim. Similarly, these examiners described other claims that had been flagged for CAL by the software, but the medical evidence did not support the condition reported by the claimant, so they requested their supervisor remove the CAL flag. (See fig. 3.) SSA’s guidance includes a description of manual actions that can be taken by DDS staff to add, modify, remove, or reinstate a CAL flag on a claim; however, the guidance does not clarify when during the process these actions should take place, and we found that the point at which these changes occur during claim processing varies across DDS offices. 
For example, the information provided on removing a CAL flag includes instructions on the mechanical process for removing the flag based on the DDS examiner’s review of the medical evidence in the claimant’s file, but the guidance does not indicate how quickly this should be done after CAL status is clarified. SSA officials said that DDS officials have discretion to determine whether and when to remove a CAL flag, although SSA guidance advises DDS officials to remove the flag when it is not applicable. According to internal control standards, agencies should record transactions in an accurate and timely fashion, and communicate quality information throughout the agency. However, based on our discussions with the 6 selected DDS offices, we found that some examiners did not understand the importance of making timely changes to a CAL flag designation to ensure faster claim processing for the appropriate claims and accurate tracking of CAL claims. For example, examiners at one DDS office said that they do not always add or remove a CAL flag when they determine a claim is erroneously designated because it adds another step to claim processing and the step seems unnecessary. In addition, an examiner at another DDS office told us that she will delay removal of an erroneous CAL flag from a claim in order to provide faster service through claims processing. Without clear guidance on when to make manual changes, DDS examiners may continue to take actions that are not timely and may hinder expedited processing for appropriate claims and accurate tracking of CAL claims. In addition, our analysis of SSA’s data shows that DDS offices varied in their use of manual actions to add the CAL flag to claims. Specifically, we noted that over half of DDS offices nationwide that processed disability claims in fiscal year 2016 had one or zero claims with a manually added CAL designation in that year. 
In comparison, 5 DDS offices together accounted for over 50 percent of all claims with a manual addition. According to officials at one DDS office, one potential reason for such variance is that some examiners may be more knowledgeable about CAL than others. Specifically, staff said that less experienced examiners are at risk of not noticing claims in the general queue that should be flagged for CAL because they are less familiar with CAL conditions. Because SSA has not undertaken a study of its manual action procedures on such claims, it is unclear why this variance exists among DDS offices. Such variance could result in some claimants who assert a CAL condition not receiving expedited processing because their claims were not identified by the selection software or DDS examiners as CAL. By not analyzing these trends across DDS offices, SSA management is missing an opportunity to identify CAL conditions that more frequently require the manual addition of a CAL flag. Such an analysis could prompt consideration of ways to improve the selection software so the software flags these cases. Federal internal control standards state that agencies should establish and operate monitoring activities to monitor operations and evaluate results. SSA has various procedures in place, including the use of detailed CAL condition descriptions, to help ensure the accuracy and consistency of CAL claims decisions. SSA officials stated that the agency directs all DDS offices to follow the same procedures and to assign experienced examiners to process CAL claims. Further, to ensure the accuracy of CAL claims, as with non-CAL claims, both SSA and the DDS offices conduct quality assurance reviews of claim decisions. SSA also offers guidance and training. For CAL conditions in particular, because some of these are rare and seen infrequently by examiners, SSA developed impairment summaries—detailed descriptions of CAL conditions—to help ensure accurate and consistent claim decisions. 
As previously noted, the impairment summaries suggest specific medical evidence for the examiner to obtain to verify the claimant’s asserted condition. Additionally, the summaries describe the CAL condition; provide alternate names, information on diagnostic testing and coding, and treatment options and disease progression; and reference relevant medical listings under which the claim may be allowed. SSA officials said that the impairment summary presents clear, easy to access, and relevant information for examiners to consider when making a decision, although the decision to allow or deny the claim remains with the examiner. (See figure 4, as well as appendix III for examples.) Officials we interviewed at 6 selected DDS offices said that CAL impairment summaries are a key tool they consult when making determinations on CAL claims, and several described these as helping them to make decisions more efficiently. For example, they said whereas an examiner might typically conduct an online search to learn about an unfamiliar condition, the summaries are desk guides that are intended to provide a more authoritative source of information relevant for evaluating a claim. Although CAL impairment summaries are a key tool used by examiners to make a decision on whether to allow or deny a CAL claim, SSA has not regularly updated the impairment summaries. This is because SSA does not have a process for regularly updating all of these summaries. As a result, since the initiative’s inception in fiscal year 2009, about two-thirds of the summaries have not been updated. Specifically, since fiscal year 2009, SSA has updated impairment summaries for 74 of the current 225 CAL conditions, or about 33 percent. As of March 2017, we found that impairment summaries for 157 of the 225 conditions, or about 70 percent, are at least 3 years old, and among these, 69 conditions, or about 31 percent, were 5 or more years old (see fig. 5). 
SSA officials said that they update CAL impairment summaries in conjunction with agency updates to the medical listings used for all disability claims; however, this approach leaves the majority of CAL impairment summaries without updates. For example, changes to the neurological listings in September 2016 prompted SSA to update the 62 CAL condition impairment summaries that reference the neurological listings. However, according to SSA officials, none of these revisions were substantive changes to the impairment summaries, but rather updates to the relevant medical listing numbers. The updates did not pertain to the descriptions of the specific CAL conditions, such as information on diagnostic testing, treatment options, and disease progression, or the suggested medical evidence of record for confirming the condition’s diagnosis. In general, the medical listings are a broad guide that applies to all disability claims and does not provide the type of detailed information found in the impairment summaries. The listings are organized into 14 major body systems for adults and describe relevant conditions in each system, but they do not include an exhaustive list of all relevant conditions. For example, the broader category of neurodegenerative disorders of the central nervous system is included in the medical listings, and the CAL condition Huntington’s disease is mentioned in the listings as an example of this category of impairments. However, stiff person syndrome, another CAL condition that SSA also considers a neurodegenerative disorder of the central nervous system, is not named in the medical listings. According to our analysis of SSA data, we found that one-quarter of SSA’s CAL conditions directly align with specific medical listings. As such, three-quarters of the CAL conditions would not have their impairment summaries updated if SSA relies solely on this approach. 
SSA officials said that they also rely on advocates, as well as DDS and SSA staff, to bring needed updates to the impairment summaries to the agency’s attention. However, we found that since 2008, this approach has led to updates to few conditions. Advocates have provided updated information to SSA for four CAL conditions and two SSA staff have prompted updates to two CAL conditions. Furthermore, three of six advocates we interviewed were unaware that they could suggest updates to the impairment summaries to SSA, although they had relevant information or expertise to offer the agency. For example, officials from one of these advocacy organizations said that since the condition their advocacy organization represents was added to the CAL list, medical laboratories have started using a screening tool to rule out the presence of the condition, and they would have sought to have this information added to the condition’s impairment summary if they knew such updates were encouraged. Other external entities also may have relevant information that could assist SSA in updating the impairment summaries. For example, consistent with SSA’s approach for updating the medical listings, two experts who worked with the National Academies on efforts to improve SSA’s disability determination process suggested that SSA could use external medical experts to recommend updates to the impairment summaries for CAL conditions. Several advocates (4 of 6) and medical experts (2 of 3) we interviewed suggested that the impairment summaries should be updated every 1 to 3 years because medical research and advancements may have implications for disability determinations. For example, an official from an advocacy group representing aplastic anemia told us that SSA should reevaluate impairment summary information for this condition at least once every 3 years because scientific research for treating this disease is under continuous development. 
Officials from an advocacy group representing early-onset Alzheimer’s disease stated that it is useful to scan for updates in medical research for this condition once per year because there is much research underway and there have been changes in how the condition is diagnosed in recent years. Federal internal control standards also state that as changes in the agency’s environment occur, management should make necessary changes to the information requirements to address the modified risks. Given the pace of medical research for certain CAL conditions, in the absence of a systematic and regular mechanism to update CAL impairment summaries, SSA potentially faces the risk of making inaccurate and inconsistent disability determinations based on outdated information. SSA and DDS officials review some data to monitor CAL claims processing, but these efforts are limited in ensuring accuracy and consistency of decisions on CAL claims. SSA prepares a monthly report for SSA’s high-level executives that includes the total number of CAL claims, claims flagged for CAL by the selection software, and claims manually flagged by staff as CAL. This report does not provide information on the accuracy and consistency of CAL claims decisions. Officials from one SSA office we spoke with that receives the report said that they were not familiar with it, suggesting it may not be regularly reviewed. Further, while managers at the 6 DDS offices we selected use available data to monitor the performance of disability claims processing, they generally do not use these data to identify issues and challenges related to CAL claims decisions. For example, officials from 5 of the 6 DDS offices we interviewed said that they do not use available data to specifically monitor CAL claims.
Officials we spoke with in the 1 DDS office that uses available data to monitor CAL said that they review the timeliness of CAL claims processing to evaluate examiners’ individual performance, but they did not indicate that the data were used to identify trends or challenges related to CAL. SSA officials said that CAL has been viewed as low risk, and management has confidence in the process in part because of findings related to CAL claims processing accuracy. The agency conducted a study in 2009 that found that CAL claims had a higher accuracy rate than other types of disability claims. According to SSA officials, based on these results, SSA decided not to perform additional CAL studies. However, the 2009 study was conducted at a time when there were 50 CAL conditions, whereas there were 225 CAL conditions as of April 2017. In addition, as previously noted, SSA relies on the agency’s quality review sampling procedures to review the accuracy of disability determinations, including those for CAL claims, on an ongoing basis. Yet, the sample selected for quality review is intended to reflect all disability decisions, and therefore, review findings are not generalizable to all CAL claims. In our analysis of SSA’s available data, which SSA does not leverage to assess CAL, we found evidence of challenges that may affect the accurate and consistent adjudication of claims with certain CAL conditions. For example, our analysis of SSA’s data on denial rates for CAL conditions showed that certain conditions may be challenging to accurately and consistently adjudicate, and advocates we spoke to who represent these conditions explained why challenges may exist. 
While the vast majority of CAL claims are allowed (about 92 percent in fiscal year 2016), data we reviewed on claims adjudicated in that year showed 37 conditions for which claims asserting these conditions had a greater than 30 percent denial rate, including 17 conditions for which such claims had a greater than 50 percent denial rate. We spoke with officials from advocacy groups representing two of the asserted conditions with high denial rates and found that issues with identifying these conditions may lead to challenges with accurately and consistently adjudicating claims with those conditions. For example, in fiscal year 2016, 34 percent of claimants who alleged they had aplastic anemia were denied. Officials from an advocacy group for aplastic anemia sufferers told us that this CAL condition is frequently confused with anemia, a much more common and non-life-threatening condition that would be less likely to result in an allowance decision. They said aplastic anemia is a rare condition, affecting about 1,500 new patients per year, and is difficult to identify. In addition, 37 percent of claimants who alleged they had adult non-Hodgkin lymphoma were denied in fiscal year 2016. Officials from a lymphoma research and advocacy organization suggested that the CAL condition of adult non-Hodgkin lymphoma may be too broadly defined in SSA’s impairment summary. They said that there are 98 sub-types of adult non-Hodgkin lymphoma, so a disability examiner may not make an accurate disability decision without the appropriate contextual information about the different sub-types. Further, we found that denial rate data in combination with processing time data point to CAL conditions with claims that could be more challenging to adjudicate. Specifically, our review of SSA data showed a 21 percent denial rate for early-onset Alzheimer’s disease claims for fiscal year 2016.
In addition, in our case file review, we identified three CAL claims of early-onset Alzheimer’s disease that had longer than average processing times, in which DDS staff had requested additional psychological evaluations before making determinations on the claims. DDS officials we spoke to confirmed that challenges exist in adjudicating claims for this condition. Although SSA officials indicated that they select conditions for the CAL list for which a disability decision can be made on the basis of minimal objective medical evidence, officials we interviewed from 2 of 6 DDS offices said claims with early-onset Alzheimer’s disease can be challenging to adjudicate because the claimant’s medical evidence is not always sufficient to confirm the diagnosis. For example, a general practitioner may not have performed a detailed neuropsychological evaluation when the claimant was diagnosed. When the medical evidence in a claimant’s file is insufficient on its own to allow for a determination, DDS officials may request additional medical evaluations, which adds processing time. If sufficient evidence of a qualifying disability cannot be obtained, the claim will be denied. Through our analysis of SSA’s average CAL claim processing time by DDS office and our discussions with selected DDS offices, we also found that potential misunderstandings of CAL guidance may cause inconsistency in the CAL claims decision-making process. For example, officials from 1 of the 6 DDS offices did not expedite medical information requests for CAL claims even though SSA guidance instructs DDS offices to do so. As a result, a claimant at this office will likely experience a longer wait time for a disability decision than a claimant with the same CAL condition at a DDS office that follows SSA guidance. Specifically, this DDS office had an average processing time of nearly 6 weeks for CAL claims, compared to the national average of about 2 weeks for CAL claims in fiscal year 2016.
According to federal internal control standards, management should obtain relevant data based on identified information requirements and process these data into quality information that can be used to make informed decisions and evaluate the agency’s performance in achieving key objectives and addressing risks. SSA collects potentially useful and informative data on CAL, such as allowance and denial rates for claims by condition, as well as claims processing time data. Without regular analyses of available data to identify potential challenges to accurate and consistent CAL decision-making, SSA risks missing opportunities to address such challenges through guidance, training, or other methods. CAL is viewed positively by SSA and many stakeholders, and appears to be effectively expediting benefit processing for disability claims receiving this designation. However, because SSA has considered the initiative to be working well, it has monitored the initiative less actively, and as a result, there are weaknesses in CAL that likely result in unintended consequences. For example, because of SSA’s recent reliance on advocates to propose new CAL conditions, some conditions may have a better chance of being considered than other, equally deserving ones that are not proposed. Further, SSA has not provided clear guidance to advocates regarding information needed for the agency to consider a condition, effectively communicated the agency’s decisions to those who have proposed conditions, nor fully utilized research to identify new CAL conditions. As a result, SSA is currently missing opportunities to gather quality information to inform its selection of conditions. In addition, SSA lacks clear, consistent criteria for designating conditions as CAL, and as a result, may miss conditions that could qualify for CAL or add conditions for which claims are less likely to qualify as allowances or be expedited. 
Further, because conditions that are designated as CAL allow claimants with these conditions to receive priority over other claimants, limitations in the CAL condition selection process raise potential equity considerations. For those claimants who assert conditions that SSA has designated as CAL conditions, SSA’s processes for identifying their claims as CAL and ensuring they receive accurate and consistent decisions also have limitations that potentially lead to unintended consequences. Because the agency has missed opportunities to clarify guidance and use available information to improve both its selection software and manual process for identifying CAL claims, consistent access to expedited processing for these claims is hindered. As a result, some who should benefit from expedited CAL processing do not, and others who should not may be benefitting. Further, although SSA has provided DDS examiners with CAL impairment summaries, an important tool to assist them in making accurate and consistent decisions on CAL claims, the agency has not systematically and regularly updated these summaries, so examiners risk making inaccurate and inconsistent disability determinations based on outdated information. Finally, although SSA collects useful data on CAL claims, because the agency does not regularly analyze these data to identify potential challenges to accurate and consistent CAL decision-making, SSA is missing opportunities to address such challenges. In the absence of improvements to SSA’s implementation of CAL, some individuals with CAL conditions will continue to wait longer than necessary to receive approval for disability benefits, hindering SSA’s goal of quickly moving these claimants, who invariably qualify for benefits, through the process. We recommend that the Acting Commissioner of Social Security take the following actions to ensure expedited processing of disability claims through CAL is consistent and accurate: 1.
Develop a formal and systematic approach to gathering information to identify potential conditions for the CAL list, including by sharing information through SSA’s website on how to propose conditions for the list and by utilizing research that is directly applicable to identifying CAL conditions.
2. Develop formal procedures for consistently notifying those who propose conditions for the CAL list of the status of their proposals.
3. Develop and communicate internally and externally criteria for selecting conditions for the CAL list.
4. Take steps to obtain information that can help refine the selection software for CAL claims, for example by using management data, research, or DDS office feedback.
5. Clarify written policies and procedures regarding when manual addition and removal of CAL flags should occur on individual claims.
6. Assess the reasons why the uses of manual actions vary across DDS offices to ensure that they are being used appropriately.
7. Develop a schedule and a plan for updates to the CAL impairment summaries to ensure that information is medically up to date.
8. Develop a plan to regularly review and use available data to assess the accuracy and consistency of CAL decision-making.

We provided a draft of this report to SSA for review and comment. In its written comments, reproduced in appendix IV, SSA agreed with our eight recommendations. SSA officials stated that they are committed to looking for opportunities to strengthen CAL. In addition, SSA provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Acting Commissioner of Social Security, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-7215 or larink@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. We were asked to review several aspects of the Compassionate Allowance initiative (CAL). This report examines the extent to which the Social Security Administration (SSA) has procedures for (1) identifying conditions for the CAL list; (2) identifying claims for CAL processing; and (3) ensuring the accuracy and consistency of CAL decisions. To better understand CAL and address these objectives, we reviewed relevant federal laws and regulations, as well as SSA policies, procedures, training materials, and other guidance for CAL. We reviewed relevant information, such as transcripts, from SSA’s seven CAL public hearings held between 2007 and 2011, and information from SSA on the number and sources of conditions added to the CAL list over time, as well as the number and sources of updates to the CAL impairment summaries. We also reviewed 31 assessments of potential CAL conditions by SSA medical and psychological consultants, as well as medical policy analysts, including 15 for conditions that were added to the list and 16 for conditions that were not added to the list. Further, we reviewed prior relevant SSA, SSA Office of Inspector General, and GAO reports related to CAL and SSA’s medical listings. We assessed SSA’s actions against its internal guidance and GAO’s published standards for internal controls in the federal government. In addition, we analyzed management information data relevant to CAL; interviewed SSA and disability determination services (DDS) staff, advocates, and medical and disability experts with relevant research organizations; and reviewed a non-generalizable sample of disability claim files, as discussed more fully below.
We analyzed SSA data on CAL from SSA’s Management Information Disability database, including the number of allowance (approval) and denial decisions for disability claims identified with CAL conditions from fiscal year 2009, when CAL was implemented, through fiscal year 2016. We analyzed these data to determine the percentage of CAL claims with allowance decisions, and to identify conditions with high absolute numbers of claims allowed and denied, as well as those with high allowance and denial rates. We also analyzed SSA data on the average processing time of CAL claims and all claims overall at the initial determination level, as well as the average processing time of CAL claims for each of the 6 DDS offices we selected (see below for how we selected these offices). For processing time data, we focused on the length of time between when a claim is transferred to the DDS office and when a determination is made. To learn more about DDS office use of manual actions related to CAL, we also analyzed the number of CAL claims with manual additions, overall and by DDS office, from fiscal year 2016 and compared this to the total number of CAL claims received in that year. Manual actions include manual additions, removals, modifications and reinstatements of the CAL flag. For our analyses, we focused primarily on the number of manual additions of the CAL flag by DDS office. We assessed the reliability of these data by interviewing knowledgeable SSA officials and reviewing related documentation and internal controls. We also conducted a claim file review, described below, to further assess the reliability of CAL management data. We determined these data were sufficiently reliable for our purposes. To gather information on how SSA identifies conditions for the CAL list, identifies claims for CAL processing, and ensures the accuracy and consistency of CAL decisions, we conducted interviews with staff from SSA headquarters and select DDS offices. 
Specifically, we interviewed staff from SSA’s Office of Disability Policy; Office of Disability Determinations; Office of Research, Demonstration, and Employment Support; Office of Applications and Supplemental Security Income Systems; and the Office of Quality Review. In addition, we conducted interviews with DDS examiners, supervisors, and quality review staff from 6 DDS offices: Austin, Texas; Bismarck, North Dakota; Columbus, Ohio; Fairfax, Virginia; Raleigh, North Carolina; and Stockton, California. We selected these offices primarily based on SSA region (to ensure geographic dispersion), and to provide variation in the number of CAL claims receiving initial determinations and the proportion of CAL claims compared to the DDS’s overall caseload. We also aimed to include offices with varied numbers of claims that had a CAL flag manually added. The views of staff from these DDS offices are not generalizable to all DDS offices nationwide. To gain additional perspectives from DDS and SSA field office staff, we also interviewed officials from the National Association of Disability Examiners and the National Council of Social Security Management Associations, respectively. To gather additional information on implementation of CAL, we interviewed representatives from disease and disorder patient advocacy groups, selected based on their affiliation with asserted CAL conditions with high allowance or denial rates, as well as with conditions that SSA considered but did not add to the CAL list. Specifically, we interviewed representatives from the Aplastic Anemia and MDS International Foundation, Alzheimer's Association, Desmoid Tumor Research Foundation, Huntington’s Disease Society of America, Lymphoma Research Foundation, M-CM Network, National MPS Society, National Organization of Rare Disorders, and Parents and Researchers Interested in Smith-Magenis Syndrome. 
In total, eight of these nine organizations had suggested at least one condition to SSA for inclusion on the CAL list. Five of the nine organizations had one or more condition added by SSA, and four had one or more condition not added to the list. Another one of the organizations represented an asserted CAL condition with a high denial rate; SSA had consulted this organization for information about the condition in the past, although the group had not proposed the condition for the list. The views of the selected advocates we interviewed are not generalizable to all advocates who have interacted with SSA regarding CAL. We also interviewed medical experts from the National Institutes of Health, which has performed work to identify potential conditions for the CAL list and refinements for the selection software, under an inter-agency agreement with SSA. SSA has also contracted with the National Academies of Sciences, Engineering, and Medicine (National Academies) to recommend improvements to the disability determination process, among other things, and therefore we interviewed medical and disability experts who have served on relevant National Academies committees. We conducted a non-generalizable review of 74 claim files with fiscal year 2016 initial determinations to confirm our understanding of how claims are identified as CAL by the selection software and manually by DDS officials and to assess the reliability of CAL management data. Our sample included claims for Disability Insurance (DI) and Supplemental Security Income (SSI) benefits. We sampled claims from the following four categories:
1. claims in which the CAL flag was manually added to the claim;
2. claims in which the CAL flag was manually removed from the claim;
3. claims that involved the four asserted CAL conditions with the most denied claims and denial rates of 20 percent or greater in fiscal year 2016; and
4.
claims with the specific asserted CAL conditions that staff from the six selected DDS offices we interviewed said were challenging to adjudicate. To sample claim files, we worked with SSA staff to create custom data queries from SSA’s Management Information Disability database to extract claims that fit the criteria for each of the four categories above and used a random number generator to select claims from each category. Where possible, we also accessed information on claims through the Policy Feedback System, a web-based case management program that is updated daily with data from SSA’s Structured Data Repository. We only sampled claims that fit at least one of the above criteria. For example, if there were no claims at a specific DDS office that fit our criteria for category 1, we did not record any claims in our data collection instrument and made a note that no claims existed. The details of our claim sampling methodology for each category are described below. 1. Claims in which the CAL flag was manually added to the claim For category 1, we identified the three conditions that most frequently resulted in a CAL flag being manually added to a claim. For fiscal year 2016, these were lung cancer (metastases, recurrent, inoperable, unresectable), acute leukemia, and head and neck cancer (distant metastases, inoperable, unresectable). For each of the six selected DDS offices, we randomly sampled two claims, as available, that included any of these conditions and had the CAL flag manually added, and two claims, as available, that included any of these conditions but did not have a CAL flag. 2. Claims in which the CAL flag was manually removed from the claim We repeated the same methodology for category 2 but analyzed the three conditions that most frequently related to a CAL flag being manually removed from a claim, to better understand potential reasons why a CAL flag may be applied incorrectly and what information DDS officials might use to identify this.
For fiscal year 2016, these conditions were adult non-Hodgkin lymphoma, early-onset Alzheimer’s disease, and breast cancer (distant metastasis or recurrent). For each of the six selected DDS offices, we randomly sampled two claims, as available, that included any of these conditions and had the CAL flag manually removed, as well as two claims, as available, that had the CAL flag applied by the selection software and not removed. We also randomly sampled one claim from each DDS office that had one of these three conditions and was denied, as available. 3. Claims that involved the four asserted CAL conditions with the most denied claims and denial rates of 20 percent or greater in FY 2016 For category 3, we used SSA management information data to determine asserted CAL conditions with 1) the most denied claims adjudicated in FY 2016 and 2) high denial rates (20 percent or greater proportion of denied claims to denied and allowed claims). We selected the first four conditions that met these criteria: Adult Non-Hodgkin Lymphoma (37 percent), Aplastic Anemia (34 percent), Early-Onset Alzheimer's Disease (21 percent), and Idiopathic Pulmonary Fibrosis (21 percent). For each of these conditions, we randomly sampled one allowed claim and two denied claims that were adjudicated in any DDS office nationwide. 4. Claims with specific asserted CAL conditions that staff from selected DDS offices we interviewed said were challenging to adjudicate During our interviews with staff from selected DDS offices, we requested information on particular asserted CAL conditions that each office’s staff found difficult to adjudicate: Adult Non-Hodgkin Lymphoma (two offices identified this condition), Leukemia (one office), Head and Neck Cancer (two offices), and Early-Onset Alzheimer’s Disease (two offices).
For category 4, we queried claims adjudicated at each of the selected DDS offices specifically asserting the condition or conditions officials at that office noted as challenging, and then calculated the average number of days it took to adjudicate claims for that condition at the particular DDS office. We randomly sampled two claims, as available, that took longer than the average number of days to adjudicate for each condition. For each of the sampled claims, we reviewed summary information from the electronic claim file, including: whether the claimant was applying for SSI, DI, or both benefit programs; whether the claim was also flagged for the Quick Disability Determination (QDD) fast-track initiative; the decision (allowance or denial); claim filing date; decision date; age of the claimant; and the name, state, and region of the adjudicating DDS office. In addition, we reviewed whether a CAL flag was present, the CAL condition name (as applicable), and whether the CAL flag was manually added or removed. We also reviewed the alleged impairment(s) and made observations on the Medical Evidence of Record recorded in the claim file.
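The category-based random sampling described above can be sketched in code. This is only an illustration: the claim records, field names, office names, and probabilities below are hypothetical stand-ins, not SSA's actual data schema or figures.

```python
import random

# Hypothetical claim records; field names and values are illustrative only.
rng = random.Random(2016)
claims = [
    {"claim_id": i,
     "dds_office": rng.choice(["Austin", "Bismarck", "Columbus"]),
     "condition": rng.choice(["acute leukemia",
                              "lung cancer",
                              "head and neck cancer"]),
     "cal_flag_manually_added": rng.random() < 0.2}
    for i in range(500)
]

def sample_category(claims, predicate, n=2, seed=42):
    """Randomly sample up to n claims matching a category's criteria,
    mirroring the 'as available' sampling: if fewer than n claims
    match, all matching claims are returned."""
    matching = [c for c in claims if predicate(c)]
    picker = random.Random(seed)
    return picker.sample(matching, min(n, len(matching)))

# Category 1 example: claims at one office with a manually added CAL flag.
sampled = sample_category(
    claims,
    lambda c: c["dds_office"] == "Austin" and c["cal_flag_manually_added"])
print([c["claim_id"] for c in sampled])
```

The same helper would be reused per office and per category by swapping in a different predicate, e.g. flag manually removed, or adjudication time above the office's average.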
Appendix II: Social Security Administration’s 225 Compassionate Allowance Initiative (CAL) Conditions as of April 2017
Adrenal Cancer with distant metastases or inoperable, unresectable, or recurrent
Alexander Disease (ALX) - Neonatal and Infantile
Amyotrophic Lateral Sclerosis (ALS) – Adult
Anaplastic Adrenal Cancer - Adult with distant metastases or inoperable, unresectable or recurrent
Bladder Cancer with distant metastases or inoperable or unresectable
Breast Cancer with distant metastases or inoperable or unresectable
Carcinoma of Unknown Primary Site
Caudal Regression Syndrome – Types III and IV
Cerebro Oculo Facio Skeletal (COFS) Syndrome
Child Neuroblastoma with distant metastases or recurrent
Chronic Idiopathic Intestinal Pseudo Obstruction
Chronic Myelogenous Leukemia (CML) - Blast Phase
Cornelia de Lange Syndrome – Classic Form
Ependymoblastoma (Child Brain Cancer)
Frontotemporal Dementia (FTD), Pick's Disease - Type A - Adult
Galactosialidosis - Early and Late Infantile Types
Hemophagocytic Lymphohistiocytosis - Familial Type
Hypophosphatasia - Perinatal (Lethal) and Infantile Onset Types
Infantile Free Sialic Acid Storage Disease
Junctional Epidermolysis Bullosa - Lethal Type
Late Infantile Neuronal Ceroid Lipofuscinoses
Malignant Brain Stem Gliomas - Childhood
Menkes Disease - Classic or Infantile Onset Form
Merkel Cell Carcinoma - with metastases
Merosin Deficient Congenital Muscular Dystrophy
Metachromatic Leukodystrophy (MLD) - Late Infantile
Myoclonic Epilepsy with Ragged Red Fibers Syndrome
Neurodegeneration with Brain Iron Accumulation - Type 1 and Type 2
Oligodendroglioma Brain Cancer - Grade III
Ornithine Transcarbamylase (OTC) Deficiency
Orthochromatic Leukodystrophy with Pigmented Glia
Osteosarcoma, formerly known as Bone Cancer - with distant metastases, or inoperable or unresectable
Ovarian Cancer - with distant metastases or inoperable or unresectable
Pelizaeus-Merzbacher Disease - Classic Form
Peripheral Nerve Cancer - metastatic or recurrent
Primary Central Nervous System Lymphoma
Prostate Cancer – Hormone Refractory Disease - or with visceral metastases
Severe Combined Immunodeficiency - Childhood
Small Cell Cancer (Large Intestine, Prostate, or Thymus)
When a claim has been identified as asserting a CAL condition, the Social Security Administration’s (SSA) system that transfers claim information from the field office to the Disability Determination Services (DDS) office automatically links the claim to a detailed description of this condition, referred to as an impairment summary. This summary describes the CAL condition; provides alternate names, information on diagnostic testing and coding, and treatment options and disease progression; suggests medical evidence of record for confirming the diagnosis; and references relevant medical listings, as shown below in table 1 for Amyotrophic Lateral Sclerosis (ALS). SSA officials said examiners may use their judgment in evaluating a CAL claim, but that the impairment summary presents relevant information for them to consider when making a decision. For example, the description of ALS in the related impairment summary explains how the condition typically affects function and presents related research findings. SSA maintains impairment summaries for each of the 225 CAL conditions. (For a complete list of CAL conditions, see appendix II.) In addition to the contact named above, Rachel Frisk (Assistant Director), Kristen Jones (Analyst in Charge), Randy De Leon, and Michelle Loutoo Wilson made key contributions to this report. Additional contributors include Susan Aschoff, James Bennett, Sherwin Chapman, Alexander Galuten, Sheila McCoy, Monique Nasrallah, Monica Savoy, and Kelly Snow.
SSA in October 2008 implemented CAL to fast-track individuals with certain conditions through the disability determination process by prioritizing their disability benefit claims. Since then, SSA has expanded its list of CAL conditions from 50 to 225. GAO was asked to review SSA's implementation of CAL. This report examines the extent to which SSA has procedures for (1) designating CAL conditions, (2) identifying claims for CAL processing, and (3) ensuring the accuracy and consistency of CAL decisions. GAO reviewed relevant federal laws, regulations, and guidance; analyzed SSA data on disability decisions for CAL claims from fiscal years 2009 through 2016 and on CAL claims with manual actions in fiscal year 2016; reviewed a nongeneralizable sample of 74 claim files with fiscal year 2016 initial determinations; and interviewed medical experts, patient advocates, and SSA officials in headquarters and six DDS offices selected for geographic dispersion and varied CAL caseloads. The Social Security Administration (SSA) does not have a formal or systematic approach for designating certain medical conditions for the Compassionate Allowance initiative (CAL). CAL was established in 2008 to fast-track, through the disability determination process, claimants who are likely to be approved because they have certain eligible medical conditions. In lieu of a formal process for identifying conditions for the CAL list, SSA has in recent years relied on advocates for individuals with certain diseases and disorders to bring conditions to its attention. However, by relying on advocates, SSA may overlook disabling conditions affecting individuals who have no advocates, potentially resulting in individuals with these conditions not receiving expedited processing. Further, SSA does not have clear, consistent criteria for designating conditions for potential CAL inclusion, which is inconsistent with federal internal control standards.
As a result, external stakeholders lack key information about how to recommend conditions for inclusion on the CAL list. To identify disability claims for expedited CAL processing, SSA primarily relies on software that searches for key words in claims. However, if claimants include incorrect or misspelled information in their claims, the software may fail to flag all claimants with CAL conditions or may flag claimants who should not be flagged for CAL processing. SSA has guidance for disability determination services (DDS) staff on how to manually correct errors made by the software, but the guidance does not address when such corrections should occur (see figure). Without clear guidance on when to make manual changes, DDS examiners may continue to take actions that are not timely and may hinder expedited processing for appropriate claims; this can also impair the accurate tracking of CAL claims. SSA has taken some steps to ensure the accuracy and consistency of decisions on CAL claims, including developing detailed descriptions of CAL conditions, known as impairment summaries. These summaries help examiners make decisions about whether to allow or deny a claim. However, nearly one-third of the summaries are 5 or more years old. Experts and advocates GAO spoke with suggested that summaries should be updated every 1 to 3 years. This leaves SSA at risk of making disability determinations using medically outdated information. In addition, GAO found that SSA does not leverage data it collects to assess the accuracy and consistency of CAL adjudication decisions. Without regular analyses of available data, SSA is missing an opportunity to ensure the accuracy and consistency of CAL decision-making.
GAO is making eight recommendations, including that SSA develop a process to systematically gather information on potential CAL conditions, communicate criteria for designating CAL conditions, clarify guidance for manual corrections on CAL claims, update CAL impairment summaries, and use available data to ensure accurate, consistent decision-making. SSA agreed with GAO's recommendations.
The purpose of the Stafford Act is to provide an orderly and continuing means of assistance by the federal government to state and local governments in carrying out their responsibilities to alleviate the suffering and damage which results from disasters. The Stafford Act originally was enacted in 1974 and amended in 1988, 1993, and 2000. The Disaster Mitigation Act of 2000 established the IHP by combining two previous disaster grant programs—the Temporary Housing Assistance and Individual Family Grant programs. Under the IHP, these programs were replaced by Housing Assistance and Other Needs Assistance. Looking specifically at the Housing Assistance component of the IHP, section 408 of the Stafford Act authorizes five types of assistance, of which four are relevant to disaster victims of Hurricanes Katrina and Rita: (1) Financial assistance to rent temporary housing. FEMA may provide financial assistance to individuals or households to rent alternative housing accommodations, existing rental units, manufactured housing, recreational vehicles, or other readily fabricated dwellings. (2) “Direct” temporary housing assistance. FEMA may provide temporary housing units (e.g., mobile homes and travel trailers), acquired by purchase or lease, directly to disaster victims, who, because of a lack of available housing resources, would be unable to make use of financial assistance to rent alternate housing accommodations. In other words, direct assistance would be available in situations where rental accommodations are not available. By statute, direct assistance is limited to an 18-month period, after which FEMA may charge fair market rent for the housing unless it extends the 18-month free-of-charge period due to extraordinary circumstances. (3) Repair assistance. Under this authority, FEMA may provide financial assistance for the repair of owner-occupied private residences, utilities, and residential infrastructure damaged by a major disaster.
However, the maximum amount of repair assistance provided to a household is limited to $5,000, adjusted annually to reflect changes in the CPI. (4) Replacement assistance. This form of housing assistance authorizes funding to replace owner-occupied private residences. The amount of replacement assistance FEMA may provide to a household is limited to $10,000, adjusted annually to reflect changes in the CPI. For a victim to receive this assistance, there must have been at least $10,000 of damage to the dwelling. The victim may use the assistance toward replacement housing costs. As of September 25, 2006, proposed legislation was pending before Congress that would, among other things, eliminate the cap on home repair and replacement assistance. FEMA may provide ONA grant funding for public transportation expenses, medical and dental expenses, and funeral and burial expenses. ONA grant funding may also be available to replace personal property, repair and replace vehicles, and reimburse moving and storage expenses under certain circumstances. The maximum financial amount of housing and other needs assistance that an individual or household may receive is capped at $25,000, adjusted annually to reflect changes in the Consumer Price Index. Eligibility for IHP assistance is determined when an individual or household applies with FEMA and is based on the amount of property damage resulting from the disaster. For disaster victims with financial resources, SBA’s Disaster Loan Program is intended to be a primary resource available to aid in their recovery. FEMA refers disaster victims who apply for assistance and meet established income levels to SBA. Applicants who are denied loan assistance by SBA or have remaining unmet needs are sent back to FEMA for an assistance determination of their eligibility for certain types of ONA grant funding.
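The annual CPI adjustment of these caps amounts to a proportional scaling of the statutory dollar limit. The sketch below is illustrative only: the CPI index values are hypothetical, and FEMA's actual adjustment methodology is set by regulation rather than by this simplified formula.

```python
# Illustrative sketch: scale an assistance cap by the change in the CPI.
# The index values used here are hypothetical, not actual published figures.
def adjust_cap(base_cap, cpi_base, cpi_current):
    """Scale a statutory cap proportionally to the change in the CPI,
    rounded to the nearest cent."""
    return round(base_cap * cpi_current / cpi_base, 2)

# e.g., a $25,000 cap with a hypothetical index rising from 185.0 to 190.6:
print(adjust_cap(25_000, 185.0, 190.6))  # 25756.76
```

The same function would apply to the $5,000 repair and $10,000 replacement limits by changing the first argument.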
(We reported on SBA’s efforts to provide disaster loans in response to the 2005 hurricanes in July 2006 and expect to issue another report on SBA’s response later this year.) Table 1 provides an overview of IHP benefits and identifies the ONA benefits that are subject to SBA disaster loan eligibility. FEMA manages the IHP primarily through a decentralized structure of permanent and temporary field offices staffed mostly by contract and temporary employees. The offices include permanent locations at the FEMA Recovery Division in FEMA Headquarters, regional offices, National Processing Service Centers, and temporary locations at Joint Field Offices, Area Field Offices, and Disaster Recovery Centers. Once the President declares a major disaster that is eligible for federal assistance, victims in declared counties must first apply for it with FEMA, by phone, over the Internet, or in person at a disaster recovery center. Figure 1 shows disaster victims waiting to speak with temporary disaster staff in October 2005 at a Disaster Recovery Center in St. Bernard Parish, Louisiana. Once a FEMA representative records personal information from a disaster application and provides the applicant with a FEMA application number, FEMA’s National Emergency Management Information System automatically determines potential eligibility for designated categories of assistance. FEMA refers disaster victims who apply for moving and storage, personal property repair or replacement, and/or vehicle repair or replacement related grant funding assistance and meet established income levels to SBA. Applicants who are denied loan assistance by SBA or have remaining unmet needs are sent back to FEMA for an assistance determination of their eligibility for certain types of ONA grant funding. 
To confirm that the home and personal property sustained damages as reported in a disaster assistance application, FEMA is to meet with disaster victims at their homes to conduct individual inspections to verify ownership, occupancy, and damage. Figure 2 shows a FEMA inspection notice on a home in St. Bernard Parish damaged by Hurricane Katrina. Based on the results of the inspection and determinations made by staff at the National Processing Service Centers, FEMA approves or denies housing and/or other needs assistance. (Applicants may be eligible for either or both types of assistance.) If the applicant qualifies for a grant, FEMA sends the applicant a check by mail or deposits the grant funds in the applicant’s bank account. If an applicant is denied, he or she may appeal the decision by contacting a service center and providing additional information or clarification. Recipients of IHP assistance must recertify their continuing need for assistance every 30 to 90 days, depending on the type of assistance. Additional details about federal disaster assistance and IHP—including the types of and eligibility for benefits, how the program is structured and implemented, and the process for applying for and receiving program assistance—are provided in appendix III. Because of the magnitude of the hurricanes and the extent of the resulting damage, the total number of applications for, and benefits provided through, IHP in 2005 for Hurricanes Katrina and Rita far exceeded the combined total of the 2 years since the program was established in 2003. Two categories of assistance—temporary housing assistance and expedited assistance—accounted for much of the significant increase in IHP expenditures for Hurricanes Katrina and Rita as compared to prior years.
FEMA also provided a much greater amount of assistance for Hurricanes Katrina and Rita than in prior years for specific types of ONA benefits that are primarily provided only after applicants apply for and are denied an SBA disaster loan, indicating that lower-income applicants may have made up a significant portion of total applicants. While the approval rate for housing assistance was greater than in previous years, the approval rate for ONA was notably lower for Hurricanes Katrina and Rita than in the 2 previous hurricane seasons: 41 percent, as compared to 65 percent in 2003 and 50 percent in 2004. Accordingly, the percentage of applicants FEMA identified as ineligible for housing assistance was lower, while the percentage of ineligible applicants for ONA was higher for Hurricanes Katrina and Rita (44 percent) than for named hurricanes that came ashore in 2004 (31 percent). To establish a basis for eligibility, FEMA had to conduct a much greater number of inspections, and accordingly the related costs of those inspections were greater for Hurricanes Katrina and Rita than in 2003 and 2004 combined. Although FEMA referred more applicants to SBA for disaster loans for Hurricanes Katrina and Rita than in the prior 2 years, SBA returned about the same percentage of disaster loan applicants to FEMA for ONA consideration. FEMA received far more IHP applications, approved more requests for Housing and Other Needs Assistance, and awarded more grant money in 2005-2006 for Hurricanes Katrina and Rita than for all the hurricanes that resulted in a disaster declaration in 2004 (Ivan, Charley, Frances, and Jeanne) and 2003 (Isabel and Claudette) combined. Table 2 shows the number of applicants approved for both categories of IHP assistance and the grant award totals, as of August 2006, for Hurricanes Katrina and Rita and named hurricanes that came ashore in the United States in 2004.
The table also shows the number of applications received by FEMA as of September 2006. The number of applicants and both categories of IHP assistance for the 2003 named hurricanes were provided by FEMA as of April 2006. FEMA data as of August 2006 show that two categories of assistance—temporary housing assistance and expedited assistance—accounted for much of the significant increase in IHP expenditures for Hurricanes Katrina and Rita as compared to prior years, as shown in figure 3. FEMA specifically established a new transitional housing assistance allowance, as part of temporary housing assistance, to advance to Katrina disaster victims an amount equal to the initial 3 months of rental payments based on the national average rent for a 2-bedroom apartment. Expedited assistance is a pre-inspection disbursement of funds to disaster victims based on specific criteria such as the severity of the damage. (See glossary for definitions of all housing and other needs assistance categories.) Transitional housing assistance, which was authorized exclusively for Hurricane Katrina, was estimated at about $1.3 billion, while expedited assistance for both Hurricanes Katrina and Rita totaled about $2.3 billion. By comparison, about $59 million was approved for hurricanes in 2004, while no expedited assistance was approved for hurricanes in 2003. In terms of ONA, figure 4 shows that FEMA provided a much greater amount of income-dependent assistance for Hurricanes Katrina and Rita in 2005 than in prior years. Income-dependent assistance requires that eligible applicants initially apply for and be denied assistance from the SBA Disaster Loan Program and includes expenses for personal property, moving and storage, and vehicle repair and replacement. For Hurricanes Katrina and Rita, personal property assistance accounted for the majority of the income-dependent assistance, about $1.8 billion.
In comparison, for the hurricanes in 2003 and 2004, the combined total of income-dependent assistance approved was less than $495 million. Lower-income applicants may have made up a significant portion of those receiving ONA benefits because income-dependent assistance in the form of personal property assistance accounted for nearly 87 percent of the ONA approved for victims of Hurricanes Katrina and Rita. As of August 2006, FEMA data show that for Hurricanes Katrina and Rita nearly 2 million applicants applied for Housing Assistance while 1.3 million applicants requested ONA. About 67 percent of applicants for Housing Assistance were approved, versus an estimated 41 percent of applicants approved for ONA. Although more applicants were approved for ONA for Hurricanes Katrina and Rita, the percentage of approved applicants was lower than for hurricanes in the prior 2 years, when approval rates exceeded 50 percent in each year. Accordingly, the percentage of applicants FEMA identified as ineligible for housing assistance was lower, while the percentage of ineligible applicants for ONA was higher for Hurricanes Katrina and Rita (44 percent) than for named hurricanes that came ashore in 2004 (31 percent). Table 3 shows, by IHP assistance category, the number and percentage of applicants FEMA considered for IHP assistance as of August 2006 for hurricanes in 2004 and Hurricanes Katrina and Rita, and for hurricanes in 2003 as of April 2006. In addition, the table shows the number and percent of approved, ineligible, and pending IHP applicants. It also shows the number and percent of applicants who appealed FEMA decisions regarding their IHP assistance for Hurricanes Katrina and Rita and named hurricanes that came ashore in 2003 and 2004. The table does not show the number of IHP applicants who withdrew their application during the evaluation process.
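The approval rates compared above are simple proportions of approved applicants to total applicants for a given assistance category. A minimal sketch, using illustrative counts rather than FEMA's actual figures:

```python
# Approval rate as a share of applicants; counts below are illustrative only.
def approval_rate(approved, applicants):
    """Return the fraction of applicants who were approved."""
    return approved / applicants

# e.g., a hypothetical 1.34 million approvals out of 2 million applicants:
rate = approval_rate(1_340_000, 2_000_000)
print(f"{rate:.0%}")  # 67%
```

The ineligibility percentages cited in the report are computed the same way, with the count of applicants found ineligible in the numerator.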
In order to provide the unprecedented level of disaster assistance, FEMA had to significantly increase its number of home inspections. As of August 2006, data reported by FEMA indicate that after Hurricanes Katrina and Rita, about 1.9 million inspections were completed at a cost of approximately $179.6 million, or about $92 per inspection. For the hurricanes in 2003 and 2004, FEMA completed about 108,000 and 1.0 million inspections at a cost of about $8.0 million and $70.3 million, or about $74 and $75 per inspection, respectively. In August 2006, FEMA reported that the average time required for completing inspections—the time between the application for assistance and submission of an inspection report—after Hurricanes Katrina and Rita was about 33 days and 25 days, respectively. The average time for completing inspections for the hurricanes in 2003 was 1 to 2 days, and in 2004 the average was 4 to 5 days. A FEMA official stated that the goal for conducting inspections is a 3-day turnaround time. Figure 5 compares the number of inspections completed by contractors and the cost of the inspections for the named hurricanes in our review. According to a FEMA official, the following factors had an impact on the higher per-inspection costs for Hurricanes Katrina and Rita: Both of FEMA’s inspection contractors had automatic annual increases on a per-inspection basis built into their contracts. Automatic annual increases from 2004 to 2005 for maintaining on-call availability were also included in the contracts. For 2005, FEMA added a new requirement for inspectors to photograph disaster damage, which added to the cost per inspection. The contractors increased the per-inspection cost in December 2005 when FEMA extended the contract beyond the initial 5-year period of performance. For Hurricanes Katrina and Rita, FEMA referred about 2.5 million applicants to SBA for assistance through its Disaster Loan Program.
For hurricanes in 2003 and 2004, FEMA referred fewer applicants—about 107,000 and 1.3 million applicants, respectively—to SBA. As of August 2006, data reported by FEMA show that nearly 10 percent of applicants were sent back to FEMA from SBA for ONA consideration. In comparison, for hurricanes in 2003 and 2004, SBA sent back to FEMA a comparable percentage of applicants—about 12 percent and 10 percent, respectively—which indicates that SBA’s loan denial rate was relatively consistent even though more applicants were referred for Hurricanes Katrina and Rita than in the prior 2 years. Figure 6 shows the number of applicants referred to SBA for loan assistance and the number of applicants SBA sent back to FEMA for ONA in 2003, 2004, and for Hurricanes Katrina and Rita in 2005. Faced with unprecedented challenges in the aftermath of Hurricanes Katrina and Rita, FEMA devised new approaches and adapted pre-existing ones to administer the IHP. However, our work and six federal reports we reviewed pointed to ongoing management challenges that hindered IHP implementation. These management challenges included a lack of planning and trained staff, as well as programmatic restrictions on the uses of IHP funds that limited FEMA’s flexibility in using IHP assistance in the most efficient and effective manner. In May 2006, FEMA announced initiatives to address the problems and recommendations cited in the various reports. However, it is too early to assess the success of these initiatives. Hurricanes Katrina and Rita posed numerous unprecedented challenges for FEMA’s administration of the IHP. These challenges arose from the sheer number of victims seeking assistance, including many who had lost key financial, residential, and other documentation in the storms, and the dispersal of these victims throughout the United States.
As a result, FEMA was also challenged to conduct an unprecedented number of housing inspections, often with limited or no access to individuals or, in many cases, to the affected homes. To provide benefits quickly to eligible victims, communicate with about 2 million applicants scattered across the country, and conduct inspections, FEMA developed a number of new approaches, as summarized in table 4. In addition, FEMA adapted several of its traditional approaches to respond to Hurricanes Katrina and Rita, according to FEMA, as summarized in table 5. Each of the assessments of the federal government’s response to Hurricanes Katrina and Rita we reviewed identified problems in FEMA’s implementation of IHP during and after the storms. Our review and our assessment of these reports showed that the agency’s efforts to implement the IHP were hindered by a lack of planning and trained staff, as well as by program limitations, despite its new and revised approaches for implementing the program. A list of these assessments is provided in table 7. In addition, a summary of Katrina- and Rita-related IHP issues addressed in these reports is provided in appendix IV. Regarding planning, the DHS Inspector General reported in March 2006 that FEMA lacked final plans that specifically addressed the types of challenges the agency could be expected to face in catastrophic circumstances. For example, because FEMA was unable to immediately implement IHP assistance to provide funds to transition victims from short-term lodging, including shelters, hotels, and motels, to longer-term housing alternatives such as mobile homes or apartments, FEMA officials used Public Assistance funds. Normally, public assistance is provided (under section 403 of the Stafford Act) only for immediate emergency sheltering efforts to get assistance to individuals and households quickly.
Under normal circumstances, IHP funds provided under Section 408 of the Act are intended to accommodate the longer-term housing needs of evacuees up to 18 months. FEMA officials said that many applicants would have waited months to receive their initial assistance if FEMA had followed normal IHP processes and procedures under Section 408 and had to wait until inspections were completed and IHP information and assistance could be communicated to disaster victims who were dispersed to all 50 states. However, this use of Public Assistance funds was problematic, according to the DHS Inspector General’s report. Because application for assistance is not a requirement for the provision of Public Assistance under section 403 of the Stafford Act, FEMA did not know whether disaster victims were actually eligible for assistance as a direct result of the disaster. This increased the potential for duplication with other assistance programs since there was no internal mechanism to determine whether an evacuee had received assistance from the IHP when interim housing may have already been provided. The interim housing assistance funded under section 403 was only phased out after FEMA was able to identify that an evacuee had received IHP funds. FEMA was aware it needed to plan for large disasters but had problems getting necessary funding, according to the Senate Homeland Security and Governmental Affairs Committee’s Katrina Report. FEMA requests for $100 million for catastrophic planning and an additional $20 million for catastrophic housing planning in fiscal year 2004 and fiscal year 2005, respectively, were denied by DHS. Our review of FEMA’s implementation of IHP showed that FEMA’s reactive approach to planning and implementing the IHP on a disaster-by-disaster basis is inadequate to deal with the short-term and long-term needs of affected communities, particularly for catastrophic disasters when the agency’s resources and staff are strained. 
For example, FEMA failed to pre-identify workable sites and land and take advantage of available housing units from other federal agencies, according to a February 2006 White House report. We have ongoing work focusing on the federal role in providing housing assistance in response to Hurricanes Katrina and Rita. In terms of trained staff, FEMA lacked the surge capacity to effectively manage the disaster assistance process. Specifically, according to the March 2006 DHS Inspector General report, additional trained staff were needed to (1) provide initial application services at Disaster Recovery and Call/Processing Centers, (2) process applications and respond to questions at the National Processing Service Centers, and (3) conduct inspections. First, according to the DHS Inspector General, disaster victims experienced delays when they contacted Call Centers or were not able to speak with anyone. Second, disaster victims experienced delays in obtaining their eligibility determinations, according to FEMA officials responsible for managing the IHP. Third, inspections were delayed, in part, because FEMA lacked enough contract inspectors to perform them, according to FEMA. Our analysis found, for example, that inspections for Katrina and Rita took, on average, two to five times longer than those for the named hurricanes of 2004. FEMA uses inspectors with a construction, real estate, or appraisal background, but such a background is not required, according to a FEMA Inspection Services Manager. FEMA requires that each inspector be trained on FEMA standards and policies regarding program eligibility and that new inspectors undergo background checks. In most conventional disasters, experienced inspectors are to accompany new inspectors in the field to ensure that they are meeting FEMA standards before they are allowed to complete inspections on their own.
We have work underway assessing trends in FEMA’s resources, including staffing, and their impact on FEMA’s capacity to conduct operations, and we plan to report on FEMA’s workforce management efforts later this year. According to the March 2006 DHS Inspector General report, FEMA was not able to dedicate its full staffing strength to Hurricane Katrina for three primary reasons. First, at the time of the disaster, FEMA had personnel assigned to 38 other disasters not related to Hurricane Katrina. For example, Hurricane Ophelia in the Carolinas, Hurricane Rita in the Gulf Coast region, and flooding in the Northeast were declared disasters and required FEMA resources. Second, an average of 30 percent of FEMA Disaster Assistance Employees reported they were unavailable to respond to Katrina or any other disaster during the August 24, 2005 – September 30, 2005 time frame. (Disaster Assistance Employees may be unavailable for reasons such as health or family concerns.) Third, FEMA officials said that although FEMA was authorized 2,445 staff in August 2005, 389 positions were vacant, and many of these were key leadership positions. The DHS Inspector General’s report included recommendations that FEMA (1) develop a more comprehensive program to recruit, train, and retain local hires for use in augmenting FEMA’s Disaster Assistance Employees and permanent staff, (2) provide training to additional NPSC staff and contractors to enhance FEMA’s capability to perform evacuee assistance and case management activities, and (3) develop a disaster workforce plan for permanent, temporary, and reserve staff that is scalable to events regardless of cause, size, or complexity. FEMA concurred with the recommendations. Throughout our review, FEMA officials cited their concerns regarding the lack of agency and contractor staffing resources needed to effectively implement the program during a catastrophic event. Concerns regarding training and staffing for disaster response management are long-standing.
In 2003, in our report on major performance and accountability challenges for FEMA, we noted that FEMA faced challenges in enhancing its disaster assistance training and resource planning. According to the report, FEMA developed a program in 1999 for evaluating the knowledge, skills, and abilities of its staff—both permanent and temporary—who are deployed to respond to a disaster. FEMA expected the program would ensure its employees would have basic qualifications to perform their jobs, but, according to FEMA officials, the program was not implemented because of budget constraints. We also reported that 48 percent of FEMA’s workforce would be eligible to retire in the next 5 years and that this would pose a challenge for having staff with the skills needed to perform core functions. Finally, FEMA officials cited legislative and regulatory limitations that restricted FEMA’s flexibility in implementing the IHP in the aftermath of Hurricane Katrina. For example, FEMA’s Federal Coordinating Officer for Louisiana cited the statutory program’s maximum of $5,000 for home repair as one limitation, noting that if the $5,000 is not sufficient to fix the home, then FEMA may have to provide a trailer for temporary housing. He testified that manufactured housing is not cost-effective and can cost up to $90,000 to $100,000 per mobile home for a group site (including total costs for site preparation, hauling and installation, and cost of home). He suggested that in some situations, if FEMA were able to give disaster victims the maximum amount of IHP financial assistance, it would be more cost-effective because it would allow many of these families to find permanent housing. However, the Acting Deputy Director for FEMA’s Recovery Division told us that FEMA uses manufactured housing only as a last resort, and in the post-Katrina and Rita environment, housing and the infrastructure that supports the community were destroyed.
As a result, FEMA did not have any alternative other than to provide manufactured housing. FEMA officials were unable to use a large supply of federally controlled housing units that could have been made available for occupancy by disaster victims with only minor repairs because reimbursement for repairs to existing available housing units is not authorized under the current program regulations, according to the White House report on Hurricane Katrina. Consequently, FEMA had to provide alternative temporary housing, such as trailers and other manufactured housing units, at considerably greater cost, while leaving other potentially available housing vacant. A bill introduced in the House of Representatives on May 16, 2006, the Natural Disaster Housing Reform Act of 2006, would provide the federal government with more flexibility in the provision of short- and long-term housing after a major disaster. For example, the bill would allow the President to offer disaster victims manufactured modular housing under the IHP if it could be provided at a lower cost than other readily fabricated dwellings. It would also extend repair assistance under the IHP, currently available only for owner-occupied residences, so that renters could repair existing rental units to make them habitable as alternate housing accommodations. The bill also proposes that the President may provide financial assistance or direct assistance to individuals or households to construct permanent or semi-permanent housing in any area in which the President declared a major disaster or emergency in connection with Hurricane Katrina of 2005, during the period beginning on August 28, 2005, and ending on December 31, 2007. Under the IHP, permanent housing construction is only available for disaster victims who reside in insular areas or other remote locations.
In an effort to address the problems and recommendations cited in the various reports, FEMA announced plans on May 24, 2006, to implement a number of new approaches to enhance logistics, emergency communications, situational awareness, housing, and victim management. According to FEMA, the improvements related to the IHP include plans to increase the number of trained staff and to develop new and revised policies and procedures. However, at the time of our review, many of these initiatives were in the planning or early implementation stage. As a result, it was too early to assess their potential impact on future program implementation. Specifically, FEMA reported plans to: Hire a training coordinator to develop a more comprehensive training program to prepare existing and new personnel for Disaster Recovery Center assignments. According to FEMA’s Acting Deputy Director for the Recovery Division, FEMA was still searching for qualified applicants for the training coordinator position as of August 2006. Train 3,000 disaster “generalist” surge cadre employees for ready deployment during the height of the 2006 hurricane season and increase its capacity to deploy and communicate with the increased number of disaster employees. According to FEMA, these surge employees are to form a “generalist” pool of disaster workers and be trained in a number of basic functions cutting across traditional program areas, including Community Relations, Individual Assistance, Public Assistance, and Logistics. As of August 2006, FEMA said approximately 1,836 employees had completed the training. Develop greater contract and contingency surge capabilities to expand application intake capacity to up to 200,000 applications per day (during the weeks following Hurricanes Katrina and Rita, FEMA recorded more than 100,000 applications a day) and expand its Internet-based application capability by improving accessibility to reduce application wait times and FEMA Helpline information delays following a major disaster.
According to FEMA officials, the objective of expanding these capabilities is to have private-sector contracts in place and resources ready to handle calls within 48 hours of a disaster declaration. In the past, FEMA had to augment its application intake surge capabilities each hurricane season, especially during 2004 and 2005, a step usually taken under urgent and compelling circumstances through emergency contracts and by using Internal Revenue Service personnel. FEMA plans to award the contract for this initiative in 2007 and, in the interim period, plans to continue to utilize IRS personnel and redirect existing FEMA staff to augment application intake capabilities. Implement a pilot program in the 2006 hurricane season to use Mobile Registration Intake Centers that can be deployed to emergency shelter locations or impacted neighborhoods without power or phone service and provide on-site capability to quickly apply for FEMA assistance. These units would be capable of providing the public access to the FEMA disaster assistance program via phone and the Internet. FEMA currently has five vehicles, each equipped with 20 telephones and 20 personal computers. As of August 2006, FEMA was in the planning stage of upgrading each vehicle’s capacity to support 40 telephones and 40 personal computers and can expand this effort by using tents with tables and equipment set up near the vehicles. FEMA’s intention is to evaluate the pilot program at the end of the 2006 hurricane season to determine whether to expand this capability. Increase contractor staffing capacity for housing inspections from 7,500 to 20,000 inspections per day per contractor. FEMA anticipates that this added capacity will increase the speed and accuracy of home inspections. FEMA intends to implement the related requirements with the award of its new inspection contracts, tentatively scheduled for the end of December 2006.
Clarify program policies on the appropriate use and authorization of emergency sheltering funds (Stafford Act, section 403 assistance) and individual housing assistance funds (Stafford Act, section 408 assistance) for the disaster victims. As part of this initiative, FEMA plans to have a draft policy in place for issuing authorization codes to evacuees for lodging and hotels for the 2006 hurricane season. In addition, FEMA plans to have a policy for Expedited Assistance that defines the conditions that must be met before initiating the program. FEMA issued a strategy for mass sheltering and housing assistance on July 24, 2006, and plans to develop more detailed policies and procedures to implement the strategy. As we recently reported, one of the major challenges FEMA faced after Hurricanes Katrina and Rita was balancing the need to quickly deliver benefits and services to needy and eligible victims while minimizing occurrences of fraud and abuse. As we testified in June 2006, an estimated 16 percent, or approximately $1 billion, in FEMA IHP payments were improper and potentially fraudulent due to invalid application data. (A copy of our testimony is provided in app. IV.) Additionally, we found that FEMA made improper or potentially fraudulent IHP payments to applications containing names and Social Security Numbers of individuals who were incarcerated at the time of disaster, and paid hotel room charges for applicants who were also receiving rental assistance concurrently. We also determined that FEMA had little accountability over debit card distribution and lacked proper controls over debit card usage. An estimated 16 percent of payments totaling approximately $1 billion were improper and potentially fraudulent due to invalid applications. The 95-percent confidence interval surrounding the estimate of 16 percent ranges from 12 percent to 21 percent. 
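Confidence intervals like those reported here come from projecting a failure rate observed in a statistical sample onto the full population of payments. The following is a purely illustrative sketch, not GAO's actual methodology (which would reflect the real sample design); it uses a simple-random-sample normal approximation, and the sample size and failure count are invented:

```python
# Illustrative only: project a sample failure rate to a population
# estimate with a 95-percent confidence interval. The sample numbers
# below are invented; GAO's actual sample design and counts are not
# given in this report.
import math

def proportion_ci(failures, n, z=1.96):
    """Point estimate and 95% CI for a proportion (normal approximation)."""
    p = failures / n
    half = z * math.sqrt(p * (1 - p) / n)  # half-width of the interval
    return p, max(0.0, p - half), p + half

p, lo, hi = proportion_ci(40, 250)  # hypothetical sample: 40 failures of 250
print(f"{p:.0%} (95% CI {lo:.0%} to {hi:.0%})")
```

Multiplying each bound by the total dollars paid yields the corresponding dollar-range estimate, which is how a percentage interval and a dollar interval can describe the same projection.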
The 95-percent confidence interval surrounding the estimate of $1 billion ranges from $600 million to $1.4 billion. The estimated amount included payments for expedited assistance, rental assistance, housing and personal property repair and replacement, and other necessary and emergency expenses. These payments were made to (1) applications containing Social Security Numbers (SSNs) that were never issued or belonged to other individuals, (2) applicants who used bogus damaged addresses, and (3) applicants who had never lived at the declared damaged addresses or did not live at the declared damaged address at the time of the disaster. These payments were also made to applications containing information that was duplicative of other applications already recorded in FEMA’s system. The duplicative payment failures refer to instances where FEMA made payments to more than one application with the same damaged property and current addresses, and the payment selected was associated with the second or later application. For example, one applicant submitted an application with the same current and damaged address that was used on another application, and each application received a $2,358 rental assistance payment in September 2005. Effective preventive controls for duplicate applications would have detected that the two applications shared the same damaged and current address and acted to prevent the duplicate payments. Our projection likely understated the total amount of improper and potentially fraudulent payments because our work was limited to issues related to misuse and abuse of identity, damaged property address information, and duplicate payments.
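The preventive duplicate check described above (flagging a second or later application that shares both the damaged address and the current address of an earlier one) could be sketched as follows. This is a minimal illustration, not FEMA's actual system; all field names and sample records are hypothetical. Note that applications sharing only a damaged address are not flagged, which leaves room for the separated-households exception:

```python
# Hypothetical sketch: flag later applications that share both a damaged
# address and a current address with an earlier application.
from collections import defaultdict

def flag_duplicates(applications):
    """Return IDs of second-and-later applications sharing the same
    (damaged_address, current_address) pair, in submission order."""
    seen = defaultdict(list)
    flagged = []
    for app in applications:  # assumed sorted by submission time
        key = (app["damaged_address"].strip().lower(),
               app["current_address"].strip().lower())
        if seen[key]:
            flagged.append(app["id"])  # later application -> potential duplicate
        seen[key].append(app["id"])
    return flagged

apps = [
    {"id": "A1", "damaged_address": "10 Main St, New Orleans LA",
     "current_address": "5 Oak Ave, Houston TX"},
    {"id": "A2", "damaged_address": "10 Main St, New Orleans LA",
     "current_address": "5 Oak Ave, Houston TX"},   # same pair -> flagged
    {"id": "A3", "damaged_address": "10 Main St, New Orleans LA",
     "current_address": "77 Pine Rd, Atlanta GA"},  # displaced household, not flagged
]
print(flag_duplicates(apps))  # -> ['A2']
```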
Our estimate did not account for improper and potentially fraudulent payments related to issues such as identity theft, or whether applicants received rental assistance they were not entitled to, received housing and other assistance while incurring no damage to their property, and/or received FEMA assistance for damages already settled through insurance claims. Our forensic audit and investigative work found that improper and potentially fraudulent payments occurred mainly because FEMA did not validate the identity of all applicants, the physical location of the declared damaged address, and the ownership and occupancy status of all applicants at the time of application. For example, in one case an applicant received $7,328 for expedited and rental assistance even though the applicant had moved out of the house a month prior to Hurricane Katrina. Examples of other improper and potentially fraudulent payments included a FEMA payment of $2,000 to an individual who provided a damaged address that did not exist, and a payment of $2,358 in rental assistance to another individual who claimed his damaged property was inside a cemetery. We also found that FEMA made approximately $5.3 million in payments to applicants who provided a post office box address as their damaged residence. For example, FEMA paid $2,748 to an applicant who had listed a post office box in Alabama as the damaged property. Follow-up work with local postal officials revealed that the post office box listed on the application had been used by individuals linked to other potential fraud schemes. Our undercover work provided further evidence of the weaknesses in FEMA’s management of the disaster assistance process. For example, FEMA provided nearly $6,000 in rental assistance to one of GAO’s undercover applicants, whose applications declared a bogus property as the damaged address.
These payments continued to be provided even though verification with third-party records indicated that the GAO undercover applicant did not live at the damaged address, and even after the Small Business Administration had reported that the damaged property could not be found. In another example, a FEMA inspector assigned to inspect a bogus property was not able to find the house despite numerous attempts to verify the address through the phone book, the post office, and a physical inspection. Nevertheless, in early 2006 FEMA provided GAO a check for $2,000 for presumed losses sustained by this property. Because FEMA did not verify the identity and primary residence of applicants prior to making IHP payments, it is not surprising that FEMA also made expedited and rental assistance payments totaling millions of dollars to over 1,000 applications containing information belonging to prison inmates. In other words, payments were made to applications using the names and SSNs of individuals who were not displaced as a result of the hurricanes, but rather were incarcerated at state prisons of the Gulf Coast states (that is, Louisiana, Texas, Florida, Georgia, Mississippi, and Alabama) or in federal prisons across the United States when the hurricanes hit the Gulf Coast. For example, FEMA paid over $20,000 to an inmate who had used a post office box as his damaged property. Our data mining work also found potentially wasteful and improper rental assistance payments to individuals who were staying at hotels paid for by FEMA. In essence, the government paid twice for these individuals’ lodging—first by providing a hotel at no cost and, second, by making payments to reimburse these individuals for out-of-pocket rent. For example, FEMA paid an individual $2,358 in rental assistance while at the same time paying about $8,000 for the same individual to stay 70 nights—at more than $100 per night—in a hotel in Hawaii.
In this particular case, the duplicate payments were not only wasteful but also improper because the applicant did not live at the damaged property at the time of the hurricane. Another applicant stayed more than 5 months—at a cost of $8,000—in hotels paid for by FEMA in California, while also receiving three rental assistance payments for two separate disasters totaling more than $6,700. These instances occurred because FEMA did not require hotels to collect FEMA application numbers and SSNs from residents staying in FEMA-paid rooms. Without this information, FEMA could not verify whether applicants were staying in government-provided hotels before sending them rental assistance. Without the ability to identify all IHP applicants who had already received hotel lodging, FEMA provided duplicate housing benefits to a number of applicants. Because the hotels and FEMA did not collect application identification numbers, we were unable to quantify the number of individuals who received these duplicate benefits. However, the tens of thousands of dollars that were wasted in the previous examples are illustrative of the wasteful spending we found through data mining. Finally, we found that FEMA did not institute adequate controls to ensure accountability over the debit cards. Specifically, FEMA initially paid $1.5 million for over 750 debit cards that the government could not determine actually went to help disaster victims. Following our numerous inquiries and the identification of several hundred undistributed cards, J.P. Morgan Chase refunded FEMA $770,000 attributable to the undistributed cards. Further, we continued to find that debit cards were used for items or services such as a Caribbean vacation, professional football tickets, and adult entertainment, which do not appear to be necessary to satisfy disaster-related needs as defined by FEMA regulations.
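The duplicate-lodging cross-check discussed above (matching rental assistance payment dates against FEMA-paid hotel stays) depends on linking each hotel stay to an application number, which FEMA did not collect. Assuming that link existed, the check might be sketched as follows; the record layouts and data are invented for illustration:

```python
# Hypothetical sketch of the cross-check GAO describes: find rental
# assistance payments that fall within a FEMA-paid hotel stay for the
# same application. Field names and records are illustrative.
from datetime import date

def overlapping_rental_payments(hotel_stays, rental_payments):
    """Return (application_id, payment_date) pairs where a rental
    assistance payment fell within a FEMA-paid hotel stay."""
    hits = []
    for pay in rental_payments:
        for stay in hotel_stays:
            if (pay["application_id"] == stay["application_id"]
                    and stay["check_in"] <= pay["date"] <= stay["check_out"]):
                hits.append((pay["application_id"], pay["date"]))
    return hits

stays = [{"application_id": "A9", "check_in": date(2005, 10, 1),
          "check_out": date(2005, 12, 10)}]
payments = [
    {"application_id": "A9", "date": date(2005, 11, 3), "amount": 2358},
    {"application_id": "A9", "date": date(2006, 2, 1), "amount": 2358},
]
print(overlapping_rental_payments(stays, payments))  # only the Nov. payment overlaps
```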
In commenting on our draft report, FEMA partially concurred with our recommendation to increase accountability over debit cards, acknowledging the challenges inherent in the use of debit cards and stating that the agency has no current plans to use debit cards. FEMA said the agency will continue to evaluate the report’s recommendations to determine whether any further use may be warranted. Fraud and error in this program are not new, and FEMA has struggled for some time with the issue of balancing expeditious assistance with minimizing fraud and improper payments. For example, FEMA’s Office of Inspector General, and later DHS’s Office of Inspector General, reported problems with FEMA’s previous disaster assistance program—the Individual and Family Grants program—in 2001 and 2004. These reports identified problems related to a lack of inspections to verify property damage and relaxed requirements to document whether an applicant was eligible for advance payment of a grant, increasing the likelihood of fraud in the program. More recently, in May 2005, DHS’s Office of Inspector General reported shortcomings in FEMA’s administration of the IHP and its oversight of inspections in response to Hurricane Frances. For example, FEMA designated a county eligible for Individual Assistance programs without a proper preliminary damage assessment, and FEMA’s contractors were not required to review inspections prior to submission. Katrina and Rita were two of the most intense hurricanes ever recorded during the Atlantic hurricane season. The widespread devastation they wrought presented unprecedented challenges to all levels of government and voluntary organizations to help the hundreds of thousands of victims evacuate, relocate, and get food, shelter, medical care, and other assistance.
As we and others have reported, the unprecedented geographic scope of the damage, the number of victims who had to be relocated, and the extent of the devastation clearly overwhelmed both government and nongovernment relief agencies, resulting in widespread dissatisfaction with the effectiveness of the preparation and response to the disaster. FEMA’s processes and systems for registering hurricane victims for assistance, determining eligibility for IHP assistance, and managing the IHP were simply overwhelmed, and FEMA was unable to effectively manage the enormous challenge that the disasters posed for the IHP. GAO’s audit and those of others found a number of problems with the program, including a lack of appropriately trained personnel that limited FEMA’s effective surge capacity, an inability to effectively identify ineligible and duplicate applications, and, consequently, the payment of millions of dollars of assistance to ineligible persons. GAO’s audit and investigative work found that FEMA did not have an effective fraud prevention program in place prior to the landfall of Hurricanes Katrina and Rita. The consequences were that tens of thousands of individuals received an estimated $600 million to $1.4 billion in potentially improper or fraudulent payments through February 2006. The actual amount may be higher because our work excluded such issues as identity theft, insurance fraud, and individuals with no uninsured losses who may have received benefits. In any major disaster, FEMA faces the demand to get assistance to eligible victims, many of whom may have lost everything, expeditiously while also ensuring that assistance does not go to those who are ineligible. FEMA recognizes that the problems it encountered in managing the IHP in the wake of Hurricanes Katrina and Rita need to be addressed and has announced several initiatives to address those problems.
The effect of those efforts cannot yet be determined, and not all of them were scheduled to be in effect for the 2006 hurricane season. We believe it is possible to have effective fraud prevention controls in place while also getting money to eligible victims quickly. Such controls are far more effective in ensuring that IHP funds are used properly than efforts to recoup funds paid to those who were ineligible for assistance. Recoupment actions are expensive and may recover only pennies on the dollar because the assistance has already been spent. We recommend that the Secretary of the Department of Homeland Security (DHS) direct the Director of FEMA to take the following actions to address the improper and potentially fraudulent payments within the IHP based on the findings in our testimony of June 14, 2006. Many of the recommendations below are preventive and thus, are intended for the 2007 hurricane season and other future disasters that include IHP assistance payments. However, whenever appropriate, we have identified recommendations we believe should also be implemented for the remaining aspects of assistance for Hurricanes Katrina and Rita. For all recommendations below, FEMA should fully field test all changes to provide assurance that all valid applicants are able to apply for and receive IHP payments. Also, for all recommendations, FEMA must ensure that there are adequate manual processes in place to allow applicants who are incorrectly denied assistance to appeal the decision and receive aid. In addition, we are reemphasizing the importance of implementing the six recommendations we made previously in our June report. 
The recommendations in this report are designed to prevent further payments from being made on improper and potentially fraudulent Katrina and Rita applications, to recoup, to the extent possible, Katrina and Rita payments already identified as fraudulent and improper, and to address weaknesses so that, in future disasters, FEMA can identify fraudulent and improper applications prior to making payments. To obtain reasonable assurance that applicants are prevented from receiving assistance based on invalid damaged addresses, we recommend that the Secretary of Homeland Security direct the Director of FEMA to take the following three actions: (1) implement changes to FEMA’s systems and processes to reject damaged addresses that are PO boxes, (2) provide applicants immediate feedback that PO boxes are not valid damaged addresses, and (3) implement a process to identify damaged addresses that are not primary residences, such as commercial mail drops. To provide reasonable assurance that payments are made only for a valid damaged address that was the applicant’s primary residence, we recommend that the Secretary of DHS direct the Director of FEMA to take the following two actions: (1) include, in the design of the address verification process recommended in our prior report, procedures to validate that the address an applicant claimed as damaged was the applicant’s primary residence at the time of the disaster, and (2) develop and implement processes and procedures to deal with applications where FEMA or other inspectors have concluded that the damaged address was bogus. Within this process, FEMA should develop timely information-sharing procedures between inspectors working for FEMA and other agencies to provide assurance that applicants who submitted damaged addresses that inspectors identified as bogus are not provided disaster assistance.
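The PO-box rejection recommended above is straightforward to automate at application intake. As a minimal sketch (the pattern and messages are assumptions, not FEMA's actual validation logic), a check like the following could reject the entry and give the applicant the immediate feedback the recommendation calls for:

```python
# Illustrative validation (assumed logic, not FEMA's system): reject a
# damaged-address entry that is a post office box at application intake.
import re

# Matches common PO box spellings: "PO Box", "P.O. Box", "POB"
PO_BOX = re.compile(r"\b[Pp]\.?\s*[Oo]\.?\s*[Bb]ox\b|\bPOB\b")

def validate_damaged_address(address):
    """Return (ok, message); a PO box is not a valid damaged residence."""
    if PO_BOX.search(address):
        return False, "A post office box is not a valid damaged address."
    return True, "ok"

print(validate_damaged_address("PO Box 123, Mobile AL")[0])  # False
print(validate_damaged_address("10 Main St, Biloxi MS")[0])  # True
```

A production check would also need the second half of the recommendation, identifying non-residential addresses such as commercial mail drops, which requires reference data (e.g., a postal address database) rather than pattern matching alone.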
To prevent or detect prisoners improperly receiving IHP payments in the future, we recommend that the Director of FEMA explore information-sharing agreements with federal and state officials in charge of maintaining custody over prisoners that could be used to identify ineligible applications. To reduce duplicate payments, we recommend that FEMA expand the data fields used in the duplicate detection process at the time of application to restrict applications to one per eligible household, unless warranted by other circumstances, such as households displaced to separate locations. To prevent concurrent payments for lodging (e.g., rental assistance and hotel stays) for which FEMA is financially responsible, we recommend that the Director of FEMA take the following two actions: (1) establish procedures requiring that individuals apply with FEMA prior to receiving no-cost disaster lodging accommodations from federal agencies or the Red Cross, and (2) develop procedures to provide reasonable assurance that individuals staying in FEMA or other no-cost lodging are not also provided IHP rental assistance payments for the time they are in the paid-for hotel rooms. To increase accountability over debit cards, we recommend that the Director of FEMA take the following three actions: (1) finalize a full reconciliation to link each issued Katrina debit card recorded by the bank (JP Morgan Chase) to a specific IHP application, (2) require that the bank refund the government for any unaccounted-for funds related to distribution of Katrina-related debit cards, and (3) augment procedures for future disasters to provide reasonable assurance that accountability over debit card distribution occurs at each custody transfer in the distribution process.
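The debit-card reconciliation recommended above amounts to matching each card the bank reports as issued against an IHP application and totaling the funds on cards that cannot be matched. A hypothetical sketch, with invented record layouts (this is not FEMA's or the bank's actual system):

```python
# Hypothetical reconciliation sketch: link each bank-reported card to an
# IHP application and total the funds on unmatched cards. Illustrative data.
def reconcile_cards(bank_cards, applications):
    """Return (matched_cards, unmatched_cards, unaccounted_dollars)."""
    app_ids = {a["id"] for a in applications}
    matched = [c for c in bank_cards if c["application_id"] in app_ids]
    unmatched = [c for c in bank_cards if c["application_id"] not in app_ids]
    return matched, unmatched, sum(c["amount"] for c in unmatched)

cards = [
    {"card": "C1", "application_id": "A1", "amount": 2000},
    {"card": "C2", "application_id": None, "amount": 2000},  # no application on file
]
apps = [{"id": "A1"}]
_, unmatched, dollars = reconcile_cards(cards, apps)
print(len(unmatched), dollars)  # 1 2000
```

The unmatched total corresponds to the amount a refund demand to the bank would be based on, as in the $770,000 refund described earlier.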
To identify and recoup payments based on improper and potentially fraudulent Katrina and Rita applications, we recommend that the Director of FEMA develop a comprehensive strategy—for current and future disasters—to identify the types of improper applications discussed in this report and refer them for either collections or additional investigations. On September 18, 2006, FEMA provided written comments on a draft of this report (see appendix II). FEMA fully concurred with 9 of 13 recommendations and partially concurred with the remaining 4 recommendations. However, FEMA disagreed with our estimate of fraudulent and improper payments. FEMA noted that our estimate of 16 percent was substantially higher than its historical estimate of 1 to 3 percent. However, FEMA’s reported fraud rate of 1 to 3 percent is not based on an independent, comprehensive statistical sample of the entire population of individual assistance payments; instead, it is simply the amount of overpayments that FEMA identifies through its own internal processes and procedures. FEMA fully agreed with 9 of the 13 recommendations and stated that it had taken or plans to take actions to specifically respond to them. While we did not evaluate the extent to which the implementation of these changes would address the weaknesses we identified with FEMA’s oversight of IHP payments, if they are properly implemented the changes should address our concerns. FEMA also partially concurred with four recommendations related to debit cards and hotel accommodations. Regarding our three recommendations on debit cards, FEMA stated that the agency has no current plans to use debit cards and will continue to evaluate the report’s recommendations to determine whether any further use may be warranted.
In response to our recommendation that FEMA establish procedures requiring that individuals apply with FEMA prior to receiving no-cost disaster lodging accommodations from federal agencies or the Red Cross, FEMA stated that the agency has implemented a protocol to ensure that disaster victims register and obtain an authorization code as a prerequisite for the use of hotels/motels as transition shelters. While FEMA cannot impose this protocol on the Red Cross, FEMA stated that it planned to affirm eligibility prior to reimbursing the Red Cross. Our objective in making this recommendation is to prevent duplicate housing benefits from being provided to registrants. Thus, if FEMA’s new process affirms the eligibility of registrants prior to reimbursing the Red Cross, FEMA’s processes would address the objective of this recommendation. While FEMA substantially agreed with our recommendations, it disagreed with the methodology we used to conduct our work, which formed the basis for many of the 13 recommendations. Specifically, in light of FEMA’s repeated representations that 1 to 3 percent of its IHP payments are fraudulent or improper, FEMA took exception to our estimate that 10 to 22 percent of the payments were based on registrations containing fraudulent or inaccurate information. However, it is important to note that FEMA’s estimate of 1 to 3 percent fraud is not based on an independent, comprehensive statistical sample of the entire population of individual assistance payments; instead, it is based on the historical amount of IHP payments that FEMA places in its internal recoupment process, which includes overpayments identified through case reviews, system checks, and hotline tips. FEMA officials have acknowledged that their estimate is not based on an in-depth statistical analysis for eligibility or any other type of fraud. Further, our estimate is likely understated because it focused only on payments made to invalid registrations.
Our estimate excluded substantial potential fraudulent and improper payments caused by such actions as identity theft, insurance fraud, duplicate government payments for lodging, or payments without evidence of property damage. In responding to our draft report, FEMA also commingled the results of our statistical sampling with other findings of fraudulent and improper payments that were not included in our estimate. For example, the reported fraudulent and improper payments related to individuals who stayed at FEMA-paid hotels and received rental assistance payments were not included in our statistical sample and the resulting estimate that 16 percent of payments were fraudulent or improper. FEMA also questioned whether some payments we categorized in our statistical sample results as potentially fraudulent and improper, such as those relating to separated households, were in fact valid payments. Specifically, FEMA stated that without a "knowledgeable" case-by-case analysis, our estimate was not accurate. We disagree. We were aware of FEMA's separated households policy and did not count any payments as duplicates if they related to families that were displaced to different locations. In addition, for our statistical sample we performed a detailed case-by-case analysis of sample items that included using all available audit and investigative tools, background information, and NEMIS data to ensure the conclusions reached were accurate. For example, we visited damaged addresses and spoke with IHP applicants, landlords, neighbors, and postal officials. FEMA also stated that it has been unable to validate our results because we had not provided evidence related to our estimate for its review.
We have not provided details of our sample failures to FEMA because the cases of fraudulent and improper payments are in the process of being referred to the Katrina Fraud Task Force for investigation and potential prosecution, as has been the standard process for other fraud cases identified through data mining. Based on agreements with the Katrina Fraud Task Force, which includes the Department of Homeland Security Inspector General, all fraud cases will continue to be referred directly to the Katrina Fraud Task Force to ensure investigations and prosecutions are not jeopardized. FEMA also raised concerns with the registrants we reported who had received duplicate lodging assistance, commenting that such a determination can only be made after a knowledgeable case-by-case analysis of the appropriateness of payments. To identify data-mined examples of duplicate lodging payments, we compared hotel receipt information with FEMA's own payment data to confirm that the subject received multiple rental assistance payments at the same time FEMA paid for the subject's hotel room. We are sending copies of this report to the Secretary of the Department of Homeland Security and the Director of the Federal Emergency Management Agency. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact either William Jenkins at (202) 512-8757 or jenkinswo@gao.gov or Greg Kutz at (202) 512-7455 or kutzg@gao.gov if you or your staffs have any questions concerning this report. Key contributors to this report are listed in appendix VI. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
To evaluate the Federal Emergency Management Agency's (FEMA) disaster assistance provided in response to Hurricanes Katrina and Rita through the Individuals and Households Program (IHP), we assessed (1) how the types and amounts of assistance provided to victims of Hurricanes Katrina and Rita compare to those provided for other recent hurricanes, (2) the challenges posed by the magnitude of the requests for assistance following Hurricanes Katrina and Rita and FEMA's response to these challenges, and (3) the vulnerability of the IHP to fraud, abuse, and management issues in the wake of Hurricanes Katrina and Rita and FEMA's reported actions to address any identified problems. To describe the type and amount of IHP assistance FEMA provided for Hurricanes Katrina and Rita in comparison to assistance provided in other hurricane disasters, we interviewed agency officials. We obtained and analyzed data provided by officials from FEMA's National Processing Service Center in Winchester, Virginia, and compared IHP disaster assistance provided under Hurricanes Katrina and Rita to assistance provided after other hurricane-related disaster declarations occurring in calendar years 2003 through 2005, to the extent information was available from FEMA's National Processing Service Center's National Emergency Management Information System. (FEMA provided data for IHP benefits paid as of August 2006 and for IHP applications received as of September 2006 for both the named hurricanes that came ashore in 2004 and Hurricanes Katrina and Rita. The 2003 named hurricane data were provided by FEMA as of April 2006. A FEMA official told us that any changes to the data for the 2003 named hurricanes between April and August 2006 would be too minor to be statistically significant.)
We selected these hurricanes for comparison because they constituted a cross section of disaster declarations that (1) occurred within the period in which IHP was implemented, and (2) represented hurricane disaster declarations that occurred in a single state and those that occurred in multiple states simultaneously. We assessed the accuracy and reliability of the system by interviewing agency officials knowledgeable about the data system and by obtaining from the agency written responses regarding (1) the agency’s methods of data collection and quality control reviews, (2) practices and controls over data entry accuracy, and (3) any limitations of the data. We determined that the data were sufficiently reliable for the purposes of our engagement. To determine the programmatic challenges FEMA faced during Hurricanes Katrina and Rita and agency efforts to address those challenges, we interviewed FEMA headquarters officials from the Recovery Division and staff from the agency’s Individual Assistance and Public Assistance Branches, FEMA staff from the National Processing Service Center and contract Inspection Services located in Virginia, and Joint Field Office officials in New Orleans, Louisiana. We observed contract inspectors assessing damaged residential properties in New Orleans. We also reviewed and analyzed federal legislation and regulations that are applicable to FEMA disaster assistance programs prior to and after the implementation of IHP and relevant FEMA policies, guidance, and processes. We reviewed and analyzed the agency’s IHP budget, staffing, and performance measures. We also reviewed prior audit reports and assessments related to FEMA’s implementation of the IHP. 
To assess the vulnerability of the IHP to fraud, abuse, and management issues in the wake of Hurricanes Katrina and Rita and FEMA's reported actions to address any identified problems, we estimated the number of improper and potentially fraudulent payments based on statistical sampling of payments, examining whether the associated applications contained invalid Social Security numbers (SSNs), bogus addresses, invalid primary residences, or information duplicating that of another application. Invalid SSNs refer to instances where the SSN did not match the name provided, the SSN belonged to a deceased individual, or the SSN had never been issued. Bogus addresses refer to instances where the audit and investigative work we performed indicated that the damaged address did not exist. Invalid primary residences relate to applications where the applicant had never lived at the damaged address or did not live at the damaged address at the time of the hurricanes. Duplicate information refers to instances where an application contained information duplicating that of another application that received a payment and was recorded earlier in FEMA's system. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95-percent confidence interval (e.g., plus or minus 5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. Also, the 16 percent of payments that were improper and potentially fraudulent excluded payments that were returned to the U.S. government by the time of our review.
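The interval estimate described above can be illustrated with a small calculation. The sketch below uses a normal-approximation 95-percent confidence interval for a sample proportion; the sample size (250) and failure count (40) are hypothetical illustrations, not the actual figures from GAO's sample, and the calculation omits any finite-population correction or stratification the actual estimate may have applied.

```python
import math

def proportion_ci(failures, sample_size, z=1.96):
    """Normal-approximation confidence interval for a population
    proportion. z=1.96 corresponds to 95 percent confidence."""
    p = failures / sample_size
    half_width = z * math.sqrt(p * (1 - p) / sample_size)
    return p - half_width, p + half_width

# Hypothetical sample: 40 improper payments found among 250 sampled.
low, high = proportion_ci(40, 250)
print(f"point estimate: {40 / 250:.1%}")       # 16.0%
print(f"95% CI: {low:.1%} to {high:.1%}")      # roughly 11.5% to 20.5%
```

A larger sample narrows the interval (the half-width shrinks with the square root of the sample size), which is why the precision of an estimate like the one above depends directly on how many payments are reviewed.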
We also reviewed IHP processes and procedures for determining applicant eligibility for specific types of IHP assistance and analyzed prior audit reports and assessments. We also obtained information from FEMA's Acting Deputy Director of Recovery on the status of FEMA's efforts to address the problems identified. Because we have not tested all aspects of potential fraud, waste, and abuse related to the IHP, the recommendations in this and our prior report do not represent a comprehensive fraud prevention program. We conducted our audit work between January 2006 and September 2006 in accordance with generally accepted government auditing standards. We conducted our investigative work between October 2005 and September 2006 in accordance with the standards prescribed by the President's Council on Integrity and Efficiency. Federal assistance takes many forms—including the direct provision of goods and services, financial assistance (through insurance, grants, loans, and direct payments), and technical assistance—and can come from various sources. The Individuals and Households Program (IHP) is one of these individual assistance programs funded through the Stafford Act's Disaster Relief Fund, as illustrated in the conceptual framework for federal disaster assistance in figure 7. Congress may provide funding for federal disaster assistance to specific agencies for areas in which they retain expertise. For example, the Department of Housing and Urban Development administers funds for economic redevelopment and infrastructure restoration, the Department of Transportation provides assistance for road restoration, and other agencies provide assistance for activities such as providing disaster assistance loans to small businesses and public health or medical services that may be needed in the affected area.
With respect to Stafford Act activities, FEMA administers the Disaster Relief Fund, which provides for three major categories of aid under the Stafford Act—assistance to state and local governments through public and hazard mitigation assistance programs and assistance to individuals and households. FEMA's Public Assistance program provides grants to eligible state and local governments and specific types of private nonprofit organizations that provide services of a governmental nature, such as fire departments, emergency and medical facilities, and educational institutions, to help cover the costs of emergency response efforts and work associated with recovering from the disaster. Public Assistance is typically the most costly disaster assistance provided. FEMA's Hazard Mitigation Grant Program provides grants to states, local governments, and Indian tribes for long-term hazard mitigation projects after a major disaster declaration. The purpose of the program is to reduce the loss of life and property in future disasters by funding mitigation measures during the recovery phase of a natural disaster. FEMA's Individual Assistance Program includes, among other things, a crisis counseling program, disaster legal services, and direct and financial assistance through the IHP. The purpose of the crisis counseling program is to help relieve any grieving, stress, or mental health problems caused or aggravated by the disaster or its aftermath. FEMA also provides free legal counseling through an agreement with the Young Lawyers Division of the American Bar Association for low-income individuals regarding cases that will not produce a fee.
FEMA provides direct assistance (temporary housing units) and financial assistance (grant funding for temporary housing and other disaster-related needs) to individuals and households through the IHP to meet necessary expenses and serious needs of eligible disaster victims who, as a direct result of a major disaster, have uninsured or underinsured necessary expenses and serious needs and are unable to meet such needs through other means. Under the IHP, there are two programs, referred to as the Housing Assistance program and the Other Needs Assistance (ONA) program. The Housing Assistance program provides financial assistance for such things as rental housing, home repair assistance (up to $5,000), and home replacement assistance (up to $10,000). In addition, for disaster victims for whom rental accommodations are not available under the Housing Assistance program, FEMA may provide "direct assistance" in the form of temporary housing units (e.g., mobile homes and travel trailers) that FEMA has acquired by purchase or lease. The ONA program also includes financial assistance for medical, dental, funeral, personal property, transportation, and other disaster-related expenses that are not compensated by other means. The IHP is not intended to fully compensate disaster victims for all losses from damage to real and personal property that resulted from the disaster or to provide sufficient funds to restore damaged property to its condition before the disaster. Rather, IHP is intended to provide assistance in covering expenses not covered by other means, such as insurance claims and payments or the victim's own savings and resources. The maximum amount that an individual or household may receive is statutorily capped at $25,000, adjusted annually to reflect changes in the Consumer Price Index. In addition to the financial cap, IHP assistance is also limited to 18 months beginning on the date the President declares a major disaster.
However, the President may extend this 18-month period if the President determines that, due to extraordinary circumstances, an extension would be in the public interest. Eligibility for IHP assistance is determined when an individual or household applies with FEMA and is based on the amount of property damage. To qualify for Housing Assistance, a disaster victim must:
- have experienced losses in an area that has been declared a disaster by the President;
- have uninsured (or underinsured) needs that cannot be met through other means;
- be a citizen of the United States, a non-citizen national, or a qualified alien, or have a qualifying individual who lives with the disaster victim;
- have been living or usually live in the home in the disaster area at the time of the disaster; and
- be unable to live in the home, be unable to get to the home due to the disaster, or have a home that requires repairs because of damage from the disaster.
If a disaster victim is eligible for housing assistance from FEMA based upon the above criteria, grant funds can be used for housing assistance purposes. Individuals or households who receive the assistance may be asked to show receipts to prove that it was used for eligible housing expenses. If an individual is unable to find a rental house or apartment within a reasonable commuting distance of their damaged home, FEMA may provide direct assistance in the form of a travel trailer or mobile home. Direct or financial housing assistance from FEMA does not require that an applicant file for a Small Business Administration (SBA) disaster loan and is 100 percent federally funded and administered by the federal government. While the financial housing assistance is subject to the $25,000 cap, the cost of direct housing assistance is not subject to the cap. In contrast, ONA grants are provided in a cost-shared partnership between FEMA and the state.
As part of this partnership, FEMA and the state engage in annual coordination efforts to determine how the ONA will be administered in any presidentially declared disaster in the coming year. For example, the state establishes award levels related to vehicle repairs, vehicle replacement, and funeral grants. States may choose the level of involvement of state officials in administering the program and assume complete, partial, or no responsibility for administering the program. Whichever option a state chooses, FEMA provides 75 percent of the grant funds, and the state is obligated to provide the balance of ONA grant funds. To receive ONA grant funds, an applicant must generally meet the eligibility requirements for housing assistance, must have necessary expenses or serious needs because of the disaster, and must first apply to the SBA Disaster Loan Program and either be declined for assistance or demonstrate that SBA disaster assistance is insufficient to meet all disaster-related necessary expenses and serious needs. Applicants who fall below a certain income threshold may be excused from the requirement to complete the SBA disaster loan application. For example, in 2005, a household of four with an income less than $24,188 would not be required to complete the SBA loan application. The types of assistance whose availability depends on the applicant's income level are assistance for personal property, transportation, and moving/storage expenses. Eligibility for medical, dental, funeral, and other/miscellaneous expenses is not dependent on an applicant's income; for these categories, applicants are referred directly to ONA for assistance. Specifically, FEMA may provide ONA grant funding to replace personal property, repair and replace vehicles, and reimburse moving and storage expenses if an applicant is ineligible for an SBA disaster loan.
To receive ONA grants for public transportation, medical and dental, and funeral and burial expenses, disaster victims are not required to apply for an SBA loan, and income levels are not considered in determining eligibility. FEMA manages the IHP primarily through a decentralized structure of permanent and temporary field offices staffed primarily by contract and temporary employees. The offices include the FEMA Recovery Division in FEMA Headquarters, regional offices, National Processing Service Centers, Joint Field Offices, Area Field Offices, and Disaster Recovery Centers. The Stafford Act authorizes FEMA to draw upon temporary personnel for disaster operations. FEMA's Recovery Division in Washington, D.C., manages the IHP and as of August 2006 had about 15 people to develop and issue policies and procedures for implementing the individual assistance programs. Eight members of that staff are specifically responsible for managing the IHP. In FEMA's 10 regional offices, one or two full-time employees manage individual assistance programs. The regional office staff may participate in the preliminary disaster assessment after a disaster to determine what individual assistance is needed. FEMA's National Processing Service Centers (NPSC) provide centralized disaster application services to FEMA customers and help coordinate with other assistance programs. The centers are to provide an automated "teleregistration" service—a toll-free phone bank through which disaster victims apply for IHP assistance and through which their applications are processed and their questions answered. The NPSCs are also to assist with referrals to other assistance programs, process appeals, recertify existing rental assistance, assist with recovering funds, and respond to congressional inquiries.
As of August 2006, a total of 13 permanent FEMA employees were working at the NPSCs in the United States, supported by several hundred temporary employees (whose numbers can be increased by 2,000 to 3,000 additional temporary employees for application processing after a disaster), as well as contract employees. FEMA operates four NPSCs, in Denton, Texas; Puerto Rico; Winchester, Virginia; and Hyattsville, Maryland. The Texas NPSC is in charge of caller services, including call centers, and of the agency's quality control program. (Although all NPSCs have call centers within their offices, the Texas NPSC is in charge of the general policies and procedures for those call centers and also sets up arrangements with the IRS and private companies when FEMA needs to handle added call volume.) The Puerto Rico NPSC is also a call center, with a specialty in handling calls from Spanish-speaking applicants. This center receives oversight from the Texas NPSC. The Virginia NPSC is the central point of contact for the National Emergency Management Information System (the main database/automated processing system for IHP application and benefits determination and processing), the NPSC Coordination Team, and the Inspection Management contracts. The Maryland NPSC is responsible for oversight of all mail operations and receives management oversight from the Virginia NPSC. At FEMA's Inspections Services Section, located in the Virginia NPSC, as of August 2006, one permanent and approximately 35 to 40 temporary FEMA employees oversee the work of two firms with standing contracts to perform inspection services. Each firm has about 2,000 inspectors who visit applicants' homes to verify disaster-related damages to real and personal property. Temporary FEMA field locations are established after a disaster occurs.
FEMA deploys about 600 to 700 "reservists," or disaster assistance employees, to field offices at the state and local levels to augment full-time FEMA staff temporarily reassigned from FEMA headquarters and regional offices. The Joint Field Office is to serve as the temporary headquarters for disaster response and recovery efforts and is typically located in the capital of the state where a disaster occurred or in the high-impact area. The joint office houses FEMA, state partners, other federal agencies, and voluntary agencies. Two key FEMA joint field office officials direct and coordinate disaster response and recovery operations for program implementation at the local level. The Federal Coordinating Officer is responsible for assessing disaster needs; establishing the joint office, Disaster Recovery Centers, and other possible disaster facilities; and coordinating the administration of disaster relief. The FEMA operations section chief's responsibilities include managing the Human Services Branch, which oversees the provision of mass care and food, individual assistance, the coordination of voluntary agency contributions, and donations. The role of regional coordinating structures varies depending on the situation. Many incidents may be coordinated by regional structures primarily using regional assets. Larger, more complex incidents may require direct coordination between the joint office and the national level, with regional structures continuing to play a supporting role. The focal point for coordination of federal support is the joint field office. FEMA may also establish Area Field Offices, whose staff and organization are to mirror the joint field office and provide similar coordination and oversight in support of the joint office at the local level. The area office reports to the joint office.
The area office’s operational responsibilities are to be delineated by the joint office which may establish as many area filed offices as deemed necessary and efficient to the response. FEMA Disaster Recovery Centers are offices where applicants may go for information about FEMA and other disaster assistance programs. Recovery center locations are usually announced in local newspapers and on local television and radio stations and are established close to the disaster area, often in schools or armories to be readily accessible to those in need of assistance. The centers are temporary facilities jointly operated by the state and FEMA where representatives of federal agencies, local and state governments, and voluntary relief organizations provide guidance regarding disaster recovery and literature on services available, including housing assistance and individual and household grants information, educational materials, crisis counseling, assistance in completing applications and answers to questions, resolution to problems, and referrals to agencies that may provide further assistance. The number of centers depends on the magnitude of the disaster and the size of the area included in the declaration. Under the Stafford Act, the federal government provides disaster assistance after a presidential disaster declaration. A presidential disaster declaration results from a legal process involving specific steps taken by local, state, and federal governments as generally shown in figure 8. After a disaster occurs and the state determines that effective response may exceed both state and local resources, a state is to first request a preliminary damage assessment. 
Teams of individuals from FEMA, the Small Business Administration, state emergency management, and the local jurisdiction are assembled to (1) assess the types of dwellings affected, (2) assess the probable insurance and income levels of residents, and (3) estimate the number of individuals affected to determine potential funding requirements. After the assessment is complete, the Governor is to determine whether federal disaster assistance is needed and, if it is, he or she is to submit a request to the President through the FEMA Regional Director, who reviews and communicates the request to FEMA Headquarters within the Emergency Preparedness and Response Directorate. The Directorate's Undersecretary is then to make a recommendation to the President, who makes the final decision to declare a major disaster or an emergency or to deny the request for federal assistance. Once the President declares a disaster and decides to provide federal disaster assistance, disaster victims in declared counties must first apply for assistance with FEMA, by phone, in person at a disaster recovery center, or over the Internet. Typically, the application period closes 60 days following the date of the disaster declaration. During the application process, an individual provides personal information including Social Security number, current and pre-disaster address, a telephone number, insurance information, total household annual income, and a description of losses caused by the disaster. After the submission of an application, FEMA provides applicants with a copy of their application and a program guide, "Help After a Disaster: Applicant's Guide to the Individuals and Households Program." The document, whose cover is shown in figure 9, is also available on the Internet.
Once a FEMA representative records personal information from a disaster application and provides the applicant with a FEMA application number, FEMA's National Emergency Management Information System automatically determines potential eligibility for designated categories of assistance. To confirm that damages occurred to the home and personal property as reported in disaster assistance applications, FEMA is to conduct individual inspections to verify damage, ownership, and occupancy. Contract inspectors are to schedule damage inspection appointments with applicants. The inspections usually take about 30 to 60 minutes, according to FEMA. Homeowners are not required to be at home at the time of the inspection, but a designated representative generally must be present, and the applicant must be able to provide proof of ownership and occupancy to the inspector. This assessment provides a basis to determine how much assistance an individual or household should receive for housing repair and replacement and for other needs. If an applicant's home or its contents were damaged and the applicant has insurance, the applicant must provide FEMA with a letter from the insurance company regarding the settlement of the claim before FEMA issues its inspection report. (If the damages are caused by flooding and the applicant has flood insurance, FEMA will issue an inspection report before receiving a copy of the applicant's flood insurance decision letter because temporary living expenses are not covered by flood insurance.) According to FEMA, the system determines eligibility for about 90 percent of applicants requesting housing assistance, usually within 10 days of application. FEMA caseworkers are to process the remaining applications that cannot be automatically processed, determining an applicant's eligibility for disaster assistance based on additional documentation (for example, documentation of insurance payment); these applications may take longer to process.
If the applicant qualifies for a grant, FEMA sends the applicant a check by mail or deposits the granted funds in the applicant's bank account. FEMA will also send a letter describing how the applicant is to use the money (for example, to repair their home or to rent another house while repairs are made). Recipients of IHP assistance must recertify their continuing need for assistance every 30 to 90 days, depending on the individual circumstances. FEMA uses three criteria to recertify the applicant. First, FEMA may provide continued housing assistance (travel trailers or rental assistance) during the period of assistance, based on need, and generally only when adequate, alternate housing is not available. Second, for rental assistance, the applicant must show that he or she used the previous rental assistance to pay rent by sending copies of receipts. Third, the applicant must show he or she is working to find permanent housing that the applicant can afford. For example, FEMA is to require applicants to show they are actively seeking affordable housing, maintain a list of addresses they looked at, including the landlord's name and phone number, and specify the reason(s) for not renting the units. A FEMA Housing Adviser may verify with landlords that a contact was made by an applicant seeking a rental unit. Conversely, if FEMA determines that the applicant does not qualify for an IHP grant, it is to send the applicant a letter explaining why the applicant was turned down and giving the applicant a chance to appeal the decision. Applicants who are denied housing and other needs assistance under IHP have 60 days from the date that FEMA notifies the applicant to appeal the decision. According to FEMA, common reasons for denial include:
- Adequate insurance coverage.
- Damage to a secondary home, not a primary residence.
- Duplicate applications made from the same address.
- Inability to prove occupancy or ownership.
- More information is needed before the analysis can be completed.

GAO, Expedited Assistance for Victims of Hurricanes Katrina and Rita: FEMA's Control Weaknesses Exposed the Government to Significant Fraud and Abuse, GAO-06-403T (Washington, D.C.: Feb. 13, 2006).

William Jenkins, Director, Homeland Security & Justice Issues, (202) 512-8757 (jenkinswo@gao.gov), and Greg Kutz, Managing Director, GAO Forensic Audits and Special Investigations, (202) 512-7455 (kutzg@gao.gov).

In addition to the contacts named above, the following individuals from GAO's Forensic Audits and Special Investigations and GAO's Homeland Security and Justice Team also made contributions to this report: Kord Basnight, James Berry Jr., Gary Bianchi, Valerie Blyther, Matthew Brown, Norman Burrell, Willie Commons, Jennifer Costello, Christine Davis, Katherine Davis, Paul Desaulniers, Steve Donahue, Dennis Fauber, Christopher Forys, Adam Hatton, Aaron Holling, William O. Jenkins Jr., Chris Keisling, Jason Kelly, John Kelly, Sun Kim, Stan Kostyla, Crystal Lazcano, Tram Le, John Ledford, Jennifer Leone, Barbara Lewis, Gary M. Malavenda, Marvin McGill, Jonathan Meyer, Gertrude Moreland, Richard Newbold, Kristen Plungas, Jennifer Popovic, John Ryan, Sidney Schwartz, Robert Sharpe, Gail Spear, Tuyet-Quan Thai, Patrick Tobo, Matthew Valenta, Tamika Weerasingha, and Scott Wrightson.

Dental assistance: Financial assistance to address dental costs.

Funeral assistance: Financial assistance to address the cost of funeral services, burial, cremation, and other funeral expenses related to a death caused by the disaster.

Expedited assistance: Fast-track money in the form of $2,000 in expedited payments to eligible disaster victims to help with immediate, emergency needs of food, shelter, clothing, and personal necessities. FEMA changed the maximum amount from $2,000 to $500 on July 24, 2006.
Financial assistance provided to replace the primary residence of an owner-occupied dwelling if the dwelling was damaged by the disaster and there was at least $10,000 of damage (as adjusted annually to reflect changes in the CPI). The applicant may either replace the dwelling in its entirety for $10,000 (as adjusted annually to reflect changes in the CPI) or less, or may use the assistance toward the cost of acquiring a new permanent residence that costs more than $10,000 (as adjusted annually to reflect changes in the CPI). Financial assistance provided for repairs of uninsured disaster-related damage to an owner’s primary residence. The funds are to help return owner-occupied primary residences to a safe and sanitary living or functioning condition. Repairs may include utilities and residential infrastructure damaged by a major disaster. The ONA Program is designed for those with serious needs who have no other source of assistance. The program covers necessary expenses such as uninsured personal property, medical and dental expenses, and funeral expenses. Expenses for reasonable short-term accommodations that individuals or households incur in the immediate aftermath of a disaster. Lodging expenses may include but are not limited to the cost of brief hotel stays. Financial assistance to address the cost of medical treatment or the repair or replacement of medical equipment required as a result of the disaster. Financial assistance to address necessary expenses and serious needs related to moving and storing personal property to avoid additional disaster damage. The cost associated with acquiring an item or items, obtaining a service, or paying for any other activity that meets a serious need. Financial assistance to address the cost of other specific disaster-related necessary expenses and serious needs of individuals and households. 
Financial assistance to address the cost of repairing and/or replacing disaster-damaged items, such as furniture, bedding, appliances, and clothing. A mechanism used to determine the impact and magnitude of damage and the resulting unmet needs of individuals, businesses, the public sector, and the community as a whole. As part of IHP housing assistance, rental assistance funds address the cost of renting another place to live. For homeowners, this money may be provided in addition to home repair, if needed. The requirement for an item or service that is essential to an applicant’s ability to prevent, mitigate, or overcome a disaster-related hardship, injury, or adverse condition. Transitional Housing Assistance is a cash grant of up to $2,358 per household intended to cover an initial 3 months of rental payments for eligible applicants. Transitional Housing Assistance is a form of rental assistance and was implemented for the first time in selected disaster areas in Louisiana and Mississippi during Hurricane Katrina. Financial assistance for public transportation and any other transportation-related costs or expenses, and for the cost of repairing and/or replacing a disaster-damaged vehicle that is no longer usable because of disaster-related damage. Small Business Administration: Actions Needed to Provide More Timely Disaster Assistance. GAO-06-860. Washington, D.C.: July 28, 2006. Individual Disaster Assistance Programs: Framework for Fraud Prevention, Detection, and Prosecution. GAO-06-954T. Washington, D.C.: July 12, 2006. Expedited Assistance for Victims of Hurricanes Katrina and Rita: FEMA’s Control Weaknesses Exposed the Government to Significant Fraud and Abuse. GAO-06-655. Washington, D.C.: June 16, 2006. Hurricanes Katrina and Rita Disaster Relief: Improper and Potentially Fraudulent Individual Assistance Payments Estimated to Be Between $600 Million and $1.4 Billion. GAO-06-844T. Washington, D.C.: June 14, 2006. 
Hurricanes Katrina and Rita: Coordination between FEMA and the Red Cross Should Be Improved for the 2006 Hurricane Season. GAO-06-712. Washington, D.C.: June 8, 2006. Hurricane Katrina: Improving Federal Contracting Practices in Disaster Recovery Operations. GAO-06-714T. Washington, D.C.: May 4, 2006. Hurricane Katrina: Planning for and Management of Federal Disaster Recovery Contracts. GAO-06-622T. Washington, D.C.: April 10, 2006. Hurricane Katrina: Comprehensive Policies and Procedures Are Needed to Ensure Appropriate Use of and Accountability for International Assistance. GAO-06-460. Washington, D.C.: April 6, 2006. Hurricane Katrina: Policies and Procedures Are Needed to Ensure Appropriate Use of and Accountability for International Assistance. GAO-06-600T. Washington, D.C.: April 6, 2006. Agency Management of Contractors Responding to Hurricanes Katrina and Rita. GAO-06-461R. Washington, D.C.: March 15, 2006. Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006. Expedited Assistance for Victims of Hurricanes Katrina and Rita: FEMA’s Control Weaknesses Exposed the Government to Significant Fraud and Abuse. GAO-06-403T. Washington, D.C.: February 13, 2006. Statement by Comptroller General David M. Walker on GAO’s Preliminary Observations Regarding Preparedness and Response to Hurricanes Katrina and Rita. GAO-06-365R. Washington, D.C.: February 1, 2006. Hurricanes Katrina and Rita: Provision of Charitable Assistance. GAO-06-297T. Washington, D.C.: December 13, 2005. Hurricanes Katrina and Rita: Preliminary Observations on Contracting for Response and Recovery Efforts. GAO-06-246T. Washington, D.C.: November 8, 2005.
In 2005, Hurricanes Katrina and Rita caused unprecedented damage. The Federal Emergency Management Agency's (FEMA's) Individuals and Households Program (IHP) provides direct assistance (temporary housing units) and financial assistance (grant funding for temporary housing and other disaster-related needs) to eligible individuals affected by disasters. Our objectives were to (1) compare the types and amounts of IHP assistance provided to Hurricanes Katrina and Rita victims to other recent hurricanes, (2) describe the challenges FEMA faced given the magnitude of the requests for assistance following Hurricanes Katrina and Rita, and (3) determine the vulnerability of the IHP program to fraud and abuse. GAO determined the extent to which the program was vulnerable to fraud and abuse by conducting statistical sampling, data mining, and undercover operations. For Hurricanes Katrina and Rita, FEMA received more than 2.4 million applications for IHP assistance and distributed $7.0 billion, compared with about 1.5 million applications and about $1.5 billion in assistance for the six hurricanes that hit the United States in the prior two years. Temporary housing assistance and expedited assistance accounted for much of the increase in IHP expenditures as compared to prior years. Overall, however, although the number of applications was much higher, the percentage approved for non-housing assistance was notably lower for Hurricanes Katrina and Rita than in 2003 and 2004. The magnitude of Hurricanes Katrina and Rita posed challenges in providing assistance to an unprecedented number of victims, many of whom were widely dispersed across the country. To address these challenges, FEMA developed new approaches and adapted existing approaches to quickly provide assistance and improve communication with victims. 
Despite these efforts, management challenges in staffing and training and program restrictions limited the effectiveness and efficiency of the disaster assistance process. FEMA has proposed a number of initiatives to address these problems, but it is too early to determine whether these efforts will effectively address the problems identified. GAO identified the potential for significant fraud and abuse as a result of FEMA's management of the IHP in response to Hurricanes Katrina and Rita. Flaws in the registration process resulted in what GAO estimated to be between $600 million and $1.4 billion in improper and potentially fraudulent payments due to invalid registration data. In addition, duplicate payments were made and FEMA lacked accountability over $2,000 debit cards that were given to disaster victims.
As we have previously testified, legislative proposals involving substantial long-term costs and commitments should be considered in the context of the serious fiscal challenges facing this country. The federal government’s liabilities and commitments have grown from $20.4 trillion to $43.3 trillion from fiscal year 2000 through fiscal year 2004. This amount continues to increase due to continuing deficits, known demographic trends, and compounding interest costs. Furthermore, our long-range budget simulations show that this nation faces a large and growing structural deficit. Given the size of our projected deficit, we will not be able to eliminate the deficit through economic growth alone. The long-term fiscal pressures created by the impending retirement of the baby boom generation, rising health care costs, and increased homeland security and defense commitments intensify the need to weigh existing federal budgetary resources against emerging new priorities. In our 21st Century Challenges report, we noted that it is time for a baseline review of all major federal programs and policies, including the military’s reserve components. We have previously reported on a number of military force management issues in the active and reserve components, including roles and missions of the Army and Air National Guard and the Army Reserve and the process for assessing the numbers of active duty military forces. We have also reported on a number of military personnel issues, including military compensation, health care, and recruiting and retention. In each of these areas, questions have arisen as to whether DOD has the right strategies to cost effectively sustain the total force in the future. In the case of the National Guard, how this is accomplished is of particular importance in light of its dual missions of supporting overseas operations as well as its considerable responsibilities in its state and homeland security roles. 
The National Guard of the United States consists of two branches: the Army National Guard and the Air National Guard. The National Guard Bureau is the federal entity responsible for the administration of both the Army National Guard and the Air National Guard. The Army National Guard, which is authorized 350,000 soldiers, makes up more than one-half of the Army’s ground combat forces and one-third of its support forces (e.g., military police and transportation units). Army National Guard units are located at more than 3,000 armories and bases in all 50 states and 4 U.S. territories. Traditionally, the majority of Guard members are employed on a part-time basis, typically training 1 weekend per month and 2 weeks per year. The Guard also employs some full-time personnel who assist unit commanders in administrative, training, and maintenance tasks. In the past 2 years, the Army National Guard has faced increasing challenges in recruiting new soldiers to fill authorized positions. Army National Guard personnel may be ordered to duty under three general statutory frameworks – under Title 10 or Title 32 of the United States Code, or pursuant to state law in a state active duty status. In a Title 10 status, Army National Guard personnel are federally funded and under federal command and control. Personnel may enter Title 10 status by being ordered to active duty, either voluntarily or involuntarily (i.e., mobilization) under appropriate circumstances. When Army National Guard forces are activated under Title 10, the National Guard is subject to the Posse Comitatus Act, which prohibits it from engaging in law enforcement activities unless expressly authorized by the Constitution or law. Personnel in Title 32 status are federally funded but under state control. Title 32 is the status in which National Guard personnel typically perform training for their federal mission. 
In addition, the federal government reimburses states for Guard units’ activities in response to federally- designated disasters, such as hurricane response. Personnel performing state missions are state funded and under state command and control. Under state law, a governor may order National Guard personnel to respond to emergencies, civil disturbances, or perform other duties authorized by state law. While the Army National Guard performs both federal and state missions, the Guard is organized, trained, and equipped for its federal missions, and these take priority over state missions. The Guard can also be tasked with homeland security missions under the state governors or, when activated, by DOD under command of the President. DOD refers to its contributions to the overall homeland security effort as “homeland defense.” Homeland defense activities include military missions within the United States, such as flying armed patrols over U.S. cities and guarding military installations. DOD also supports civilian authorities to provide quick response or capabilities that other agencies do not have. The U.S. Northern Command provides command and control for DOD’s homeland defense missions, including land, air, aerospace, and maritime defense operations, and coordinates DOD’s support to civil authorities for homeland security missions. As we previously reported, the high number of Army National Guard forces used to support overseas and homeland missions since September 11, 2001, has resulted in decreased preparedness of nondeployed Guard forces which suggests the need to reassess DOD’s business model for the Army National Guard. We have previously reported that high-performing organizations must reexamine their business models to ensure that their structures and investment strategies enable them to meet external changes in their operational environments efficiently and effectively. 
To meet the demand for forces since September 11, especially for forces with special skills that reside heavily in the Army National Guard, such as military police, over 50 percent of Army National Guard members have been called upon to deploy. At the same time, the Army National Guard’s involvement in operations at home has taken on higher priority since 2001. The change in the roles and missions of the Army National Guard has not been matched with a change in its equipping strategy that reflects its new high pace of operations, and as a result the Army National Guard’s ability to continue to support ongoing operations is declining. In keeping with post-Cold War planning assumptions, most Army National Guard units were not expected to deploy in the early days of a conflict, but to augment active duty units in the event of an extended conflict. Therefore, the Army accepted some operational risk by providing the Army National Guard fewer soldiers than it would need to fully equip its units and less equipment than it would need to deploy, on the assumption that there would be time to provide additional personnel, equipment, and training during the mobilization process before units would deploy. For example, as of 2004, the Army National Guard’s force structure called for about 375,000 soldiers, but it was authorized about 350,000 soldiers. In addition, Army National Guard combat units are only provided from 65 to 74 percent of the personnel and from 65 to 79 percent of the equipment they would need to deploy, depending on the priority assigned to their warfighting missions. However, after September 11, 2001, the President authorized reservists to be activated for up to 2 years, and approximately 280,000 Army National Guard personnel have been activated to support recent operations. As of July 2005, about 35,500 Army National Guard members were deployed to Iraq—nearly one-third of the 113,000 U.S. forces in theater. 
Army National Guard personnel deployed to Afghanistan and Iraq are expected to serve 1 year in these countries and to spend up to several additional months mobilizing and demobilizing. As figure 1 shows, the number of activated Army National Guard personnel for federal missions has declined since its peak in December 2004 and January 2005. However, the Army National Guard continues to provide a substantial number of personnel to support current operations. The Army National Guard has begun adapting its forces to meet the warfighting requirements of current operations, but some measures taken to meet immediate needs have made sustaining future operations more challenging. Because its units did not have all the resources they needed to deploy at the outset of current operations, the Army National Guard has had to transfer personnel and equipment from nondeploying units to prepare deploying units. We reported in November 2004 that as of May 2004, the Army National Guard had performed over 74,000 personnel transfers and shifted over 35,000 pieces of equipment to deploying units. These initial transfers worsened personnel and equipment shortages in units that were then alerted for deployment and had to be staffed and equipped through more transfers. The cumulative effect of these personnel and equipment transfers has been a decline in the readiness of Army National Guard forces for future missions, both overseas and at home. Even as significant numbers of personnel and equipment are supporting overseas operations, since September 11, 2001, the Army National Guard’s role in homeland security and civil support has taken on greater priority, as demonstrated by the Guard’s recent involvement in responding to Hurricane Katrina. Since September 11, 2001, the Guard has performed other operational duties such as providing airport security and supporting events such as the 2004 Democratic and Republican national conventions. 
In the pre-September 11 security environment, it was assumed that the National Guard could perform its domestic roles with the personnel and equipment it was supplied for its warfighting missions. While the Army National Guard is implementing pilot programs to strengthen capabilities to respond to homeland security needs, such as improving critical infrastructure protection, there has been no comprehensive analysis of the full spectrum of the Guard’s roles and requirements for homeland security, as we recommended. Until such an analysis is completed, congressional policymakers may not be in the best position to assess whether the Army National Guard’s current structure and equipment can enable it to sustain increased homeland security responsibilities in addition to its overseas missions. Increasing equipment shortages among nondeployed Army National Guard units illustrate the need for DOD to reexamine its equipping strategy and business model for the Army National Guard. The amount of essential warfighting equipment nondeployed National Guard units have on hand has continued to decrease since we last reported on the Army National Guard in 2004. Compounding the equipment shortages that have developed because most Army National Guard units are still structured with lesser amounts of equipment than they need to deploy, Army National Guard units have left more than 64,000 equipment items valued at over $1.2 billion in Iraq for use by follow-on forces; however, the Army has not developed replacement plans for this equipment as required by DOD policy. In addition, DOD has not determined the Army National Guard’s equipment requirements for homeland security missions, and some states are concerned about the Guard’s preparedness for future missions. 
While most Army National Guard combat units are typically provided from 65 to 79 percent of the equipment they would need for their wartime missions, for recent operations, combatant commanders have required units to deploy with 90 to 100 percent of the equipment they are expected to need and with equipment that is compatible with active Army units. While the Army can supply deploying Army National Guard forces with additional equipment after they are mobilized, nondeployed Guard units will be challenged to maintain readiness for future missions because they transferred equipment to deploying units and have less equipment to train with or to use for other contingencies. The Army National Guard began transferring people and equipment to ready units deploying to Iraq and Afghanistan in the early days of the Global War on Terrorism, and the number of transfers has grown as overseas operations have continued. In June 2004 the Army National Guard had transferred more than 35,000 pieces of equipment to ready units for overseas operations. By July 2005, the number of equipment items transferred among Army National Guard units had grown to more than 101,000 items. As a result of these transfers, the proportion of nondeployed units that reported having the minimum amount of equipment they would need to deploy dropped from 87 percent in October 2002 to 59 percent in May 2005. However, Army National Guard officials estimated that when substitute items (which may be incompatible with active forces), equipment undergoing maintenance, and equipment left overseas for follow-on forces are subtracted, nondeployed units had only about 34 percent of their essential warfighting equipment as of July 2005. Further, as of July 2005, the Army National Guard reported that it had less than 5 percent of the required amount, or a quantity of fewer than 5 each, of more than 220 critical items. 
Among these 220 high-demand items were generators, trucks, and radios, which could also be useful for domestic missions. To address equipment requirements for current overseas operations, the Army now requires units, in both the active and reserve components, to leave certain essential items that are in short supply in Iraq for follow-on units to use, but it has not developed plans to replace Army National Guard equipment as DOD policy requires. The Army’s requirement for leaving equipment overseas is intended to reduce the amount of equipment that has to be transported from the United States to theater, to better enable units to meet their deployment dates, and to maintain stocks of essential equipment in theater where it is most needed. While this equipping approach has helped meet current operational needs, it has continued the cycle of reducing the pool of equipment available to nondeployed forces for responding to contingencies and for training. The Army National Guard estimates that since 2003, it has left more than 64,000 equipment items valued at over $1.2 billion overseas to support continuing operations, but the Army lacks visibility and cannot account for all this equipment and has not developed plans to replace it. According to Army officials, even though DOD policy requires the Army to replace equipment transferred to it from the reserve component for more than 90 days, the Army neither created a mechanism in the early phases of the war to track Guard equipment left in theater nor prepared replacement plans for this equipment because the practice of leaving equipment behind was intended to be a short-term measure. As operations continued, in June 2004, the Army tasked the Army Materiel Command with overseeing equipment retained in theater. 
However, according to Army and National Guard officials, the Army Materiel Command developed plans to track only certain high-demand equipment items that are in short supply, such as armored humvees and other items designated to remain in theater for the duration of the conflict. As of July 2005, the National Guard Bureau estimated that the Army Materiel Command was tracking only about 45 percent of the over 64,000 equipment items the Army National Guard units have left in theater. The tracking effort does not include over half of the equipment items, such as cargo trucks, rough terrain forklifts, and palletized load trucks, that Guard units have left behind and that were documented only at the unit level through unit property records, even though these items may remain in theater for up to 3 years. As a result, the Guard does not know when or whether its equipment will be returned, which challenges its ability to prepare and train for future missions. As operations have continued, the amount of Guard equipment retained in theater has increased and has hampered the ability of returning Guard units to maintain a high level of readiness and train new personnel. For example, according to Army National Guard officials, three Illinois Army National Guard military police units were required to leave almost all of their humvees, about 130, in Iraq when they returned home from deployment, so they could not conduct training to maintain the proficiency they acquired while overseas or train new recruits. In all, the National Guard reported that 14 military police companies left over 600 humvees and other armored trucks overseas, and these items are expected to remain in theater for the duration of operations. In May 2005, the Assistant Secretary of Defense for Reserve Affairs expressed concerns about the significant amount of equipment Army National Guard units have left overseas and directed the Army to develop replacement plans as required by DOD policy. 
The Army expects to complete its plans to replace stay-behind equipment by October 2005. While Army officials have stated that the equipment tracked by individual units may eventually be returned to the Guard, both Army and Army National Guard officials said that even if this equipment is eventually returned, its condition is likely to be poor given its heavy use, and some of it will likely need to be replaced. Until the Army develops plans to replace the equipment, including identifying timetables and funding sources, the National Guard will continue to face critical equipment shortages that reduce its readiness and it will be challenged to train and prepare for future missions. In the report we are publishing concurrently with this testimony, we recommended that DOD develop and submit to the Congress a plan and funding strategy that address the equipment needs of the Army National Guard for the Global War on Terrorism and how the Army will transition from short-term equipping measures to long-term equipping solutions. DOD agreed with this recommendation, stating in its written comments that the Army needs to determine how Army National Guard forces will be equipped to meet state disaster response and potential homeland defense requirements and include these requirements in its resource priorities. We believe that such a plan should address the measures the Army will take to ensure it complies with existing DOD directives to safeguard reserve component equipment readiness. While Army National Guard forces have supported a range of homeland security missions since September 11, 2001, states are concerned about the Guard’s ability to perform future domestic missions given its declining equipment status. For example, New Jersey officials told us that Army National Guard units lacked some essential equipment, such as chemical protective suits and nerve agent antidotes, that they needed to respond to a terrorist threat in December 2003. 
More recently, Louisiana Army National Guard units lacked some key items they needed to conduct large-scale disaster response. According to National Guard officials, at the time Hurricane Katrina hit the Gulf coast, much of the Guard’s most modern equipment was deployed to Iraq while less capable equipment remained in the United States. We are currently examining the federal response to Hurricane Katrina, including the roles of DOD’s active duty and reserve forces. At the time of the hurricane, over 8,200 personnel and two brigade sets of equipment from the 155th Armored Brigade of Mississippi and the 256th Infantry Brigade of Louisiana were deployed in support of Operation Iraqi Freedom and were not available to perform their domestic missions. Furthermore, the Adjutant General of Louisiana reported to the Army National Guard in August 2005 that, based on its analysis of the state Guard’s equipment for state missions, even after the 256th Infantry Brigade returned home from deployment, the brigade would lack about 350 essential equipment items needed for hurricane response, including trucks, humvees, wreckers, and water trailers, because it was required to leave a majority of its equipment items in Iraq. When we visited the area in October 2005, Louisiana National Guard officials particularly noted that more radios would have enabled them to communicate with other forces and that more vehicles that could be used in high water would have been very helpful. Louisiana and Mississippi, like many other states, have entered into mutual assistance agreements with other states to provide additional National Guard forces in times of need, typically to facilitate natural disaster response. Under such agreements, in August and September 2005, over 50,000 National Guard personnel from 48 states, 2 U.S. territories, and the District of Columbia responded to the devastation caused by Hurricanes Katrina and Rita in the Gulf Coast region. 
According to Louisiana officials, state partners were proactive in identifying troops to send to the area when the magnitude of the storm was anticipated. These forces brought with them additional equipment such as key command and control equipment and aviation assets. DOD and the Army have recognized the need to transform the Army National Guard to meet the new threats of the 21st century and support civil authorities, and are undertaking some initiatives to improve the Guard’s organization and readiness for these missions. However, it is too early to determine whether these initiatives together comprise a sustainable equipping and funding model for the future because implementation plans are not complete and funding strategies have not been fully identified. For example, the Army has not decided how to manage equipment to ready forces as they move through the proposed rotational force model. In addition, while DOD produced a strategy for homeland defense and civil support in June 2005, it has not yet completed a plan to implement that strategy, including clarifying the Army National Guard’s role and assessing what capabilities the Guard will require for domestic missions, as we previously recommended. Until these initiatives are more fully developed and key implementation decisions are made, DOD and the Congress will not be in a sound position to weigh their affordability and effectiveness, and the Army National Guard will be challenged to train and prepare for all its future missions. In 2004, the Army developed a plan to restructure Army forces, including the Army National Guard, to become more flexible and capable of achieving a wide range of missions, but it has not yet completed detailed implementation plans or cost estimates for its transformation. Rather than being organized around divisions, the Army will transform to an organization based on standardized, modular brigades that can be tailored to meet the specific needs of the combatant commander. 
Two primary goals of this new structure are to standardize designs and equipment requirements for both active and reserve units and maintain reserve units at a higher level of readiness than in the past. While the Army plans to convert most Army National Guard units to the modular organizational structure by 2008, Guard forces will not be fully equipped for the new design until 2011 at the earliest. The Army had originally planned to convert Guard units on a slower schedule by 2010, but at the request of the Army National Guard, accelerated the conversions so that Guard units would share the new standardized organizational designs with the active component at least 2 years earlier, which is expected to help avoid training soldiers for the previous skill mix and better facilitate recruiting and retention efforts. However, our work indicates that accelerated modular conversions will exacerbate near-term equipment shortfalls for three key reasons. First, according to current plans, units will be expected to convert to the new modular designs with the equipment they have on hand. However, because of existing shortages and the large number of equipment items that deployed units have left in Iraq or that need repair or replacement due to heavy use, units will not have the equipment needed for their new unit designs. For example, converted Guard units expect initially to be without some key equipment items that provide improved capabilities, such as unmanned aerial vehicles, single channel ground and airborne radio systems, and Javelin antitank missiles. Second, the Army has not planned funding to provide equipment based on the new conversion schedule. Instead, the Army plans to proceed with the original equipping schedule, which will not equip the Guard’s modular force until at least 2011. Army resourcing policy gives higher priority to units engaged in operations or preparing to deploy than those undergoing modular conversions. 
As a result, the requirements of ongoing operations will continue to deplete the Army National Guard’s equipment resources and will affect the pace at which equipment will be available for nondeployed units to transform to the modular design. In the meantime, modular Guard units are expected to continue using equipment that may be older than their active counterparts’ and will initially lack some key enablers, such as communications systems, which are the basis for the improved effectiveness of modular units. In addition to the equipment shortfalls and lack of comparability that are projected for near-term Guard conversions, the Army’s initial estimate of $15.6 billion through 2011 for converting Guard units to modular designs is incomplete and likely to grow for several reasons. First, the Army’s cost estimate was based on a less modern equipping plan than the design the Army tested for the new brigades. Second, the estimate does not include costs for 10 of the Guard’s support units, nor does it include nearly $1.4 billion that the Guard currently estimates is needed for military construction costs associated with the modular conversion of the Guard’s 40 support units. Third, current cost estimates assume that Guard equipment inventories will be at prewar levels and available for modular conversions. This, however, may not be a reasonable assumption because as discussed previously, Army National Guard units have left large amounts of equipment overseas, some of which will be retained indefinitely, and the Army has not provided plans for its replacement. The lack of complete equipping requirements and cost estimates for converting the Army National Guard to the new modular structure raises concerns about the affordability and effectiveness of this multibillion dollar restructuring effort. 
Furthermore, without more detailed data, the Congress may not have sufficient information to fully evaluate the adequacy of the Army’s funding requests for its modular force initiative. While the Army plans to transform into a rotational force, it has not yet finalized plans for how Army National Guard units will be equipped under its new model. The rotational force model is intended to provide units with a predictable cycle of increasing readiness for potential mobilization once every 6 years. As such, it involves a major change in the way the Army planned to use its reserve forces and has implications for the amount and types of equipment that Army National Guard units will need for training to improve their readiness as they progress through the cycle. Under the rotational force concept, rather than maintain units at less than full readiness, the Army would cycle Army National Guard units through three training phases, providing them with increasing amounts of equipment as they near readiness, with the goal of predictable availability for potential deployment once in a 6-year period. While the Army has developed a general proposal to equip units according to the readiness requirements of each phase of the rotational force model, it has not yet detailed the types and quantities of items required in each phase. Under this proposal, the Army National Guard would have three types of equipment sets: baseline sets, training sets, and deployment sets. The baseline set would vary by unit type and assigned mission, and the equipment it includes could be significantly reduced from the amount called for in the unit design; however, plans call for it to provide at least the equipment Guard units would need for domestic missions, although this standard has not been defined. 
Training sets would include more of the equipment units will need to be ready for deployment, but units would share equipment that would be located at training sites throughout the country. The deployment set would include all equipment needed for deployment, including theater-specific equipment, items provided through operational needs statements, and equipment from Army prepositioned stocks. At the time of our report, the Army was still developing the proposals for what would be included in the three equipment sets and planned to publish the final requirements in December 2005. At present, it is not clear how the equipment requirements associated with supporting deployment under the new rotational readiness cycle will affect the types and quantities of items available for converting the Army National Guard to a modular force. Until the near-term requirements for the rotational model and long-term requirements for a modular force are fully defined and integrated, the cost of equipment needed to most efficiently implement the two initiatives will not be clear. Without firm decisions as to requirements for both the new modular structure and rotational deployment model and a plan that integrates requirements, the Army and Army National Guard are not in a position to develop complete cost estimates or to determine whether the modular and rotation initiatives will maintain the Guard’s readiness for all its missions, including warfighting, homeland security, and traditional state missions such as disaster response. In our report, we recommend that DOD develop and submit to the Congress a plan for the effective integration of the Army National Guard into the Army’s rotational force model and modular initiatives. 
We recommended that this plan include the equipment requirements, costs, timelines, and funding strategy for converting Army National Guard units to the modular force and the extent to which the Army National Guard will have the types of equipment and equipment levels comparable to those of the active modular units. We further recommended that the plan include an analysis of the equipment the Army National Guard’s units will need for their missions in each phase of the rotational cycle and how the Army will manage implementation risks to modular forces if full funding is not provided on expected timelines. DOD agreed with our recommendation. In June 2005, DOD published its Strategy for Homeland Defense and Civil Support, which recognizes the National Guard’s critical role in these missions in both its federal and state capacities. However, the strategy does not detail what the Army National Guard’s role or requirements will be in implementing the strategy. DOD has not yet completed a review of the full range of the Army National Guard’s missions and the assets it will need to successfully execute them. In the absence of such requirements, National Guard units will continue to be structured and funded largely for their warfighting roles, and with the exception of certain specialized units, such as weapons of mass destruction civil support teams, Army National Guard forces are generally expected to perform civil support missions with either the resources supplied for their warfighting missions or equipment supplied by states. In its homeland defense and civil support strategy, DOD sets goals of (1) maximizing threat awareness; (2) deterring or defeating threats away from the U.S. 
homeland; (3) achieving mission assurance in performance of assigned duties under attack or after disruption; (4) supporting civil authorities in minimizing the damage and recovering from domestic chemical, biological, radiological, nuclear, or high-yield explosive mass casualty attacks; and (5) improving national and international capabilities for homeland defense and homeland security. The strategy recognizes the need to manage risks in the homeland defense and civil support mission areas given resource challenges the department faces in performing all its missions. Therefore, the strategy puts first priority on homeland defense missions that the department will lead, with second priority on ensuring the department’s ability to support civil authorities in the event of multiple mass casualties from chemical, biological, radiological, or nuclear incidents within the United States. To accomplish these goals, DOD has noted that it will have to integrate strategy, planning, and operational capabilities for homeland defense and civil support more fully into its processes. It plans to implement its strategy with dual-purpose forces that are simultaneously trained and equipped for warfighting and homeland missions. The strategy recognizes that National Guard forces not on federal active duty can respond quickly to perform homeland defense and homeland security missions within U.S. territory and are particularly well suited for civil support missions because of their locations across the nation and experience in supporting neighboring communities in times of crisis. Based on this strategy, U.S. Northern Command has been tasked to develop detailed contingency plans to identify the full range of forces and resources needed for the homeland missions DOD may lead or the civil support missions in which active or reserve forces should be prepared to assist federal or state authorities. However, it is not clear when this effort will be completed. 
DOD has taken some steps to develop additional information on the National Guard’s readiness for some of its domestic missions. In August 2005, the Under Secretary of Defense (Personnel and Readiness) directed the National Guard to include readiness assessments for both its Title 10 (federal missions) and Title 32 (state missions conducted with federal funding) in the department’s new readiness reporting system, the Defense Readiness Reporting System, which is scheduled for implementation in 2007. The new system is expected to provide officials with better visibility into unit readiness by reporting standardized metrics rather than general categories of readiness. The National Guard Bureau is also preparing a report for the Under Secretary of Defense (Personnel and Readiness) on concepts for reporting the Guard’s readiness for domestic missions and plans to prepare a detailed implementation plan by mid-January 2006. Until detailed concepts and implementation plans for domestic readiness reporting are developed and approved, it is not clear whether they will fully meet the recommendation in our prior report that DOD establish readiness standards and measures for the full range of the Guard’s homeland missions so that readiness for these missions can be systematically measured and accurately reported. As we reported in 2004, some states expressed concerns about the Army National Guard’s preparedness to undertake state missions, including supporting homeland security missions and disaster relief, given the increase in overseas deployments and the shortages of personnel and equipment among the remaining Guard units. Moreover, to meet new threats, some homeland security missions could require training and equipment, such as decontamination training and equipment, that differ from those needed to support warfighting missions. 
Some Guard officials noted that states have limited budgets and that homeland security requirements compete with other needs, although the states have funded some homeland security activities, such as guarding critical infrastructure, and have purchased some equipment for homeland security purposes. To address some potential homeland security needs, DOD began establishing weapons of mass destruction civil support teams within the Army National Guard, as authorized by Presidential Directive and the Congress in fiscal year 1999. These teams, which are composed of 22 full-time personnel, are maintained at high readiness levels and can respond rapidly to assist local officials in determining the nature of an attack, provide medical and technical advice, and help identify follow-on federal and state assets that might be needed. These teams are unique because they are federally funded and trained but perform their missions under the command and control of the state governor. In the wake of Hurricane Katrina, the Louisiana civil support team provided command and control technology that was valuable in responding to this natural disaster. While strategies such as transferring large numbers of Army National Guard personnel and equipment from non-deploying units to deploying units and leaving Guard equipment overseas have met DOD’s immediate needs to support overseas operations, these strategies are not sustainable in the long term, especially as increasing numbers of Army National Guard personnel have already been deployed for as long as 2 years, recruiting challenges have arisen, and equipment challenges have increased. The current status of the Army’s equipment inventory is one symptom of the much larger problem of an outdated business model. 
Critical shortages of deployable equipment and the Army’s lack of accountability over the Army National Guard’s equipment retained overseas have created considerable uncertainty about what equipment the Guard will have available for training and domestic missions, and DOD has not developed detailed plans that include timeframes and identify resources for replacing equipment that has been heavily used or left overseas in the short term. Without replacement plans for equipment its units left overseas, Army National Guard units are unable to plan for training and equipping forces for future missions. Moreover, without a broader rethinking of the basis for Army National Guard equipment requirements that considers both overseas and homeland security requirements, preparedness will continue to decline and the Guard may not be well positioned to respond to future overseas or homeland missions or contingencies. As a result, we are recommending that DOD develop an equipping strategy that addresses how the Army National Guard will transition from short-term equipping measures to long-term solutions. DOD and the Army are implementing some initiatives to transform the Army National Guard so that it can better support a broader range of missions in light of the new security environment characterized by new threats, including global terrorism. These initiatives include establishing modular brigades; establishing a rotational model that seeks to target equipment to a unit’s expected mission; and clarifying the Guard’s role, training, and equipment needs for homeland security missions. However, supporting ongoing operations will continue to strain Army National Guard equipment inventories, and, under current plans, equipping Guard units for new modular designs will take several years. 
Further, it is not clear that these initiatives will result in a comprehensive and integrated strategy for ensuring that the Army National Guard is well prepared for overseas missions, homeland security needs, and state missions such as responding to natural disasters. We are therefore making recommendations to DOD to better integrate these initiatives. In this regard, we believe that the Congress and senior DOD leadership must be ready to play a key role in pressing the Army to provide more detailed plans for these initiatives and to outline the specific funding required to implement them in the most efficient manner. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions you or other Members of the Committee may have. For more information regarding this testimony, please contact Janet St. Laurent, Director, at (202) 512-4402. Individuals making key contributions to this testimony include Margaret Morgan, Assistant Director; Frank Cristinzio; Alissa Czyz; Curtis Groves; Nicole Harms; Tina Morgan Kirschbaum; Kim Mayo; Kenneth Patton; Jay Smale; and Suzanne Wren. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since September 2001, the National Guard has experienced the largest activation of its members since World War II. Currently, over 30 percent of the Army forces in Iraq are Army National Guard members, and Guard forces have also carried out various homeland security and large-scale disaster response roles. However, continued heavy use of Guard forces has raised concerns about whether the Guard can successfully perform and sustain both missions over time. In the short term, the National Guard is seeking additional funding for emergency equipment. GAO was asked to comment on (1) the changing role of the Army National Guard, (2) whether the Army National Guard has the equipment it needs to sustain federal and state missions, and (3) the extent to which DOD has strategies and plans to improve the Army National Guard's business model for the future. The heavy reliance on National Guard forces for overseas and homeland missions since September 2001 has resulted in readiness problems that suggest the current business model for the Army National Guard is not sustainable over time. Therefore, the business model should be reexamined in light of the current and expected national security environment, homeland security needs, and fiscal challenges the nation faces in the 21st century. Under post-Cold War planning assumptions, the Army National Guard was organized as a strategic reserve to be used primarily in the later stages of a conflict after receiving additional personnel, equipment, and training. Therefore, in peacetime Army National Guard units did not have all the equipment and personnel they would need to perform their wartime missions. However, over 70,000 Guard personnel are now deployed for federal missions, with thousands more activated to respond to recent natural disasters. 
To provide ready forces, the Guard transferred large numbers of personnel and equipment among units, thereby exacerbating existing personnel and equipment shortages of non-deployed units. As a result, the preparedness of non-deployed units for future missions is declining. The need to reexamine the business model for the Army National Guard is illustrated by growing equipment shortages. As of July 2005, the Army National Guard had transferred over 101,000 equipment items to units deploying overseas, exhausting its inventory of some critical items, such as radios and generators, in non-deployed units. Nondeployed Guard units now face significant equipment shortfalls because (1) prior to 2001, most Army National Guard units were equipped with 65 to 79 percent of their required wartime items and (2) Guard units returning from overseas operations have left equipment, such as radios and trucks, for follow-on forces. The Army National Guard estimates that its units left over 64,000 items valued at over $1.2 billion overseas. However, the Army cannot account for over half of these items and does not have a plan to replace them, as DOD policy requires. Nondeployed Guard units now have only about one-third of the equipment they need for their overseas missions, which hampers their ability to prepare for future missions and conduct domestic operations. Without a plan and funding strategy that addresses the Guard's equipment needs for all its missions, DOD and Congress do not have assurance that the Army has an affordable plan to improve the Guard's equipment readiness. DOD is taking some steps to adapt to the new security environment and balance the Army National Guard's overseas and homeland missions. For example, the Army has embarked on a reorganization into a modular, rotational force. Also, DOD issued a strategy for homeland defense and civil support in June 2005. 
However, until DOD develops an equipping plan and funding strategy to implement its initiatives, Congress and DOD will not have assurance that these changes will create a new business model that can sustain the Army National Guard affordably and effectively for the full range of its future missions.
The following information discusses our continuing concerns about the long-term viability of the Single-Employer Fund and weaknesses in employee benefit plan audits and reporting. In addition, significant matters involving material weaknesses in internal controls are discussed in a separate section below. The Single-Employer Fund is able to meet its near-term benefit obligations because premium receipts presently exceed benefit payments and the Fund held investments having a market value of $7.2 billion and cash of $627 million at September 30, 1994. The Single-Employer Fund also reported a significant gain for the year, largely as a result of the effect of rising interest rates on the program’s benefit liabilities. However, the Fund’s unfunded $1.2 billion deficit, which represents a shortfall in assets needed to satisfy the Corporation’s benefit liabilities for terminated plans and for those plans considered likely to terminate, still constitutes a threat to the Fund’s long-term viability. In addition to the losses recorded in the financial statements and reflected in the unfunded deficit as of September 30, 1994, the Corporation disclosed $18 billion in estimated unfunded liabilities in single-employer plans that represent reasonably possible future losses. The Employee Retirement Income Security Act of 1974 (ERISA), which created the pension insurance program, established funding standards for insured plans but allowed benefits to become guaranteed before being funded by plan sponsors. The resulting timing difference has contributed, in large measure, to the Corporation’s exposure should a financially troubled plan sponsor be unable to meet its pension obligations. Moreover, the premium structure of the Single-Employer Fund has limited the Corporation’s ability to manage the exposure posed by underfunded plans because premiums paid by those plans have not fully covered the risks. 
In 1987, the Congress modified the Single-Employer Fund’s basic flat-rate premium structure by adding a supplemental variable rate premium which, for the first time, established a link between premiums and plan underfunding. The variable rate premium was based on the unfunded vested liability as calculated by the plan, after adjusting for a common interest rate, rather than the specific unfunded liability the Fund assumes should a plan actually terminate. However, as previously reported, the Single-Employer Fund often assumes a substantially larger liability upon termination than the last one calculated and reported by a plan. Also, the variable rate premium was subject to a maximum dollar amount that, when reached, effectively limited the risk-based linkage between premiums and plan underfunding. In addition, the Single-Employer Fund’s premium structure did not take into account the added risk of termination posed by underfunded plans sponsored by financially troubled companies. To address these concerns, the administration supported legislation proposed in the 103rd Congress to strengthen minimum funding standards by requiring sponsors to increase their contributions to underfunded defined benefit pension plans and phasing out the cap on variable rate premiums paid by underfunded plans. A modified version of this proposal, the Retirement Protection Act of 1994, became law on December 8, 1994, as part of legislation implementing the General Agreement on Tariffs and Trade (GATT). The Corporation anticipates that this legislation will significantly reduce underfunding in the plans that it insures and improve its financial condition. We have not assessed the long-term effects of this legislation on the Corporation’s deficit. However, the Corporation will need to monitor whether the legislation achieves the desired results. 
As we previously reported, weaknesses in the scope and quality of audits of employee benefit plans and the lack of plan reporting on internal controls reduce their effectiveness in safeguarding the interests of plan participants and the government. Under ERISA, the Department of Labor is responsible for establishing reporting and disclosure requirements and monitoring ongoing employee benefit plans, which include defined benefit pension plans insured by the Corporation. In past reviews of independent public accountants’ audits of employee benefit plans, we found severe weaknesses in both the quality and scope of plan audits that made their reliability and usefulness questionable. ERISA allows plan administrators to limit the scope of plan audits by excluding plan assets held by certain regulated institutions from the scope of the auditor’s work. Thus, in cases where the scope is limited, the auditor provides little or no assurance about the existence, ownership, or value of assets that may be material to the financial condition of those plans. In addition, plan auditors are not required to check the accuracy and completeness of pension insurance premium filings applicable to insured plans or related premium payments made to the Corporation. Finally, while plan administrators are responsible for establishing sound internal controls and for complying with applicable laws and regulations, ERISA does not require that either plan administrators or plan auditors report to regulators and participants on the effectiveness of internal controls. In our April 1992 report (GAO/AFMD-92-14), we recommended that the Congress eliminate ERISA’s limited scope audit provision and require plan administrators and auditors to report on internal controls. Legislation was introduced late in the 103rd Congress that would have eliminated limited scope audits, required peer review of auditors conducting plan audits, and required plan administrators and auditors to report irregularities. 
This proposed legislation would not have required plan administrators and auditors to report on internal controls. The legislation was not enacted, and as of February 15, 1995, the 104th Congress had not taken up similar legislation. Our work disclosed that the Corporation has continued to make progress in improving internal controls affecting its financial reporting. However, as of September 30, 1994, material weaknesses continued to exist in the Corporation’s internal control structure in the three areas reported in our previous audits: weaknesses in financial systems and related internal controls, inadequate controls over the assessment of the Multiemployer Fund’s liability for future financial assistance, and inadequate controls over nonfinancial participant data. Through substantive audit procedures, we were able to satisfy ourselves that the weaknesses discussed below did not have a material effect on the fiscal year 1994 and 1993 financial statements of the Single-Employer and Multiemployer Funds. However, these weaknesses could result in misstatements in future financial statements and other financial information if not corrected by management. These weaknesses could also have an adverse impact on management decisions based, in whole or in part, on information whose accuracy is affected by the deficiencies. Unaudited financial information, including budget information, reported by the Corporation or used as a basis for management’s operational decisions also may contain inaccuracies resulting from these weaknesses. We reported for fiscal years 1992 and 1993 that weaknesses in financial systems and related internal controls presented an unacceptable risk to the Corporation that material misstatements might occur in the Corporation’s financial information and not be detected promptly by the Corporation. 
During fiscal year 1994, the Corporation continued to take steps to strengthen internal controls and to address weaknesses in financial and management information systems. For example, the Corporation began testing the data supporting multiemployer plan requests for financial assistance to ensure that they were valid and adequately supported prior to providing the assistance, updated certain computer operations procedures, and began detail system design for a new core financial system incorporating the standard general ledger. However, as of September 30, 1994, the Corporation had not implemented sufficient financial reporting controls to compensate fully for its lack of financial system integration. Deficiencies in automated management and financial information systems continued to inhibit management’s ability to promptly and accurately accumulate and summarize the information needed for internal and external reports. Overall, the Corporation’s cumbersome and nonintegrated processes for preparing the financial and other management information needed to support operations and financial/budgetary reporting were time-consuming and labor-intensive. These conditions were due, in part, to shortcomings in systems development and operations, including the absence of a proven systems development methodology. Thus, system and control weaknesses exposed the Corporation to a significant risk that the information could be materially misstated. These weaknesses were discussed in greater detail in our previous reports. During fiscal year 1994, the Corporation placed into operation a new computer system to determine the multiemployer plan universe and identify financially troubled plans as part of its assessment of the Multiemployer Fund’s liability for future financial assistance. However, the new system’s security controls were not designed to effectively restrict access to program source code, executable programs, and data tables. 
Additionally, during system implementation, the Corporation did not maintain evidence to document that key financial and nonfinancial plan data were accurately and completely transferred into the new multiemployer system. In addition, as reported for fiscal year 1993, the Corporation did not review or properly supervise the process for determining which plans should be included in the universe of multiemployer plans, or address the accuracy of certain data utilized in identifying and assessing financially troubled multiemployer plans. As previously reported, the Corporation’s controls did not ensure the accuracy of nonfinancial participant data entered into the Pension and Lump Sum (PLUS) system. In processing a terminated pension plan, the Corporation obtains nonfinancial participant data (such as social security numbers and dates of birth and employment) and uses the data, in conjunction with other information, to initially determine participants’ guaranteed benefits. After the nonfinancial data are obtained and initial benefits are determined, the data are entered into the PLUS system automated database, which is used to respond to participant inquiries and administer other benefit services. The Corporation uses these data annually to value its benefit liability for participants whose data have been entered in PLUS. Inaccurate nonfinancial data can reduce the precision of the Corporation’s fiscal year-end liability valuation and delay the final calculation of participant benefits. Weaknesses in controls over nonfinancial participant data and related recommendations are discussed in the Pension Benefit Guaranty Corporation Inspector General Report No. 94-6/23079-1 and as updated in its report No. 95-5/23083-1. In our report (GAO/AIMD-94-109), we concurred with the Inspector General’s recommendations, which are designed primarily to strengthen the verification of participant data and the input and edit controls over participant data maintained in PLUS. 
During fiscal year 1993, the Corporation initiated efforts designed to improve the accuracy of certain aspects of nonfinancial participant data entered into the PLUS system. However, control weaknesses involving these data continued to exist for fiscal year 1994 because the Corporation had not made significant progress in improving procedures for obtaining and documenting participant data in a timely manner. Also, weaknesses existed in the Corporation’s verifying and editing of the nonfinancial participant data entered and maintained in the Corporation’s records and its PLUS database. We previously made recommendations for addressing each of the material internal control weaknesses discussed in this report. These recommendations called for strengthening internal controls over systems development/modification and integration, financial reporting, multiemployer financial assistance, and participant data. While the Corporation made progress during fiscal year 1994 in addressing these recommendations, these efforts have not been completed. The Corporation has stated its commitment to fully addressing the weaknesses disclosed in these reports. In our opinion, the accompanying financial statements present fairly, in all material respects, the financial position of the Single-Employer and Multiemployer Funds administered by the Pension Benefit Guaranty Corporation as of September 30, 1994 and 1993, and the results of their operations and cash flows for the fiscal years then ended, in accordance with generally accepted accounting principles. However, misstatements may nevertheless occur in other financial information reported by the Corporation as a result of the internal control weaknesses previously described. Furthermore, the Corporation’s assessment of the Multiemployer Fund’s exposure to liabilities for future financial assistance is subject to material uncertainties, whose eventual effects cannot be reasonably determined at present. 
Many complex factors must be considered to identify multiemployer plans which are likely to require future assistance and to estimate the amount of such assistance. These factors, which include the financial condition of the plans and their multiple sponsors, will be affected by future events, most of which are beyond the Corporation’s control. We evaluated management’s assertion about the effectiveness of its internal controls designed to: safeguard assets against loss from unauthorized use or disposition; assure the execution of transactions in accordance with management authority and with laws and regulations that have a direct and material effect on the financial statements or that are listed by OMB and could have a material effect; and properly record, process, and summarize transactions to permit the preparation of reliable financial statements in accordance with generally accepted accounting principles and to maintain accountability for assets. In its 1994 report on internal controls, the Corporation’s management fairly stated that internal controls in effect on September 30, 1994, did not provide reasonable assurance that the Corporation properly recorded, processed, and summarized transactions to permit the preparation of financial statements in accordance with generally accepted accounting principles. However, controls in effect on September 30, 1994, provided reasonable assurance that assets were safeguarded against loss from unauthorized use or disposition and that transactions were executed in accordance with management’s authority and significant provisions of selected laws and regulations. Management made this assertion, which is included in appendix III, using the internal control and reporting criteria set forth in the Federal Managers’ Financial Integrity Act (FMFIA) and implementing guidance. In making this assertion, management considered the material weaknesses we found. 
While the Corporation made progress in addressing the reportable conditions identified and discussed with the Corporation during our fiscal year 1993 audit, our audit for fiscal year 1994 found that one of these reportable conditions continued to exist. Although this reportable condition is not considered a material weakness, it represents a significant deficiency in the design or operation of the Corporation’s internal controls and should be corrected. The Corporation’s controls over documentation supporting participant data maintained on PLUS were inadequate. In many cases, the Corporation was unable to provide documentation supporting the nonfinancial participant data entered on PLUS. In addition, the Corporation was not always able to demonstrate that procedures designed to support the accuracy of PLUS data were performed. Without proper supporting documentation, the Corporation may be unable to demonstrate the accuracy of PLUS data used to value the Corporation’s liability for terminated plans. This reportable condition and related recommendations are discussed further in the Pension Benefit Guaranty Corporation Inspector General Report No. 94-6/23079-1 and as updated in its report No. 95-5/23083-1. In our report (GAO/AIMD-94-109), we concurred with the Inspector General’s recommendations and recommended that the Corporation implement them. The Corporation agreed with the recommendations but its intended corrective actions had not progressed sufficiently to prevent the documentation weakness identified by the audit. In addition to the material weaknesses and reportable condition described in this report, we noted other less significant matters involving the Corporation’s internal control structure and its operations which we will be reporting separately to the Corporation’s management. Similarly, in addition to the material weakness and reportable condition described in Pension Benefit Guaranty Corporation Inspector General Report No. 
95-5/23083-1, other less significant matters related to the Corporation’s internal control structure over its liability for future benefits on terminated plans will be reported separately to management by the Corporation’s Inspector General. Our tests of compliance with significant provisions of selected laws and regulations disclosed no material instances of noncompliance. Commenting on a draft of this report, the Corporation’s Executive Director agreed with our findings. The Executive Director’s written comments, provided in appendix IV, discuss the Corporation’s ongoing efforts to address the internal control weaknesses and respond to our previous recommendations. We plan to monitor the adequacy and effectiveness of these efforts as part of follow-up audits of the Corporation’s financial statements. The Corporation’s management is responsible for preparing the annual financial statements of the two funds in conformity with generally accepted accounting principles; establishing, maintaining, and assessing the internal control structure to provide reasonable assurance that the broad control objectives of FMFIA are met; and complying with applicable laws and regulations. We are responsible for obtaining reasonable assurance about whether (1) the Corporation’s financial statements are reliable (free of material misstatement and presented fairly in conformity with generally accepted accounting principles) and (2) management’s assertion about the effectiveness of internal controls is fairly stated in all material respects based upon the control criteria in GAO’s Standards for Internal Controls in the Federal Government required by the Federal Managers’ Financial Integrity Act. We are also responsible for testing compliance with significant provisions of selected laws and regulations and for performing limited procedures with respect to certain other information appearing in this financial statement. 
In order to fulfill these responsibilities, we examined, on a test basis, evidence supporting the amounts and disclosures in the financial statements of each of the two funds; assessed the accounting principles used and significant estimates made by the Corporation’s management; evaluated the overall presentation of the financial statements; obtained an understanding of the internal control structure related to safeguarding assets, compliance with laws and regulations (including execution of transactions in accordance with budget authority), and financial reporting, and assessed control risk; tested relevant internal controls and evaluated management’s assertion about the effectiveness of internal controls; and tested compliance with selected provisions of the following laws and regulations: the Employee Retirement Income Security Act of 1974, as amended, and the Chief Financial Officers Act of 1990. The provisions selected for testing included, but were not limited to, those relating to benefit guarantees and financial assistance; the availability of, accounting for, and use of funds; the preparation and issuance of financial statements; and the management of premiums and the assessment of related interest and penalties. We also conducted tests of compliance with the Anti-Deficiency Act that were limited to comparing the Corporation’s recorded payments to related authorized limitations on certain payments and apportionments. In fulfilling our responsibilities, we have relied on audit work performed by an independent public accounting firm under the direction of the Corporation’s Inspector General. The scope of this work, performed in conjunction with our audit, included an audit of the Corporation’s liabilities for future benefits on terminated plans and related losses, expenses, and cash flows, as well as related internal controls and compliance. We worked with the Inspector General to establish the scope of the work. 
We reviewed the work and concur with its scope, opinions, conclusions, and recommendations, which are presented in Pension Benefit Guaranty Corporation Inspector General Report No. 95-5/23083-1. We did not evaluate all internal controls relevant to operating objectives as broadly defined by FMFIA, such as those controls relevant to preparing statistical reports and ensuring efficient operations. We limited our internal control testing to accounting and other controls necessary to achieve the objectives outlined in our opinion on management’s assertion about the effectiveness of internal controls. Because of inherent limitations in any internal control structure, losses, noncompliance, or misstatements may nevertheless occur and not be detected. We also caution that projecting our evaluation of controls to future periods is subject to the risk that controls may become inadequate because of changes in conditions or the degree of compliance with controls may deteriorate. Our audit was conducted pursuant to provisions of 31 U.S.C. 9105, as amended, and in accordance with generally accepted government auditing standards. We believe our audit provides a reasonable basis for our opinions. Accounting and Information Management Division, Washington, D.C. Helen Desaulniers, Attorney 
Pursuant to a legislative requirement, GAO audited the Pension Benefit Guaranty Corporation's (PBGC) Single-Employer Fund and Multiemployer Fund for the fiscal years ended September 30, 1994 and 1993 and evaluated PBGC internal controls and compliance with laws and regulations. GAO found that: (1) PBGC financial statements were reliable in all material aspects; (2) weaknesses in PBGC internal controls did not have a material effect on the Corporation's financial statements; (3) PBGC internal controls did provide reasonable assurance that assets were safeguarded from material loss and transactions were executed in accordance with managerial and legal requirements; (4) there was no reportable noncompliance with laws and regulations; (5) while the Single-Employer Fund is able to meet its near-term benefit obligations, it has an unfunded deficit of $1.2 billion; and (6) PBGC has made progress in improving internal controls, but weaknesses remain in financial systems, controls over the assessment of the Multiemployer Fund's liability for future financial assistance are inadequate, and controls over nonfinancial participant data entered into the Pension and Lump Sum system are also inadequate.
Refundable tax credits (RTC) differ from other credits because a taxpayer is able to receive a refund check from IRS for the amount by which the credit exceeds their tax liability. For example, a person who owed $2,000 in taxes but qualified for $3,000 in EITC would receive a $1,000 refund from IRS. A nonrefundable credit can be used to offset tax liability, but any excess of the credit over the tax liability is not refunded to the taxpayer. If, instead of claiming the EITC, that same person claimed $3,000 in a nonrefundable credit, the person would use $2,000 to reduce the tax liability to zero, but would not receive the remaining credit amount as a refund. According to the Congressional Budget Office (CBO), the number and costs associated with refundable tax credits have varied over the past 40 years. The first refundable credit, the EITC, was enacted in 1975. In 1998, additional RTCs became effective and by 2010 there were 11 different refundable tax credits. The cost of refundable tax credits peaked in 2008 at $238 billion, but declined over the next 4 years because of the expiration of several credits designed to provide temporary economic stimulus. Starting in 2014, the refundable Premium Tax Credit (PTC) was made available to some low-income households for the purchase of health insurance through newly created exchanges, as part of the Patient Protection and Affordable Care Act (PPACA). According to estimates from the Joint Committee on Taxation (JCT) and CBO, the cost of the PTC in its first year was $35 billion and will be about $110 billion by 2021. In 2015, there were five refundable credits in effect. Four of those were available to individuals—the EITC, ACTC, AOTC, and PTC. We issued a report last year assessing IRS’s implementation of PPACA requirements, including efforts to verify taxpayers’ PTC claims. This report focuses on the design and administration of the other three refundable tax credits available to individuals. 
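The refundable/nonrefundable distinction above is simple arithmetic; a minimal Python sketch (illustrative only, not IRS computation logic; the function names are our own) makes the difference in the report's example explicit:

```python
# Illustrative sketch: how a refundable credit differs from a
# nonrefundable one for the same taxpayer. Not actual IRS logic.

def apply_nonrefundable(liability, credit):
    """Return (remaining liability, refund). Excess credit is lost."""
    offset = min(liability, credit)
    return liability - offset, 0

def apply_refundable(liability, credit):
    """Return (remaining liability, refund). Excess credit is refunded."""
    offset = min(liability, credit)
    return liability - offset, credit - offset

# The report's example: $2,000 owed, $3,000 credit.
print(apply_refundable(2000, 3000))     # liability cleared, $1,000 refunded
print(apply_nonrefundable(2000, 3000))  # liability cleared, excess lost
```

When the credit is smaller than the liability, the two behave identically; the distinction only matters once the credit exceeds the tax owed.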
Congress enacted the EITC in 1975 to offset the impact of Social Security taxes on low-income families and encourage low-income families to seek employment rather than public assistance. The credit was also meant to encourage economic growth in the face of a recession and rising food and energy prices. Since the credit’s enactment, it has been modified to provide larger refunds and differentiate between family size and structure. In fiscal year 2013, taxpayers received $68.1 billion in EITC; an average amount of $2,362 was distributed to about 29 million taxpayers. Beginning in 1979, the credit was also available as an advance credit. This meant that filers had the option to receive their predicted credit in smaller payments throughout the preceding year and reconcile the amount received with the amount they were actually eligible for upon filing their taxes. However, as we reported, the advanced payment option had a low take-up rate of 3 percent and high levels of noncompliance (as many as 80 percent of recipients did not comply with at least one of the program requirements), which led to its repeal in 2010. The EITC divides the eligible population into eight different groups based on the number of eligible children claimed by the filer and filing status. The basic structure of the credit remains the same for each group: the credit phases in as a percentage of earned income; upon reaching the maximum benefit, the credit plateaus; and when income reaches a designated point, the benefit begins to phase out as a percentage of income. The phase-in and phase-out rates, maximum benefit, and phase-out point all differ depending on filing status (such as single or married filing jointly) and the number of eligible children claimed. In order to claim the EITC, the tax filer must work and have earnings that do not exceed the phase-out income of the credit. Additional eligibility rules apply to any children that a tax filer claims for the purpose of calculating the credit. 
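The phase-in/plateau/phase-out structure described above can be sketched as a simple piecewise function. The parameter values below are hypothetical placeholders, not the actual schedule; the real rates, maximum benefit, and phase-out point differ across the eight filing-status/child-count groups:

```python
# Sketch of the EITC's trapezoid-shaped structure: phase-in as a
# percentage of earnings, a plateau at the maximum benefit, then a
# phase-out. Parameter defaults are HYPOTHETICAL, not the real schedule.

def eitc_credit(earned_income, phase_in_rate=0.34, max_credit=3359,
                phase_out_start=18000, phase_out_rate=0.16):
    if earned_income <= 0:
        return 0.0
    # Phase-in: credit grows with earnings, capped at the plateau.
    credit = min(phase_in_rate * earned_income, max_credit)
    # Phase-out: beyond the designated point, the credit shrinks with income.
    if earned_income > phase_out_start:
        credit -= phase_out_rate * (earned_income - phase_out_start)
    return max(credit, 0.0)
```

Plotted against earnings, this produces the rising edge, flat top, and falling edge that characterize each of the eight EITC groups; only the corner points and slopes differ between groups.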
A qualifying child must meet certain age, relationship, and residency requirements. For example, the child must be younger than 19 (or 24 if a full-time student) and be a biological, adopted, or foster child, grandchild, niece/nephew, or sibling of the filer and live with the filer in the United States for at least 6 months of the year. Additionally, the child must have a valid Social Security number (SSN). The Improper Payments Information Act (IPIA) of 2002, as amended, requires federal agencies to review programs and activities that may be susceptible to significant improper payments and report on actions taken to reduce improper payments. In addition, the Office of Management and Budget (OMB) identifies high-priority (or high-risk) programs, one of which is EITC, for greater levels of oversight and review. For fiscal year 2015, IRS estimated that $15.6 billion—or 23.8 percent—of EITC program payments were improper. The estimated improper payment rate for EITC has remained relatively unchanged since fiscal year 2003 (the first year IRS had to report estimates of these payments to Congress), but the amount of improper EITC payments increased from an estimated $10.5 billion in fiscal year 2003 to nearly $16 billion in fiscal year 2015 because of growth in the EITC program overall. The Additional Child Tax Credit (ACTC) is the refundable portion of the Child Tax Credit (CTC) and provides tax relief to low-income families with children. It also adds to the positive reward the EITC provides to those who work. The credit was initially created by the Taxpayer Relief Act of 1997 as a nonrefundable child tax credit for most families, but in 2001 was expanded to include the current refundable ACTC for which more low-income families were eligible. 
Like the EITC, taxpayers can use the child tax credits to both offset tax liabilities (CTC) and receive a refund (ACTC); however, unlike the EITC, the nonrefundable CTC and the refundable ACTC amounts are entered separately on the Form 1040. In fiscal year 2013, taxpayers claimed $27.9 billion in ACTC and $27.2 billion in the nonrefundable CTC. Thus, the total revenue cost of the CTC and ACTC was $55.1 billion. This report will sometimes combine these credits (referring to them as CTC/ACTC) when their combined effect is at issue or to facilitate comparison with other RTCs that do not break out refundable and nonrefundable components. In general, the ACTC is claimed by those with lower tax liabilities and lower income than those that claim only the CTC. As reported by the SOI Division of the Internal Revenue Service, in 2012, 88 percent of the ACTC went to taxpayers with adjusted gross income below $40,000, while 17 percent of the CTC went to taxpayers below that income. Under current law, taxpayers can use the CTC to offset their tax liabilities by up to $1,000 per qualifying child. If the available CTC exceeds the filer’s tax liability, they may be able to receive a portion of the unused amount through the refundable ACTC. The ACTC phases in at 15 percent of every dollar in earnings above $3,000 up to the unused portion of the CTC amount. To claim the CTC or ACTC, taxpayers must have at least one qualifying child. The criteria for qualifying children are slightly different from those used to determine eligibility for the EITC. For the CTC and ACTC, the child must be under the age of 17 and a U.S. citizen, national, or resident, but taxpayers file using either an SSN or individual taxpayer identification number (ITIN). However, the relationship and residency requirements are similar for the ACTC and EITC. See figure 1 for a description of the credits and their requirements. 
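The CTC/ACTC split described above — the CTC offsets liability up to $1,000 per qualifying child, and the ACTC phases in at 15 percent of earnings above $3,000, capped at the unused CTC — can be sketched as follows. This is a simplified illustration (the function name is ours) that omits the high-income phase-out and the alternative computation available to families with three or more children:

```python
# Simplified sketch of the nonrefundable CTC / refundable ACTC split.
# Omits the high-income phase-out and the 3+-children alternative rule.

def ctc_actc(earned_income, tax_liability, qualifying_children):
    total_ctc = 1000 * qualifying_children
    ctc_used = min(total_ctc, tax_liability)          # nonrefundable portion
    unused = total_ctc - ctc_used
    earned_phase_in = 0.15 * max(earned_income - 3000, 0)
    actc = min(unused, earned_phase_in)               # refundable portion
    return ctc_used, actc
```

For a family with $20,000 in earnings, $500 of tax liability, and two qualifying children, the CTC wipes out the $500 liability and the remaining $1,500 is received as the refundable ACTC, since 15 percent of earnings above $3,000 ($2,550) more than covers it.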
The American Opportunity Tax Credit (AOTC) offsets certain higher-education-related expenses in an effort to lessen the financial burden of a college or professional degree for taxpayers and their dependents. The credit was created by the American Recovery and Reinvestment Act of 2009 as a modification of the nonrefundable Hope Credit and was made permanent in 2015 with the Protecting Americans from Tax Hikes (PATH) Act. In 2013, taxpayers claimed $17.8 billion in AOTC. The AOTC is designed as a partially refundable credit. The entire credit is worth up to $2,500 and a taxpayer can receive a refundable credit equal to 40 percent of their credit (for a maximum of $1,000). The size of the entire credit is determined by taking 100 percent of the first $2,000 in qualified education expenses and 25 percent of the next $2,000 in qualified expenses, which include tuition, required enrollment fees, and course materials. The value of the limit on expenses qualifying for the credit is not indexed for inflation. In order to claim the AOTC, a tax filer or their dependent must meet certain requirements, including adjusted gross income requirements. Furthermore, they must be in their first 4 years of enrollment and be at least a half-time student at an eligible postsecondary school. Taxpayers may only claim the AOTC for 4 years. More taxpayers claim the EITC than the other two refundable credits we examine in this report. The EITC is also the most expensive in terms of tax revenue forgone and refunds paid. In 2013, taxpayers claimed a total of $68.1 billion in EITC with $59 billion (87 percent) of this amount refunded; the total was $55.1 billion for the CTC and ACTC with $26.7 billion (48 percent) refunded as ACTC and a total of $17.8 billion in AOTC with $5 billion refunded (28 percent). 
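The AOTC formula described above — 100 percent of the first $2,000 of qualified expenses plus 25 percent of the next $2,000, with 40 percent of the resulting credit refundable — can be sketched directly. This is an illustrative simplification (the function name is ours) that omits the adjusted gross income phase-outs:

```python
# Sketch of the AOTC computation: 100% of the first $2,000 of qualified
# expenses plus 25% of the next $2,000; 40% of the credit is refundable.
# AGI phase-outs are omitted for simplicity.

def aotc(qualified_expenses):
    credit = (min(qualified_expenses, 2000)
              + 0.25 * min(max(qualified_expenses - 2000, 0), 2000))
    refundable_portion = 0.40 * credit  # at most $1,000 of the $2,500 maximum
    return credit, refundable_portion
```

At $4,000 or more in qualified expenses the credit tops out at $2,500, of which up to $1,000 is refundable; below $2,000 in expenses, the credit is dollar-for-dollar.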
There are several reasons why the ratio between the amount received as tax refunds and the amount used to offset tax liabilities varies from credit to credit, including whether the credits are partially or fully refundable as well as the income levels of the recipients. The number of taxpayers claiming the earned income credit increased 50 percent from 1999 to 2013, and the total amount claimed after adjusting for inflation increased 60 percent, due in part to legislative changes which increased the number of people eligible for the credit and the amount they could claim. Over that same period, the ACTC also increased, with 20 times more taxpayers receiving the credit in 2013 than in 1999. The AOTC did not see similar constant growth. See figures 2 and 3 for the number of taxpayers claiming credits and the amounts of credits received over time. As figure 4 shows, a greater share of EITC benefits goes to lower-income taxpayers. More than half (62 percent) of EITC benefits go to taxpayers making less than $20,000, with the largest share (48 percent) going to those making from $10,000 to less than $20,000. For the other credits, the benefits are spread more evenly among income groups. The CTC and AOTC do not have the same income restrictions as the EITC, so higher-income taxpayers also benefit from those credits. For example, taxpayers making $100,000 or more receive 22 percent of the AOTC. Figure 4 also shows the percent of each credit claimed by adjusted gross income (AGI). Examined separately from the nonrefundable CTC, the ACTC also benefits lower-income groups, but is less concentrated on the lowest income groups than the EITC, with 42 percent going to taxpayers making less than $20,000. (See figure 11 in appendix III for a comparison of CTC and ACTC benefits by AGI.) 
In addition to being lower income, EITC and ACTC claimants are more likely to be sole proprietors—persons who own unincorporated businesses by themselves—and to be heads of households than the general taxpayer population. As table 1 shows, 16 percent of taxpayers are sole proprietors, but they represent 25 percent of EITC and ACTC claimants. (Additionally, but not shown in the table, 29 percent of all EITC dollars go to sole proprietors.) EITC and ACTC are claimed mostly by heads of households. While people filing as head of household make up only 15 percent of the taxpayer population, they represent 56 percent of ACTC claimants and 47 percent of EITC claimants. AOTC claimants, on the other hand, are most likely to be married filing jointly (43 percent) or single (34 percent). Workers without qualifying children, or childless workers, make up 25 percent of EITC claimants, but receive 3 percent of benefits. Table 1 shows additional detail on how these characteristics differ across the three credits. IRS relies on pre-refund controls and filters to detect, prevent, and correct errors, a selection of which is shown in figure 5. Before accepting a return, IRS checks it for completeness and attempts to verify the taxpayer’s identity and credit eligibility. A series of systems use IRS and other government data to check whether returns meet certain eligibility requirements (like whether earned income falls within EITC income limits) and include the required forms (such as a Schedule EIC). IRS can use its math error authority (MEA) to correct or request information on electronic returns with these errors. During return processing, IRS runs returns through additional systems to screen for fraud and errors. One system, IRS’s Electronic Fraud Detection System (EFDS), screens returns for fraud including possible identity theft. If flagged, IRS stops processing the return and sends a letter asking the taxpayer to confirm his or her identity. 
Another system—the Dependent Database (DDb)—incorporates IRS and other government data, such as the National Prisoner File or child custody information from the Department of Health and Human Services, along with rules and scoring models to identify questionable tax returns and further detect identity theft. Once the suspicious tax returns are identified, the DDb assigns a score to each tax return. Based in large part on these scores, as well as available resources, IRS selects a portion of suspicious returns for correspondence audits, which are audits conducted through the mail. IRS conducts most of its EITC audits (about 80 percent) and ACTC audits (about 64 percent) prior to issuing refunds. In these pre-refund audits, IRS freezes the refund and sends a letter to the taxpayer requesting documentation such as birth certificates or school or medical records to verify eligibility. During the audit process, IRS will also freeze and examine other refundable credits claimed on the return. See table 2 for a description of how many audits IRS selects specifically for each credit and the total amount audited including returns selected for other reasons. IRS’s compliance activities continue after it issues refunds. In addition to post-refund audits, IRS also conducts the automated underreporter program (AUR), which matches income data reported on a tax return with third-party information about income and expenses provided to IRS by employers or financial institutions. In 2014, this document matching review process included just over 1 million EITC returns and IRS recommended $1.5 billion in additional tax. Lack of third-party data complicates IRS’s ability to administer these credits, but such data are not easy to identify. According to IRS, the data it uses should be complete and accurate enough to allow IRS to select returns with the highest potential for change without placing an undue burden on taxpayers. 
IRS reported that it evaluated several different databases to determine if they were reliable enough to be used under MEA to make changes to tax returns without going through the audit process. For example, IRS tested the Federal Case Registry (FCR), a national database that aids the administration and enforcement of child support laws. IRS determined that it could not identify errors related to qualifying children from this database with enough accuracy under its standards. In addition, IRS participated in a project led by Treasury and conducted by the Urban Institute that assessed the overall usefulness of state-level benefit data to help validate EITC eligibility. The study concluded, based on a number of issues, including different data collection practices across states, that these data would not improve the administration of the EITC. Without data reliable enough to be used under MEA, IRS generally conducts a correspondence audit to verify that a taxpayer meets the requirements for income and that their children meet both residency and relationship requirements. Audits are more costly than issuing MEA notices and they can be lengthy. For example, in 2014 it cost IRS on average $.21 to process an electronic return (including issuing math error notices), while an EITC audit cost $410.74. However, as mentioned above, cost savings should be weighed against other goals such as fairness and burden on taxpayers. More EITC claimants make income errors than qualifying children errors, but the dollar value of the errors due to noncompliance with qualifying children requirements is larger than the dollar value of the income errors. Verifying eligibility with residency and relationship requirements can be complicated and subject to interpretation. 
IRS offers training to tax examiners on various types of documentation that could be used to verify EITC requirements, and tax examiners are allowed to use their judgment to evaluate whether residency or relationship requirements are satisfied. This lack of available, accurate, and complete third-party data complicates IRS’s efforts to verify qualifying children eligibility requirements, increasing IRS’s administrative costs and taxpayer burden. Filing and refund timelines also complicate IRS’s ability to administer these credits. IRS states on its website that more than 90 percent of refunds are issued within 21 days. It is important that IRS issue refunds on time; when refunds are late, taxpayers must wait longer for their money and IRS is required to pay interest on the delayed refunds. However, it is also important to allow enough time to ensure refunds are accurate and issued to the correct individuals. The IRS strategy with respect to improper payments is to intervene early to ensure compliance through outreach and education efforts as well as various compliance programs. Even so, in order to meet timeliness goals, IRS issues most refunds months before receiving information returns, such as the W-2, and matching them to tax returns, rather than holding refunds until all compliance checks can be completed. As a result, IRS ends up trying to recover fraudulent refunds and unpaid taxes after matching information and pursuing discrepancies. We previously reported that, in 2010, it took IRS over a year on average to notify taxpayers of matching discrepancies, increasing taxpayer burden. In August 2014, we recommended that IRS estimate the costs and benefits of accelerating W-2 deadlines and identify options to implement pre-refund matching using W-2 data as a method to combat the billions of dollars lost to identity theft refund fraud, allowing the agency more opportunity to match employers’ and taxpayers’ information. 
In response to our recommendation, IRS conducted such a study and presented the results to Congress in 2015. In December 2015, Congress moved the W-2 filing deadlines to January 31 and required IRS to take additional time to review refund claims based on the EITC and the ACTC. As such, most individual taxpayers who claim either credit would not receive a refund prior to February 15. JCT estimated that the entire provision will result in $779 million in revenue from fiscal years 2016 to 2025. According to IRS officials, they are evaluating how to implement these changes and the impact on the administration of the credits. The complexity of eligibility requirements, besides being a major driver of noncompliance and complicating IRS’s ability to administer these credits, is also a major source of taxpayer burden. For example, for the EITC and ACTC, each child must meet certain age, residency, and relationship tests. However, given complicated family relationships, determining whether children meet these eligibility requirements is not always clear-cut, nor easily understood by taxpayers. This is especially true when filers share responsibility for the child with parents, former spouses, and other relatives or caretakers, as the following examples illustrate. Examples of Complications that Can Arise when Applying the EITC Eligibility Rules Scenario 1: A woman separated from and stopped living with her husband in January of last year, but they are still married. She has custody of their children. She is likely eligible for the Earned Income Tax Credit (EITC) because she can file using the head of household status. However, if the couple separated in November, she is likely not eligible for the EITC because she was not living apart from her husband for the last 6 months of the year and therefore cannot claim the head of household filing status. Scenario 2: An 18-year-old woman and her daughter moved home to her parents’ house in November of last year. 
She is likely eligible for the EITC because she was supporting herself and her child. However, if she always lived at her parents' house, she is likely NOT eligible for the EITC because she was a dependent of her parents for the full tax year and therefore cannot claim the EITC on her own behalf.

Scenario 3: A young man lives with and supports his girlfriend and her two kids. He and the mom used to be married, got divorced, and are now back together. He is likely eligible for the EITC because the children are his stepchildren and therefore meet the relationship requirement. However, if he and the mom were never married, he is likely NOT eligible for the EITC because the children are not related to him.

Differences in eligibility requirements among the RTCs also contribute to complexity. In 2013, according to our analysis of IRS data, 11.4 million taxpayers claimed both the EITC and ACTC while another 5.3 million claimed the EITC, ACTC, and CTC, navigating multiple sets of requirements for income levels and child qualifications. We have also previously reported that the complexity of education credits like the AOTC means that some taxpayers do not make optimal choices about which education credits to claim.
Faced with these complexities, many potential credit recipients seek help filing their tax returns, typically from paid preparers. Fifty-four percent of taxpayers claiming the EITC use paid preparers to help them navigate these requirements and complete the tax forms. These preparers provide a service that saves taxpayers time and resources and relieves anxiety about the accuracy of their returns. However, paid preparers may add to taxpayers' burden if their fees are excessive or their advice is inaccurate. As we previously reported, the fees charged for tax preparation services vary widely and may not always be explicitly stated upfront. As noted later in this report, unenrolled paid preparers—those generally not subject to IRS regulation—have higher error rates for the RTCs than taxpayers who choose to prepare their own returns. Taxpayers who prepare their own returns file a tax return (some version of Form 1040) along with additional forms, such as the Earned Income Credit schedule, Schedule 8812 for the CTC, or Form 8863 to claim education credits. To determine both eligibility and the amount of the credit, taxpayers can consult separate worksheets included with the forms. These can be long and detailed; Publication 596, which includes instructions and worksheets for claiming the EITC, is 37 pages long. IRS reported that most taxpayers who self-prepare use tax software when they file their returns and that, on average, the burden for RTC returns was about 11 hours per return in 2013. In addition to the costs of filing a claim for a credit, complying with IRS enforcement activities also contributes to taxpayer burden. In tax year 2013, IRS rejected over 2 million electronically filed EITC claims. IRS rejects these claims for a variety of reasons, such as missing forms, incorrect SSNs, or another taxpayer having claimed the same child.
Taxpayers can handle some of these issues, such as a mistyped SSN, by correcting their electronic returns. IRS reported that a majority (74.4 percent) of rejected returns are corrected and resubmitted electronically. IRS also reported that this process takes taxpayers on average half an hour—shorter than if they had to make the correction after filing. Other issues impose a larger burden. To claim a child that someone else has already claimed for the EITC, taxpayers can fill out and resubmit their return on paper and then face a possible audit with its associated costs. When processing the tax return, if IRS identifies potential noncompliance with eligibility requirements, it can initiate a correspondence audit and send a letter to the taxpayer requesting documentation showing that the taxpayer meets those eligibility requirements. For taxpayers overall, IRS estimated that participating in a correspondence exam takes 30 hours, which, combined with any out-of-pocket costs, is valued on average at $500. In 2015, IRS conducted just under 446,000 EITC exams, which means that approximately 1.6 percent of people filing an EITC claim were audited, compared to about 0.9 percent for individual taxpayers overall in 2014. However, this compliance burden may be larger for some populations. For example, according to attorneys who represent low-income tax filers, these filers may have difficulty proving they meet residency and relationship requirements due in part to language barriers, limited computer literacy, and complicated family structures. To prove the residency requirement—that a child lived with the taxpayer in the United States for more than half the year—taxpayers may submit a document with their address, name, and the child's name, such as school or medical records or statements on letterhead from a child-care provider, employer, or doctor.
Again, according to low-income tax clinic representatives, these documents can be hard to cobble together for families with limited English proficiency or who move multiple times throughout the year. To prove the relationship requirement, unless they are claiming their son or daughter, taxpayers must submit birth certificates proving the relationship. For example, to claim a great-grandchild, the taxpayer must submit the child's, grandchild's, and great-grandchild's birth certificates. The names must match across the birth certificates; otherwise, the taxpayer will also need to submit another type of document, such as a court decree or paternity test. For multigenerational families or situations in which another relative is taking care of the child, locating and assembling the necessary chain of birth certificates can be a challenge. If IRS determines that a taxpayer improperly claimed the EITC due to reckless or intentional disregard of rules or regulations, it may ban the taxpayer from claiming the credit for 2 years—even if the taxpayer otherwise qualifies for it. However, the National Taxpayer Advocate reported that IRS's procedures automatically imposed the ban on taxpayers who did not respond to IRS's notices and put the burden of proof on taxpayers to show they should not have received the ban. According to IRS officials, in response to these concerns, IRS implemented new training programs, strengthened managerial oversight, and added protections for taxpayers to ensure that bans are systematically imposed only on taxpayers with a history of noncompliance. In 2015, IRS issued fewer 2-year bans than in previous years. Despite the compliance burden and costs associated with these RTCs, the burden may be lower than that associated with spending programs.
For example, tax credit recipients can self-certify; they do not need to meet with caseworkers or submit up-front documentation, as is required for some direct service antipoverty programs such as Supplemental Security Income (SSI) or Temporary Assistance for Needy Families (TANF). The simplified up-front process may contribute to higher participation rates. The EITC participation rate—over 85 percent, as reported by Treasury—is at the high end of the range for antipoverty programs. We previously reported that the SSI participation rate in 2011 was about 67 percent of adults who were estimated to be eligible, while the TANF participation rate was about 34 percent. IRS does not estimate participation rates for the AOTC or ACTC. Sustained annual budget reductions at IRS have heightened the importance of determining how best to allocate declining resources to ensure it can still meet agency-wide strategic goals of increasing taxpayer compliance, using resources more efficiently, and minimizing taxpayer burden. In an effort to improve efficiency, IRS consolidated administration of the EITC, ACTC, and AOTC across several different offices within the Wage & Investment Division. Return Integrity and Compliance Services (RICS) oversees the division's audit functions. Within RICS, Refundable Credits Policy and Program Management (RCPPM) is responsible for refundable credit policy, enforcement, and establishing filters for computerized selection of returns for audit. Refundable Credits Examination Operations is responsible for conducting the audits, overseeing and training personnel, maintaining the phone and mail operations, and addressing personnel and union issues. Although these offices work collaboratively to formulate and implement policies and process workload, they lack a comprehensive strategy for RTC compliance efforts.
IRS is working on an operational strategy to document all current EITC compliance efforts and to identify and evaluate potential new solutions to address improper payments. However, this review focuses only on efforts to improve EITC compliance and does not include the other refundable credits. The lack of a comprehensive strategy that takes into account all ongoing compliance efforts for the three RTCs (the EITC, ACTC, and AOTC) presents several potential challenges, as discussed below. IRS measures compliance by estimating an aggregate error rate for the EITC and error rates for certain subcategories of EITC claimants (e.g., claimants grouped by type of tax preparer). IRS uses National Research Program (NRP) data for these estimates because the program employs a representative sample that can be used to estimate error rates for the universe of taxpayers. In addition to measuring compliance with the tax code, the error rates help IRS understand taxpayer behavior, information IRS could use to develop compliance strategies and allocate resources. According to IRS, it estimates net overclaim percentages (net misreported amount divided by the amount reported) for the RTCs. IRS reported it uses these overclaim percentages to identify areas for potential future research. However, IRS does not report the frequency of these errors or the amounts claimed in error across credits, which makes it difficult to compare noncompliance across the credits. Analyses that incorporate the relative frequencies and magnitudes of these errors could inform IRS's resource allocation decisions. To show how IRS can use these error rates to inform its compliance strategy and resource allocations, we estimated aggregate error rates for the EITC, the AOTC, and the CTC/ACTC, which combines the refundable ACTC with its nonrefundable counterpart, the CTC.
Estimating the CTC/ACTC makes it possible to compare error rates for this credit with those for the EITC and AOTC because these credits include the refunded amounts as well as the amounts used to offset tax liabilities. The CTC/ACTC error rate estimate excludes any adjustments due to dollars shifted between the refundable ACTC and the nonrefundable CTC. For example, a taxpayer who understates her income may claim a higher ACTC, but if IRS adjusts the income, the effect could be that the refundable ACTC decreases and the nonrefundable CTC increases. This adjustment does not necessarily result in saved dollars or revenue protected, but rather a shifting of dollars from a refund to a lower tax liability, depending on where the taxpayer is in relation to the income phase-out. Without these adjustments, the CTC/ACTC error rate estimates would not be comparable to those for the other credits. The relative frequency of errors by type of credit could be useful information for determining the allocation of enforcement resources. As figure 6 shows, the estimated average error rates for overclaims and underclaims from 2009 to 2011 can vary considerably by credit type. The EITC and AOTC have similar average error rates for overclaims of 29 percent and 25 percent, respectively, but the CTC/ACTC error rate for overclaims is 12 percent—less than half of the other two credits' rates. Although they are much smaller, the underclaim rates vary in a similar way, with the 4 percent AOTC error rate being twice as large as the CTC/ACTC rate. The relative frequency of errors by type of credit may help IRS better focus its limited resources. In addition to the error rates, information about the amounts estimated to be claimed in error would also be useful for resource allocation. From 2009 to 2011, the average amount overclaimed also varied considerably by credit type.
The average yearly amount overclaimed for the EITC was $18.1 billion; for the CTC/ACTC, $6.4 billion; and for the AOTC, $5.0 billion. (See appendix II for more details about credit amounts erroneously claimed.) Combining these dollar amounts with the error rate information can further inform resource allocation. For example, although the AOTC had an overclaim rate of 25 percent—nearly as large as the EITC's 29 percent rate—the amount overclaimed was only about one-third of the EITC's amount. Both the rate and the amount—among other considerations like effects on equity and compliance burden—would factor into a plan for allocating enforcement resources. The lack of a comprehensive compliance strategy that includes information on error rates by type of credit and categories of taxpayers could limit IRS's ability to recognize gaps in its enforcement coverage and compliance efforts. For example, IRS previously reported in its EITC compliance studies that unenrolled paid preparers have higher error rates than other preparer types. Our analysis of NRP data, discussed later in this report, showed that this pattern of noncompliance by type of preparer also holds for the ACTC and AOTC. With this information, IRS could devise a compliance strategy that takes these other credits into account. Additional information could also help IRS better plan resource allocations among the RTCs. IRS devotes a large percentage of its RTC enforcement resources to the EITC but has not made clear the basis for this allocation. As previously noted, in 2014, IRS selected 87 percent (or 435,000) of its RTC audits based on issues related to the EITC and 6 percent (or 31,000) based on issues related to the ACTC. The returns that IRS selects for EITC audit may also be audited for other RTC issues.
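The rate-and-amount comparison among the credits can be sketched in a few lines of Python. This is our illustration only, not IRS methodology; the inputs are the 2009 to 2011 averages cited in this report.

```python
# Illustrative sketch (not IRS methodology): comparing the credits on
# both the overclaim error rate and the dollars overclaimed, using the
# 2009-2011 averages cited in this report.
credits = {
    # credit: (overclaim error rate, avg. yearly amount overclaimed, $ billions)
    "EITC":     (0.29, 18.1),
    "CTC/ACTC": (0.12,  6.4),
    "AOTC":     (0.25,  5.0),
}

def relative_to(base, other):
    """Return (rate ratio, amount ratio) of `other` relative to `base`."""
    base_rate, base_amount = credits[base]
    other_rate, other_amount = credits[other]
    return other_rate / base_rate, other_amount / base_amount

rate_ratio, amount_ratio = relative_to("EITC", "AOTC")
print(f"AOTC/EITC overclaim rate ratio:   {rate_ratio:.2f}")    # ~0.86: nearly as large
print(f"AOTC/EITC overclaim amount ratio: {amount_ratio:.2f}")  # ~0.28: about one-third
```

Both dimensions matter for allocation: a credit with a high error rate but modest dollars at stake, such as the AOTC, may warrant fewer enforcement resources than its rate alone would suggest.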
In addition to the 31,000 returns selected for ACTC audits in 2014, for example, another 382,000 returns were audited for the ACTC even though they were selected for another RTC issue—almost always an EITC issue. This approach allows IRS to identify many potentially erroneous ACTC claims, which IRS can then also freeze as part of the EITC audit. However, this approach raises several concerns about whether IRS is achieving an optimal resource allocation: (1) the very low audit coverage of the approximately 5 million claimants who claim the ACTC but not the EITC could risk a reduction in voluntary compliance, (2) using EITC tax returns as a selection mechanism for ACTC audits may not be the best way to identify ACTC noncompliance, and (3) questions about equity in audit selection for the ACTC arise because EITC claimants are generally lower income than claimants for other credits. Weighing these concerns and other factors, like administrative costs, could help IRS create a comprehensive strategy for the RTCs that could provide a framework for making decisions about how to allocate resources and for communicating what criteria it uses to make these allocations. Although IRS lacks a comprehensive RTC strategy, it has been able to identify some compliance trends for credits other than the EITC. IRS officials observed an increase in the ACTC overclaim percentage from 2009 to 2011. According to IRS, confirming and understanding the nature of that potential increase will require more research. To that end, IRS plans to begin work in 2016 on an ACTC compliance study similar in nature to the recent EITC 2006-2008 Compliance Study. Officials could not provide a start date or timeline for completion and said the rate at which this work progresses will depend on competing priorities, given limited budget and staff. However, they stated that the CTC/ACTC compliance study remains a high-priority project.
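The coverage concern noted in point (1) can be made concrete with rough arithmetic. The sketch below is our back-of-the-envelope illustration, not an IRS calculation; it assumes, as an upper bound, that every one of the 31,000 ACTC-selected audits fell on a claimant who claimed the ACTC but not the EITC.

```python
# Rough upper bound on audit coverage for ACTC-only claimants
# (illustrative assumption: every ACTC-selected audit in 2014 fell
# on a taxpayer who claimed the ACTC but not the EITC).
actc_only_claimants = 5_000_000   # "approximately 5 million" per this report
actc_selected_audits = 31_000     # 2014 audits selected on ACTC issues

coverage = actc_selected_audits / actc_only_claimants
print(f"Audit coverage of ACTC-only claimants: at most {coverage:.2%}")
```

Even under this generous assumption, coverage would be at most about 0.6 percent, roughly 1 audit per 160 ACTC-only claimants, which illustrates why such low coverage could risk reduced voluntary compliance.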
Previously, we reported that IRS could identify ways to reduce taxpayer noncompliance through better use of NRP data and that the ACTC was one area where further research could provide information on how to address noncompliance. Another challenge related to the lack of a comprehensive plan is that certain IRS performance indicators may be difficult to interpret. IRS relies on the no-change rate and default rate to make resource allocation decisions. IRS closes audits as defaults when the taxpayer (1) does not respond to any IRS notice or (2) responds to some notices but not the last one asking for agreement with a recommended additional tax assessment. IRS officials stated that they believe taxpayers who default are generally noncompliant because taxpayers selected for audit receive multiple notices and the refunds can equal several thousand dollars, giving them the information and incentive to engage with IRS. Therefore, when there is a high default rate and a low no-change rate, IRS officials said that they interpret that as an indicator that the taxpayers selected for audit were not entitled to the credit claimed. Even so, it can be difficult to interpret a low no-change rate when it includes defaults. As we previously reported, in fiscal years 2009 through 2013, the no-change rate ranged from 11 percent to 21 percent for all closed correspondence audits but rose to 28 percent to 45 percent when IRS had contact with the taxpayers throughout the audit and did not close the audit through a default. Without knowing why taxpayers default, it is difficult to interpret the no-change rate. To the extent that some of the taxpayers who default are compliant, the reported no-change rate understates the actual no-change rate. The Taxpayer Advocate has raised concerns that taxpayers may not understand the notices, which could be contributing to the low response rate.
The difficulty of interpreting no-change rates and default rates can make the results of IRS's assessments of its programs less certain. According to IRS, two of the most effective and reliable enforcement programs for addressing RTC compliance and reducing improper payments are post-refund document matching and audits. IRS stated that it protects over $3 billion in revenue based on these enforcement activities, but the default rate is over 50 percent. The no-change rate indicates that the overwhelming majority of the cases IRS selects have mistakes that require an adjustment. However, because defaulted audits are counted as changes in computing the no-change rate and the default rate is high, it is unclear to what extent the cases being selected are actually noncompliant. Table 3 shows the number of returns IRS identifies through these various enforcement activities, the no-change rate, and the default rate. The no-change rates for these enforcement activities are very low, but the associated default rates are high. This disproportion can make the no-change rate misleading as an indicator of noncompliance. For example, if 10 percent of the defaulting taxpayers in the case of document matching were actually compliant, the no-change rate would double to about 14 percent, and if 50 percent were compliant, the no-change rate would increase to about 40 percent. These figures could call into question whether IRS is getting useful information out of no-change rates when the default rate is so high and little is known about the compliance characteristics of defaulting taxpayers. Another challenge IRS faces is that the set of indicators it uses to make resource allocation decisions does not include indicators for equity and compliance burden. When evaluating enforcement strategies, such as developing new screening filters for exam selection, IRS officials look for filters that produce a low response rate and a low no-change rate.
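The sensitivity of the no-change rate to the true compliance of defaulting taxpayers, described in the document-matching example, can be sketched as follows. The base no-change rate (7 percent) and default rate (70 percent) are our illustrative assumptions, chosen to be consistent with that example rather than taken from published IRS figures.

```python
# Illustrative sketch: how the reported no-change rate would shift if
# some share of defaulting taxpayers were in fact compliant.
# Assumed rates (our illustration, consistent with the example above):
BASE_NO_CHANGE = 0.07  # reported no-change rate
DEFAULT_RATE = 0.70    # share of audits closed as defaults

def adjusted_no_change(share_of_defaulters_compliant):
    """No-change rate if that share of defaulters were compliant
    and their audits had closed as no-change instead."""
    return BASE_NO_CHANGE + DEFAULT_RATE * share_of_defaulters_compliant

print(f"{adjusted_no_change(0.10):.0%}")  # 14%: the rate doubles
print(f"{adjusted_no_change(0.50):.0%}")  # 42%: about 40 percent, as described
```

Because the default rate is so large relative to the reported no-change rate, even a small share of compliant defaulters materially changes the picture of noncompliance.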
At the 2015 annual strategy meeting, for example, IRS managers noted that Disabled Qualifying Child (DQC) cases had a high default rate (70 percent, compared to a 54 percent default rate for other programs) and a low no-change rate of between 3 and 6 percent. Based on these rates, program managers recommended increasing the number of DQC cases they plan to work each year, or replacing cases waiting to be worked with DQC cases, as a way to reduce their backlog of unclosed cases. The managers did not evaluate the recommendation on the basis of equity or compliance burden. In addition, IRS did not provide any reliable indicator of compliance burden associated with any of the refundable tax credits that we reviewed. According to IRS officials, reviewing taxpayers' responses is resource intensive, and by reducing that process, IRS could perform more audits elsewhere. However, as discussed above, the no-change rate on which they based their decision may be an unreliable estimate of actual taxpayer noncompliance when, as the officials said, they do not know why taxpayers did not respond to notices. A more comprehensive strategy that documents RTC compliance efforts could help IRS officials determine whether their current performance indicators are giving them reliable information and whether their current allocation of resources is optimal, and if not, what adjustments are needed. IRS officials could also use this review as an opportunity to ensure program managers have a balanced suite of performance measures that adequately addresses all priority goals. For example, the desire to reduce inventory or to concentrate resources on efforts with the lowest no-change rate could take precedence over concerns about undue taxpayer burden. IRS also faces administrative and compliance challenges that complicate the administration of RTCs.
Due in part to long-standing concerns about the EITC improper payment rate, EITC examinations account for nearly 39 percent of all individual income tax return audits each year. However, the EITC accounted for only about 5 percent of the tax gap in tax year 2006 (the most recent estimate available). In a 2013 report, we demonstrated that a hypothetical shift of about $124 million in enforcement resources among different types of audits could have increased direct revenue by $1 billion over the $5.5 billion per year IRS actually collected in 2013. An agency-wide approach that incorporates return-on-investment (ROI) calculations could help IRS allocate enforcement resources more efficiently, not just among the credits but also across EITC and non-EITC returns. We previously recommended that IRS develop a long-term strategy and use actual ROI calculations as part of resource allocation decisions to help it operate more effectively and efficiently in an environment of budget uncertainty. In response to our recommendation, IRS has begun a project to develop ROI measures that could be used for resource allocation decisions. We have previously reported that while IRS publishes information regarding coverage rates and additional taxes assessed through various programs, relatively little information is available on how much revenue is actually collected as a result of these enforcement activities. Additional analysis of available RTC collections data could also inform resource allocation decisions. Currently, IRS reviews the amount of revenue collected annually based on EITC post-refund enforcement activities, but it could not verify the reliability of those data during the timeframe of our audit. Such data could be used to calculate a collections rate—the percentage of tax amounts assessed that is actually collected. A reliable collections rate could be used as an additional data point for informing and assessing allocation decisions.
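As a minimal sketch, a collections rate of the kind described above is simply collections divided by assessments. The dollar figures below are hypothetical placeholders (our assumption), since IRS could not verify the reliability of its collections data.

```python
def collections_rate(amount_collected, amount_assessed):
    """Share of post-refund tax assessments that is actually collected."""
    return amount_collected / amount_assessed

# Hypothetical figures for illustration only (not IRS data):
# $0.9 billion collected on $3.0 billion assessed.
rate = collections_rate(0.9e9, 3.0e9)
print(f"Collections rate: {rate:.0%}")  # 30% under these assumed figures
```

Tracked over time and by enforcement activity, such a rate would show how much of the revenue IRS reports as "protected" is ultimately realized.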
According to federal internal control standards, managers need accurate and complete information to help ensure efficient and effective use of resources in making decisions. Recognizing that not all recommended taxes would be collected, or collected soon after the audit, IRS could still use available data to compute a collections rate for post-refund enforcement activities and conduct further analyses of assessments from post-refund audits and document-matching reviews. IRS officials said they have conducted such studies in the past and that they were resource-intensive. Nonetheless, given that collections data are needed both for the detailed analyses described above and for an agency-wide analysis of the relative costs and results of various enforcement activities to inform resource allocation decisions, there may be opportunities to coordinate the data collection efforts to reduce overall costs. In addition to collections, an agency-wide approach could help IRS develop a strategy for addressing Schedule C income misreporting, a long-time challenge for IRS and a key driver of EITC noncompliance. According to IRS, income misreporting is the most commonly made error on returns claiming the EITC, occurring on about 67 percent of returns with overclaims. Self-employment income misreporting represents the largest share of overclaims (15 to 23 percent), while wage income misreporting represents the smallest (3 to 6 percent). In the claimant population as a whole, 76 percent of taxpayers earn only wage income, while the remaining 24 percent earn at least some self-employment income. As shown in figure 7, error rates in terms of overclaimed amounts of credit were largest for Schedule C filers for the EITC and AOTC. The error rate for Schedule C filers claiming the CTC/ACTC was not statistically different from the error rate for filers without a Schedule C.
Although Schedule C income misreporting is larger for EITC claimants, IRS's enforcement strategies are more likely to be effective with wage income misreporting than with Schedule C income misreporting. According to IRS, it addresses income misreporting through (1) DDb filters designed to identify taxpayers making up a fake business; (2) the questionable refund program, designed to identify and follow up with taxpayers lying about where and how long they worked; and (3) the post-refund document matching program, which matches returns with other information such as W-2s. While these methods may catch some income misreporting by the self-employed, they rely to a great extent on the types of third-party income and employment documentation that are likely to be available for wage earners but are largely absent for the self-employed. According to IRS officials, starting in tax year 2011, IRS began matching other information, such as Form 1099-K merchant card payments, to tax returns to verify self-employment income. IRS also addresses EITC noncompliance through correspondence audits, but Schedule C income issues are more conducive to field audits than correspondence audits. However, EITC Schedule C returns are less likely to be selected for field audits because the dollar amounts do not meet IRS thresholds. Addressing Schedule C income misreporting has been a long-standing challenge for IRS. In 2009, we reported that, according to IRS, sole proprietor income was responsible for about 20 percent of the tax gap. A key reason for this misreporting is well known: unlike wage and some investment income, sole proprietors' income is not subject to withholding, and only a portion is subject to information reporting to IRS by third parties. We have made several recommendations over the years to address this issue. In 2007, we recommended that Treasury's tax gap strategy cover sole proprietor compliance in detail while coordinating it with broader tax gap reduction efforts.
As of March 2015, no executive action had been taken to address this recommendation, nor had Treasury provided us with plans to do so. We maintain that without taking these steps, Treasury has less assurance that IRS is using resources efficiently to promote sole proprietor compliance. In 2009, we recommended that IRS develop a better understanding of sole proprietor noncompliance, including sole proprietors improperly claiming business losses. As of November 2015, IRS had partially addressed this recommendation by researching sole proprietor noncompliance and focusing on those who improperly claim business losses. The results of this research will take several years to compile, but IRS plans to provide at least rough estimates of disallowed losses in 2016. This research, when completed, could help IRS identify noncompliant sole proprietor issues and address one of the drivers of EITC noncompliance. IRS does not track the number of returns erroneously claiming the ACTC and AOTC identified through screening activities. (IRS currently tracks this information for the EITC.) As we noted earlier, according to federal internal control standards, managers need accurate and complete information to help ensure efficient and effective use of resources in making decisions. IRS conducts various activities to identify and prevent the payment of an erroneous refund, such as screening returns for obvious mistakes and omissions. IRS officials said that tracking this information would help them deepen their understanding of common errors made by taxpayers claiming these credits and that the insights could then be used to develop strategies to educate taxpayers. IRS officials reported that they are working to determine how to extract these data for the ACTC and AOTC so they can begin to track the data and use them to refine their overall compliance strategy.
Although IRS said that it understands the potential usefulness of these data, it has not yet developed a plan that includes such desirable features as timing goals, resource requirements, and a way to develop the indicators from the data that would be most effective for understanding and increasing compliance. IRS may also be missing an opportunity to use information from the Department of Education (Education) to detect and correct AOTC errors. Education collects in its Postsecondary Education Participants System (PEPS) a list of institutions and their employer identification numbers (EIN), which would indicate whether the institution a student attends is eligible under the AOTC. The PATH Act of 2015 requires taxpayers claiming the AOTC to report the EIN of the educational institution to which they made payments. There is some evidence that PEPS may be a useful tool for detecting noncompliance. In a review of the AOTC, the Treasury Inspector General for Tax Administration (TIGTA) used PEPS data and identified 1.6 million taxpayers claiming the AOTC for an ineligible institution in 2012. TIGTA recommended that IRS coordinate with Education to determine whether IRS could use Education data to verify the eligibility of educational institutions claimed on tax returns. While IRS agreed that these PEPS data could identify potentially erroneous claims, it did not agree to further explore using the data. IRS has not determined whether PEPS can be used to enhance AOTC compliance for two reasons. First, IRS does not have math error authority (MEA) to correct errors in cases where taxpayer-provided information does not match corresponding information in government databases. IRS would still need to conduct an exam to reject a claim with an ineligible institution.
For example, if the EIN on a submitted return is not contained in the PEPS database of eligible institutions, IRS does not have the authority to automatically correct the return and notify the taxpayer of the change. Instead, IRS would have to contact the taxpayer for additional documentation or open an examination to resolve discrepancies between PEPS data and the tax return information. Second, IRS believes its current selection process is sufficient because it already identifies more potentially fraudulent returns with its filters than it can examine given its current resources. In 2012, IRS identified 1.8 million returns with potentially erroneous education claims and selected 9,574 for exam, an exam rate of 0.5 percent. To identify these returns for exam, IRS used its pre-refund filters, which flag students claiming the credit for more than 4 years, returns without a Form 1098-T, or students in an unexpected age range. The administration submitted legislative proposals for fiscal years 2015 and 2016 that, among other things, would establish a category of correctable errors. Under the proposals, Treasury would be granted MEA to permit IRS to correct errors in cases where information provided by a taxpayer does not match corresponding information provided in government databases. We have previously reported that expanding MEA with appropriate safeguards could help IRS meet its goals for the timely processing of tax returns, reduce the burden on taxpayers of responding to IRS correspondence, and reduce the need for IRS to resolve discrepancies in post-refund compliance, which, as we previously concluded, is less effective and more costly than at-filing compliance. However, Congress has not granted this broad authority. Although correctable error authority may reduce compliance and administrative burden, it raises a number of concerns. Experts have raised concerns that such broad authority could put undue burden on taxpayers.
For example, the National Taxpayer Advocate has raised concerns that IRS's current math error notices are confusing and place a burden on taxpayers as they try to get answers from IRS. The Joint Committee on Taxation (JCT) also raised concerns about whether all government databases are considered sufficiently reliable under this proposal. However, an assessment of the completeness and accuracy of PEPS data may be useful for IRS enforcement efforts even in the absence of correctable error authority. First, while IRS believes its current selection process is sufficient, without assessing the PEPS data, it cannot know whether its case selection could be improved by this additional information about ineligible institutions. Second, if an IRS assessment of PEPS data determined that pre-refund corrections based on those data would be effective, the case for correctable error authority would be easier to make to Congress. As our work on strategies for building a results-oriented and collaborative culture in the federal government has shown, stakeholders, including Congress, need timely, action-oriented information in a format that helps them make decisions that improve program performance. Taxpayers can claim the AOTC for only 4 years, but IRS does not have MEA to freeze a refund on a claim that exceeds the lifetime-limit rule. In 2015, TIGTA found that more than 400,000 taxpayers in 2012 received over $650 million for students claiming the AOTC for more than 4 years. According to IRS officials, they have processes to identify students who exceed the 4-year lifetime limit based on information from prior returns. Those returns are candidates for audits. However, as noted earlier, IRS identifies far more audit candidates than it can pursue given current staffing levels.
In 2011, we recommended that Congress consider providing IRS with MEA to use tax return information from previous years to ensure that taxpayers do not improperly claim credits or deductions in excess of applicable lifetime limits. Granting this authority would help IRS disallow clearly erroneous claims, reduce the need for audits, and promote fairness by limiting claims to taxpayers who are entitled to them. It would also assist taxpayers in self-correcting unintentional mistakes in cases where they may have chosen an incorrect educational tax benefit because they had exceeded the lifetime limit. As we recommended in 2011, we continue to believe that Congress should consider providing MEA to be used with credits and deductions that have lifetime limits. Any RTCs with such limits, including the AOTC, should also fall under this authority if Congress grants it. IRS has several efforts intended to educate taxpayers about eligibility requirements and improve compliance, including social media messaging, webinars, and tax forum presentations. According to IRS, these efforts are intended to promote participation among taxpayers eligible for these credits, ensure that taxpayers are aware of the eligibility requirements before filing a tax return, and prevent unintentional errors before they occur. Additionally, IRS designated an EITC Awareness Day to increase awareness among potentially eligible taxpayers at a time when most are filing their federal income tax returns. The 10th Annual EITC Awareness Day was January 29, 2016. According to IRS, it currently has limited ability to measure the effectiveness of its outreach efforts. As recently as 2011, IRS officials said they were able to measure the effectiveness of these efforts through a semiannual survey in which they tested, for example, the effect of concentrating messaging in certain areas on taxpayer awareness of the EITC.
Although IRS reported it no longer has the funds for that survey, officials said IRS still commissions an annual survey intended to improve services to volunteers and external stakeholders. IRS officials also said that they collect user feedback to assess use and effectiveness of their EITC website and make changes accordingly. For example, after users cited problems with easily locating information on maximum income limits for the EITC, IRS reported that it revised its website to make income information more prominent. To address underutilization of the AOTC, IRS has been working to improve the quality and usefulness of information about the credit. We reported in 2012 that about 14 percent of filers in 2009 (1.5 million of almost 11 million eligible returns) failed to claim an education credit or deduction for which they appeared to be eligible, possibly because filers were unaware of their eligibility or were confused. In response to the recommendation in our 2012 report, IRS conducted a limited review in 2013 that determined that over 15 million eligible students and families may not have been or were not claiming an education benefit. Identifying these potentially eligible taxpayers will help IRS develop a comprehensive strategy to improve use of these tax provisions. We also recommended in 2012 that IRS and Education work together to develop a strategy to improve information provided to tax filers who appear eligible to claim a tax provision but do not. IRS has been implementing this recommendation by coordinating with Education to (1) create an education credit web page on the department’s Federal Student Aid website and (2) improve IRS’s AOTC and Lifetime Learning Credit Communication Plan. 
To improve understanding of the requirements for education credits, IRS has enhanced information and resources on IRS.gov and revised the tax form for claiming education credits (Form 8863, Education Credits (American Opportunity and Lifetime Learning Credits)) to include a series of questions that help the taxpayer ascertain credit eligibility. IRS has also made efforts to address compliance issues associated with certain tax preparers. As shown in figure 8, unenrolled preparers have the highest RTC error rates among preparers. For the EITC, unenrolled preparers have the highest overclaim rate, at 34 percent of total credit claimed, and, as IRS reported, they are the type of preparer most often used by EITC claimants, preparing 26 percent of all EITC returns. In contrast, although comprising only 3 percent of all returns with the EITC, returns prepared by volunteers in the IRS-sponsored Volunteer Income Tax Assistance and Tax Counseling for the Elderly programs have the lowest error rate, at 16 percent. IRS's chief compliance effort for paid preparers is the EITC Return Preparer Strategy, designed to identify preparers submitting the highest number of EITC overclaims and tailor education and enforcement treatments to change their behavior. The strategy uses a variety of methods to address preparer noncompliance, including (1) educational “knock-and-talk” visits with preparers before filing season; (2) due diligence visits, where IRS officials determine whether preparers complied with due diligence regulations, such as documenting efforts to evaluate the accuracy of information received from clients; and (3) warning and compliance letters to preparers explaining that IRS has found errors in their prior returns.
The EITC preparers that appear to be associated with the most noncompliance receive the most severe treatments, which include visits from revenue agents and, if necessary, an assessment of penalties: $500 per noncompliant return or, if the preparer used a bad preparer tax identification number, $50 per return, up to a maximum of $25,000. (The PATH Act of 2015 expanded preparer due diligence requirements and penalties to the CTC and AOTC.) These preparers can also be referred to the Department of Justice for civil injunction proceedings. If fraud is identified, they can be referred for criminal investigation. The strategy recently found that less severe, lower cost treatments, such as warning letters, affect preparer behavior, but the more severe, higher cost due diligence visits improve preparer behavior the most. IRS expanded the number of preparers it selected to contact from 2,000 in fiscal year 2012 to around 31,000 in fiscal year 2015. According to IRS data, the EITC Return Preparer Strategy has protected around $1.7 billion in revenue from EITC and CTC/ACTC claims since fiscal year 2012. In fiscal year 2015, the strategy protected over $465 million in revenue ($386 million in EITC savings and $79 million in CTC/ACTC). Also, the proposed preparer penalties for the 2015 effort totaled $30 million, with an overall due diligence visit penalty rate of around 85 percent. Any attempts to improve preparer compliance through increased regulation by Treasury and IRS are likely to require congressional action. IRS issued regulations in 2010 and 2011 to require registration, competency testing, and continuing education for paid tax return preparers and to subject these new registrants to standards of conduct in their practice. However, the courts ruled that IRS did not have the statutory authority to regulate these preparers. In 2014, we suggested Congress consider granting IRS the authority to regulate paid tax preparers.
Establishing requirements for paid tax return preparers could improve the accuracy of the tax returns they prepare, not just returns claiming the EITC. A variety of proposals have been made to change the design of the EITC, ACTC, and AOTC. The proposals generally focus on one or more elements of the credits, such as how much of the credit is refundable, the maximum credit amount, the boundaries of the phase-in and phase-out income ranges, and the credit rates. Changing these elements has effects on the credits' equity, efficiency, and simplicity that are common across the credits. For example, increasing or decreasing refundability affects the distribution of the credits' benefits by income level, which has implications for whether the change is viewed as increasing or decreasing equity. The following review of proposals is organized according to the basic design elements of the credits, and the effects of proposals to change these elements are evaluated according to the standard criteria of a good tax system. Evaluating tax credits requires identifying their purpose (or purposes) and determining their effectiveness. The tax credits reviewed in this report are intended to encourage taxpayers to engage in particular activities, to offset the effect of other taxes, and to provide assistance for certain categories of taxpayers. The EITC, for example, has the purposes of offsetting the payroll tax, encouraging employment among low-income taxpayers, and reducing poverty rates. Determining effectiveness can be challenging because of the need to separate the effect of a tax credit from other factors that can influence behavior. Even if credit claimants engage in the subsidized activities, the credits are ineffective if they merely provide windfall benefits to taxpayers who would have engaged in the activities in the absence of the credit.
Even when the credits are determined to be effective, broader questions can still be asked about whether they are good tax policy. As explained in our 2012 report, these questions are addressed by applying criteria such as economic efficiency, equity, and simplicity, which have long been used to evaluate proposed changes to the tax system. The criteria may sometimes conflict with one another, and some are subjective. As a result, there are often trade-offs among the criteria when evaluating a particular tax credit. Economic efficiency deals with how resources are allocated in the economy to produce outcomes that are consistent with the greatest well-being (or standard of living) of society. Tax credits may affect the allocation of resources by favoring certain activities. A credit's effect on efficiency depends on its effectiveness (whether people change their behavior in response to the credit to do more or less of the activity as intended) and its effect on resource allocation (whether the effect of the credit increases the overall well-being of society). A tax credit can increase efficiency when, for example, it is directed at addressing an externality, such as spillovers from research, where researchers do not gain the full benefit of their activities and might, without the credit, invest too little in research from the point of view of society as a whole. Finally, a tax credit may be justified as promoting a social good, such as improving access to higher education for disadvantaged groups. Equity deals with how fair the tax system is perceived to be by participants in the system. There is a wide range of opinions regarding what constitutes an equitable, or fair, tax system. However, there are some principles—for example, a taxpayer's ability to pay taxes—that have gained acceptance as useful for thinking about the equity of the tax system.
The ability-to-pay principle requires that those who are more capable of bearing the burden of taxes should pay more than those who are less capable. Equity judgments based on the ability-to-pay principle can be separated into two types. The first is horizontal equity, where taxpayers who have similar abilities to pay taxes receive similar tax treatment. Tax credits affect horizontal equity when, for example, they favor certain types of economic behavior over others by taxpayers in similar financial conditions. Views of a credit's effect on horizontal equity usually depend on whether eligibility requirements that exclude some filers and include others are viewed as appropriate. The second type is vertical equity, where taxpayers with different abilities to pay are required to pay different amounts of tax. Tax credits affect vertical equity through how their benefits are distributed among people at different income levels (or other indicators of ability to pay, such as their level of consumption spending). Distribution tables, in which the tax benefits of the credits are grouped by the income level of the recipients, are often used by policy analysts to help them make informed judgments about the equity of tax policies like the RTCs. People may have different notions about what constitutes a fair distribution, but they cannot make a judgment about the fairness of a particular policy without consulting the actual distribution of tax benefits. Simplicity is a criterion used to evaluate tax systems because simple tax systems tend to impose less compliance burden on the taxpayer and less cost on tax administrators than more complex tax systems. Taxpayer compliance burden is the value of the taxpayer's own time and resources, along with any out-of-pocket costs paid to tax preparers and other tax advisors, invested to ensure compliance with tax laws.
Compliance costs include the value of time and resources devoted to activities like record keeping (for the purpose of tax compliance, not records that would be kept in any case), learning about requirements and planning, preparing and filing tax returns, and responding to IRS notices and audits. Administrative costs include the resources used to process tax returns, inform taxpayers about their obligations, detect noncompliance, and enforce compliance with the provisions of the tax code. However, while simplicity is linked to administrability, they are not always the same. For example, a national sales tax may be relatively simple for taxpayer compliance but difficult to administer, as it requires distinguishing between tax-exempt and taxable commodities and between taxable retail sales and nontaxable sales among companies. Changes to the RTCs can be analyzed using the above criteria, with the changes grouped according to the key design elements of the credits that are most affected. The key design elements are (1) the degree to which the credit is refundable; (2) the eligibility rules for filers and qualifying children or dependent students; (3) the structure of the credit, consisting of parameters that determine credit rates and phase-in and phase-out ranges; and (4) the credit's interaction with other code provisions. As mentioned above, changing these elements will have effects that are common to all the credits. In the following review of proposals, a description of the effect on revenue will be provided where possible, but a dollar estimate of revenue costs cannot be provided because it depends too much on the variable details of proposals. For example, increasing refundability would increase revenue costs, but the amount would depend, as explained below, on factors like the refundability rate and the income or spending threshold of refundability.
Refundability can affect judgments about vertical equity by providing a larger share of the tax benefits to lower income filers than a nonrefundable credit does. These filers are more likely to have little or no tax liability and thus are not able to fully benefit from a nonrefundable credit. Refundability, as such, may have little effect on judgments about horizontal equity because these judgments depend chiefly on the eligibility rules, which need not differ from those under a nonrefundable credit. The effect of refundability on compliance and administrative costs depends on how the change in refundability is implemented. If the eligibility rules, a major source of complexity as described above, are not changed when refundability is introduced, the change may have less impact on compliance burden and administrative costs. However, other structural changes may be needed when refundability is introduced that can add complexity and compliance burden for the taxpayer. For example, additional calculations became necessary for the CTC when the ACTC was introduced as its partially refundable counterpart, with its own phase-in range and rate. In addition, administrative burden could increase if the population of claimants changes when refundability is introduced. IRS costs could increase if IRS reviews more returns as the number of claimants grows in response to refundability, and taxpayer compliance burden may increase if the claimants include more taxpayers for whom understanding or documenting compliance is difficult. Changes have been proposed to expand refundability for the currently partially refundable CTC/ACTC and AOTC. For the CTC/ACTC, the refundable ACTC is limited to 15 percent of earned income in excess of the $3,000 refundability threshold, up to a maximum of $1,000 for each child; for the AOTC, the refundable portion is limited to 40 percent of the credit computed from qualified spending, up to a maximum of $1,000.
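These refundability limits amount to simple arithmetic rules. The following is a minimal sketch using the parameter values from the text, assuming the AOTC's 40 percent applies to the computed credit amount; the function names are illustrative, not IRS terminology:

```python
def actc_refundable(earned_income, num_children):
    """Refundable ACTC: 15 percent of earned income above the $3,000
    threshold, capped at $1,000 per qualifying child (figures from the text)."""
    phased_in = max(0.0, earned_income - 3000) * 0.15
    return min(phased_in, 1000 * num_children)

def aotc_refundable(credit_amount):
    """Refundable AOTC: 40 percent of the computed credit, up to $1,000."""
    return min(0.40 * credit_amount, 1000)

# A filer earning $23,000 with two children: 15% of $20,000 = $3,000,
# capped at 2 x $1,000 = $2,000.
print(actc_refundable(23000, 2))  # 2000.0
# A $2,500 AOTC yields the full $1,000 refundable portion.
print(aotc_refundable(2500))      # 1000.0
```

The cap and threshold interact: a filer with earnings at or below the $3,000 threshold receives no refundable ACTC at all, which is why proposals discussed below focus on the threshold and the phase-in rate.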
Modifications that have been proposed for these credits include raising the refundability rate and reducing the refundability threshold for the CTC/ACTC or, in the case of the AOTC, making the credit fully refundable. The principal effect of these modifications is to increase the share of benefits going to low-income filers by increasing their access to the credit. For the AOTC, the expansion could also increase effectiveness, as described in appendix III, by increasing access to the credit for low-income filers, who are more responsive to changes in the price of education. The effect of these changes on revenue would vary considerably, depending chiefly on the extent to which refundability is increased. Modifications to the RTCs' eligibility rules affect the criteria of a good tax system by changing taxpayers' access to the credits. The change in access in turn can affect judgments about equity and effectiveness. For example, expanding the availability of the AOTC to part-time students, in addition to half-time and full-time students, could affect judgments about vertical equity by increasing access for lower income filers if they are more represented among part-time students. This proposal may also increase the effectiveness of the AOTC by targeting more of the population that is most responsive to education price changes, but, as described in appendix III, these effects have not been tested. Another change to eligibility rules that has been proposed for RTC filers would require that SSNs be provided by all claimants of the AOTC and the ACTC and that, in some cases, claimants' qualifying children or student dependents have SSNs. SSNs are currently required for all EITC claimants and qualifying children, but claimants of the other RTCs can use individual taxpayer identification numbers (ITIN).
IRS issues ITINs to individuals who are required to have a taxpayer identification number for tax purposes but who are not eligible to obtain an SSN because they are not authorized to work in the United States. In 2013, 4.38 million tax returns were filed with ITINs (about 3 percent of all returns), claiming $1.31 billion in CTC, $4.72 billion in ACTC, and $204 million in AOTC, or 5 percent, 17 percent, and 1.1 percent of the total credits claimed, respectively. The effect of restrictions on access to the credits by ITIN users depends on whether the restrictions would require SSNs for all filers claiming refundable tax credits and their qualifying children or would permit “mixed-use” households to obtain a partial credit. Most households using ITINs are mixed-use households in the sense that they use both ITINs and SSNs on their returns. In 2013, 2.68 million returns (or 61 percent of all ITIN returns) were mixed-use returns having (1) a parent with an ITIN and at least one child with an SSN or (2) a parent with an SSN and at least one child with an ITIN. If the change required that the parent have an SSN, about 82 percent of current ITIN users would be excluded. A change that permits the RTCs as long as either the child or the parent has an SSN would exclude 39 percent of current ITIN filers. Restrictions on access to RTCs by ITIN users may affect judgments about the vertical equity of the credits. ITIN claimants of the CTC, ACTC, and AOTC tend to have similar or lower levels of income than claimants who do not use ITINs. As figure 9 shows, 31 percent of CTC claimants with ITINs have incomes less than $40,000, compared with 17 percent of all CTC claimants, and 56 percent of AOTC claimants with ITINs have incomes less than $40,000, compared with 41 percent of all AOTC claimants. On the other hand, the income levels of ACTC claimants with ITINs generally track those of all ACTC claimants: 87 percent of all ACTC claimants and 88 percent of ACTC claimants with ITINs have incomes less than $40,000.
Restrictions on ITIN use may also have implications for compliance. From 2009 through 2011, credit claimants using ITINs had higher overclaim error rates than other claimants. The overclaim error rate for CTC claimants using ITINs was 14 percent, as opposed to 6 percent for all CTC claimants. Similarly, the CTC/ACTC error rate was 32 percent for ITIN users and 10 percent for all claimants. As we discussed above, complying with the eligibility rules can be challenging for everyone, and ITIN users may have greater difficulty because of factors like language barriers, which could contribute to these higher error rates. The scope of the SSN requirement—whether it includes the taxpayer, the spouse if married filing jointly, or the qualifying dependents—would add to the complexity of administering and complying with the credits. For example, the value of the credit could be apportioned among taxpayers who meet the criteria (e.g., if three of the four individuals claimed on a tax return have SSNs, the taxpayers would be eligible for 75 percent of the total value of the credit). Determining and enforcing compliance with these apportionment rules could be difficult. On the other hand, as noted above, a majority of ITIN households are mixed use, and in the absence of an apportionment procedure, taxpayers with valid SSNs could be denied access to the credits entirely. Lastly, the AOTC is likely to be less effective to the extent that ITIN users are excluded because, having lower incomes than other claimants, they are more likely to respond to the effectively lower cost of education provided by the credit by increasing attendance. A change in the structure of the RTCs can affect all the criteria for evaluating the credits as part of a good tax system. The credit structure includes features that determine the rate at which the credit is calculated.
The phase-in range is the range of income levels over which the credit amount increases; the plateau range is the range where the credit amount is unchanged at its maximum; and the phase-out range is the range where the credit amount declines. The cut-off amount of income determines the end of the phase-out range and the maximum income that can qualify for the credit. All the RTCs have phase-in and phase-out ranges subject to different phase-in and phase-out rates, and the EITC also has different values for these ranges that vary according to the number of qualifying children being claimed. The phase-in range generally provides incentives for increasing the activity promoted by the credit: as they work more, EITC recipients receive a larger credit amount, and as they spend more on education, AOTC recipients also get a larger credit. The phase-out range generally introduces disincentives by reducing the credit benefit for any increase in the activity that the credit is intended to promote. One of the key trade-offs in this structure is between the size of the maximum credit amount and the steepness of the phase-out range. If the maximum credit amount is increased with no change in the qualifying income cut-off amount, the phase-out range becomes steeper (the phase-out rate increases), and disincentives therefore increase over the phase-out range. In this case, the increase in the maximum credit reduces efficiency in the phase-out range. On the other hand, if disincentives are to be reduced without reducing the maximum credit, the qualifying income cut-off amount must be increased in order to flatten the phase-out range and thereby lower the phase-out rate. However, by increasing the cut-off income amount, the credit becomes available to people with higher incomes, affecting judgments about the equity of the credit and increasing its revenue cost. Structural modifications proposed for the EITC include expanding the credit for childless workers.
As described in appendix III, the EITC for childless workers is much lower than the credit for workers with children and has not been shown to have an effect on workforce participation or on raising these workers out of poverty. Expanding the credit for childless workers generally means increasing the maximum credit, with the follow-on effects described above on other parameters like the phase-out rate. The effect on efficiency, equity, and simplicity will depend on which parameters are changed and will involve similar trade-offs. Although the relative effects of expanding the credit for childless workers will depend on the details of the parameter changes, the overall effect is likely to increase the effectiveness of the credit. Increasing the credit for childless workers would increase work incentives for individuals for whom, as described in appendix III, the current EITC is ineffective because it provides little or no work incentive. The expansion of the credit for childless workers could also affect judgments about the equity of the EITC by decreasing the percentage of taxpayers living in poverty and by changing how benefits are distributed by income level. The expansion would also affect judgments about horizontal equity concerns arising from the current large disparity in the credit available to filers with and without children. In addition, expanding the EITC for childless workers is unlikely to add complexity to the filing process for taxpayers, although it would increase the number of taxpayers claiming the credit. A major source of complexity for the EITC that increases both compliance and administrative burden is determining whether a dependent meets the requirements for a qualifying child. These determinations would not be necessary for the childless worker. However, again depending on the specifics of proposals, such as the size of the maximum credit, the revenue cost could be high.
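The three-range structure and the trade-off between the maximum credit and the steepness of the phase-out described earlier can be illustrated with a stylized credit schedule. This is purely an illustration with made-up parameters, not the actual EITC or AOTC formula:

```python
def stylized_credit(income, phase_in_rate, max_credit, phaseout_start, cutoff):
    """Stylized refundable credit: phases in with earnings, plateaus at
    max_credit, then declines linearly to zero at the cut-off income."""
    if income <= 0 or income >= cutoff:
        return 0.0
    if income > phaseout_start:
        # Implied phase-out rate: the higher max_credit is relative to the
        # width of the phase-out range, the steeper the phase-out.
        phaseout_rate = max_credit / (cutoff - phaseout_start)
        return max(0.0, max_credit - phaseout_rate * (income - phaseout_start))
    return min(income * phase_in_rate, max_credit)

# With the cut-off held at $20,000, doubling the maximum credit doubles the
# implied phase-out rate, increasing disincentives over the phase-out range.
print(stylized_credit(15000, 0.2, 1000, 10000, 20000))  # 500.0
print(stylized_credit(15000, 0.2, 2000, 10000, 20000))  # 1000.0
```

The two calls show the mechanics behind the trade-off: the larger maximum credit leaves the same filer with a credit that is falling twice as fast per additional dollar of earnings.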
Proposed structural changes for the AOTC can affect its effectiveness by increasing or decreasing access to the credit. Modifications that expand access include increasing the maximum credit, raising the upper limit on income for credit claimants, and lowering the phase-out rate. Changes like these may also reduce effectiveness because the credit becomes more available to taxpayers for whom it is likely to be a windfall, while less of the increase is available to lower income people, who are more responsive to education price changes. These changes may also affect judgments about equity because the widening of the phase-out range would increase the share of the credit going to higher income taxpayers. However, the increase in the maximum credit benefits lower income filers as well as those with higher incomes. Modifications that reduce access include reducing the maximum credit and the phase-out income limit and increasing the phase-out rate. Modifications like these may concentrate the AOTC's benefit on lower income individuals and could increase effectiveness by reducing the windfall going to higher income taxpayers. Changes to the CTC/ACTC illustrate how structural changes interact to affect the criteria for evaluating the credit. For example, a modification that increases the credit per child and increases the income limit may have offsetting effects on judgments about equity by reducing the share of benefits going to low-income taxpayers while at the same time increasing the credit amount per child. However, raising the amount of the credit may not benefit lower income taxpayers to the extent that the refundability threshold and rate prevent them from accessing the full credit. Further adjustments, such as eliminating the current refundability threshold of $3,000 and making the credit refundable up to $1,000 at a refundability rate of 25 percent, may provide more benefits to lower income taxpayers.
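The arithmetic of that last adjustment can be sketched by comparing the current-law parameters from the text ($3,000 threshold, 15 percent rate) with the adjusted parameters (no threshold, 25 percent rate, still refundable up to $1,000 per child). This is a hedged illustration of the proposal's mechanics, not an official estimate:

```python
def refundable_portion(earned_income, num_children, threshold, rate, per_child_cap=1000):
    """Refundable credit under a given threshold, phase-in rate, and per-child cap."""
    return min(max(0.0, earned_income - threshold) * rate, per_child_cap * num_children)

# One child, $6,000 of earnings:
current = refundable_portion(6000, 1, threshold=3000, rate=0.15)  # 15% of $3,000
proposed = refundable_portion(6000, 1, threshold=0, rate=0.25)    # 25% of $6,000, capped
print(current, proposed)  # 450.0 1000.0
```

Eliminating the threshold and raising the rate lets a low earner reach the per-child cap at a much lower earnings level, which is why such adjustments shift more of the benefit toward lower income taxpayers.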
However, the more adjustments are made, the harder it is to determine the net effect on equity. The RTCs share purposes and target populations with a variety of government spending programs and other provisions of the tax code. We previously estimated that, in 2012, 106 million people, or one-third of the U.S. population, received benefits from at least one of eight selected federal low-income programs: the ACTC, the EITC, the Supplemental Nutrition Assistance Program (SNAP), Supplemental Security Income (SSI), and four others. Almost two-thirds of the eight programs' recipients were in households with children, including many married families. Without these programs' benefits, we estimated that 25 million of these recipients would have been below the Census Bureau's Supplemental Poverty Measure (SPM) poverty threshold. Of the eight programs, the EITC and SNAP moved the most people out of poverty. In addition, the AOTC interacts with other spending provisions, like Pell grants, and tax provisions, like the Lifetime Learning Credit and the deduction for tuition and fees, to provide subsidies for college attendance. This shared focus of certain tax benefits has led to consideration of their combined effect on incentives and complexity. As figure 10 shows, the combined effects of the EITC, CTC/ACTC, and the dependent exemption produce a steeper phase-in of total benefit amounts than that attributable to any of the tax benefits alone. As incomes increase, total benefits peak and then decline sharply when the phase-out range of the EITC is reached. How taxpayers respond to the RTCs will depend on their ability to sort out and assess the combined effects of all these tax benefits. Each RTC was the product of unique social forces and was designed to address a specific social need. As a result, it is unlikely that attempts were made to coordinate the credits' combined tax rates, combined subsidy rates, and combined effects on incentives, compliance, and administration.
The lack of coordination that leads to increased administrative and compliance burden is exemplified by the differing age limits that define an eligible child for different tax benefits. Interactions like these have raised concerns that the RTCs and other provisions may not be coordinated to be most effective. To increase coordination and transparency, a number of ways have been proposed to consolidate the tax benefits. Proposals include combining tax benefits for low-income taxpayers (such as the CTC/ACTC, the dependent exemption, and the child-related EITC) into a single credit, or combining child-related benefits into a single credit while creating a separate work credit based on earnings and unrelated to the number of children in the family. In a similar vein, proposals have been made to combine education tax benefits by using the AOTC to replace all other education tax credits, the student loan interest deduction, and the deduction for tuition and fees. These proposals may also expand certain features of the credit, such as increasing refundability or making the credit available for more years of post-secondary education. Consolidation can make incentives more transparent to taxpayers, increase simplicity, and decrease compliance and administrative burden to the extent that it includes harmonizing and simplifying the eligibility requirements. Each year the EITC, ACTC, and AOTC help millions of taxpayers—many of whom are low-income—who are working, raising children, and paying tuition. Nonetheless, challenges related to the RTCs' design and administration contribute to errors, improper payments, and taxpayer burden. Annual budget cuts have forced IRS officials to make difficult decisions about how best to target declining resources to ensure they can still meet agency-wide strategic goals of increasing taxpayer compliance, using resources more efficiently, and minimizing taxpayer burden.
In light of these budget cuts, it is essential that IRS take a strategic approach to identifying and addressing RTC noncompliance in an uncertain budget environment. IRS is working on a strategy to document current EITC compliance efforts and identify and evaluate potential new solutions to address improper payments, but this review does not include the other refundable credits. A more comprehensive approach could help IRS determine whether its current allocation of resources is optimal and, if not, what adjustments are needed. IRS is also missing opportunities to use available data to identify potential sources of noncompliance and develop strategies for addressing them. For example, IRS does not track the number of returns erroneously claiming the ACTC and AOTC identified through screening activities. This information would help IRS deepen its understanding of common errors made by taxpayers claiming these credits; IRS could then use these insights to develop strategies to educate taxpayers. IRS has also not yet evaluated the Department of Education's PEPS database of eligible educational institutions; these data could help IRS identify potentially erroneous AOTC returns. Finally, although IRS reviews the amount of revenue collected from EITC post-refund enforcement activities, it could not verify the reliability of those data during the timeframe of the GAO audit. By not taking the necessary steps to ensure the reliability of those data and linking them to tax assessments to calculate a collections rate, IRS lacks information required to assess its allocation decisions. Periodic reviews of collections data and analyses could help IRS officials more efficiently allocate limited enforcement resources by providing a more complete picture of compliance results and costs. Over the years we have recommended various actions IRS and Congress could take to reduce the tax gap; several of these would also help bolster IRS's efforts to address noncompliance with these credits.
For example, developing a better understanding of sole proprietor noncompliance and linking sole proprietor compliance efforts with broader tax gap reduction could help IRS identify sole proprietor noncompliance issues and address one of the drivers of EITC noncompliance. Providing IRS with the authority to regulate paid preparers would also help. In addition, as we recommended in 2011, we continue to believe that Congress should consider providing IRS with math error authority to use tax return information from previous years to enforce lifetime limit rules. If Congress grants this authority, it should also cover any refundable tax credits that contain such limits, including the AOTC. Structural changes to the credits, such as changes to eligibility rules, will involve trade-offs with respect to standard tax reform criteria, such as effectiveness, efficiency, equity, simplicity, and revenue adequacy. To strengthen efforts to identify and address noncompliance with the EITC, ACTC, and AOTC, we recommend that the Commissioner of Internal Revenue direct Refundable Credits Policy and Program Management (RCPPM) to take the following steps: 1. Building on current efforts, develop a comprehensive operational strategy that includes all the RTCs for which RCPPM is responsible. The strategy could include use of error rates and amounts, evaluation and guidance on the proper use of indicators like no-change and default rates, and guidance on how to weigh trade-offs between equity and return on investment in resource allocations. 2.
As RCPPM begins efforts to track the number of erroneous returns claiming the ACTC or AOTC identified through pre-refund enforcement activities, such as screening filters and use of math error authority, it should develop and implement a plan to collect and analyze these data that includes such characteristics as identifying timing goals, resource requirements, and the appropriate methodologies for analyzing and applying the data to compliance issues. 3. Assess whether the data received from the Department of Education's PEPS database (a) are sufficiently complete and accurate to reliably correct tax returns at filing and (b) provide additional information that could be used to identify returns for examination; if warranted by this research, IRS should use this information to seek legislative authority to correct tax returns at filing based on PEPS data. 4. Take necessary steps to ensure the reliability of collections data and periodically review that data to (a) compute a collections rate for post-refund enforcement activities and (b) determine what additional analyses would provide useful information about compliance results and costs of post-refund audits and document-matching reviews. We provided a draft of this report to Treasury and IRS. Treasury provided technical comments, which we incorporated where appropriate. In written comments, reproduced in appendix IV, IRS agreed with three of our four recommendations and described certain actions that it plans or is undertaking to implement them. After sending us written comments, IRS informed us it could not verify the reliability of the collections data it provided during the timeframe of our audit. We removed these data from the report and modified our fourth recommendation to address data reliability.
The revised recommendation states that IRS should take necessary steps to ensure the reliability of collections data and then periodically review those data to compute a collections rate for post-refund enforcement activities and determine what additional analyses would provide useful information. In response to this recommendation, IRS stated it is taking steps to verify the reliability of the collections data, but that further analysis would not be beneficial because the majority of RTC audits are pre-refund. However, we found that a significant amount of enforcement activity is occurring in the post-refund environment. According to IRS data, IRS conducted 87,000 EITC post-refund audits and over 1 million document-matching reviews in 2014. We recognize that gathering collections data has costs and that the data have limitations, notably that not all recommended taxes are collected. However, use of these data, once IRS is able to verify their reliability, could better inform resource allocation decisions and improve the overall efficiency of enforcement efforts. In fact, the Internal Revenue Manual states that examiners are expected to consider collectability as a factor in determining the scope and depth of an examination. IRS also stated that previous studies have indicated that post-refund audits of RTCs have a high collectability rate. However, the studies that IRS provided did not include collection rates for the EITC, ACTC, or AOTC. IRS further cautioned that collections can be influenced by factors like the state of the economy; however, an appropriate statistical methodology would take such factors into account. Finally, opportunities may exist to reduce the costs of data collection efforts, for example, if they are coordinated as part of an agency-wide analysis of the costs and results of various enforcement efforts.
IRS disagreed with our conclusion that its compliance strategy and selection criteria for its pre-refund compliance program do not consider equity and compliance burden. In its comments, IRS described its audit selection process but did not explain how it measures equity or compliance burden. Without such measures, it is not possible to assess whether IRS is achieving its strategic goals of increasing taxpayer compliance, using resources more efficiently, and minimizing taxpayer burden. Finally, IRS stated that nonresponse to its taxpayer inquiries is a strong indicator of noncompliance but did not provide data to support this assumption. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or mctiguej@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix V.
This report (1) describes the claimant population, including the number of taxpayers and the amount they claim along with other selected characteristics, for the Earned Income Tax Credit (EITC), Additional Child Tax Credit (ACTC), and American Opportunity Tax Credit (AOTC); (2) describes how the Internal Revenue Service (IRS) administers these credits and what is known about the administrative costs and compliance burden associated with each credit; (3) assesses the extent to which IRS identifies and addresses noncompliance with these credits and collects improperly refunded credits; and (4) assesses the impact of selected proposed changes to elements of the EITC, ACTC, and AOTC with respect to three criteria for a good tax system: efficiency, equity, and simplicity. To describe the taxpayer population claiming the EITC, ACTC, and AOTC, we used the IRS Statistics of Income (SOI) Individual Study for tax years 1999 to 2013. The SOI Individual Study is intended to represent all tax returns filed through annual samples of unaudited individual tax returns (about 330,000 returns in 2013), which are selected using a stratified, random sample. IRS performs a number of quality control steps to verify the internal consistency of SOI sample data. For example, it performs computerized tests to verify the relationships between values on the returns selected as part of the SOI sample and edits data items to correct for problems, such as missing items. The SOI data are widely used for research purposes and include information on returns prior to changes due to IRS audits. We used SOI data to describe the number of returns claiming credits, the credit amounts, and characteristics about credit claimants, such as filing status or adjusted gross income (AGI) for each credit. When necessary, we combined the nonrefundable Child Tax Credit (CTC) with the ACTC, referring to the combined credit as the CTC/ACTC.
We did this when their combined effect is at issue or to facilitate comparison with other RTCs that do not break out refundable and nonrefundable components. Similarly, we combined the refundable and nonrefundable portions for AOTC estimates. However, unlike the other credit amounts, SOI data do not report the nonrefundable AOTC amounts. Estimating the level of nonrefundable AOTC requires decomposing the nonrefundable education credits into AOTC and other nonrefundable education credit amounts using education expense amounts and other line items reported on the tax return that determine the taxpayer's eligibility for claiming the credit. These computations are done by tax return prior to producing the aggregate total AOTC estimates. We reviewed documentation on SOI data, interviewed IRS officials about the data, and conducted several reliability tests to ensure that the data excerpts we used for this report were sufficiently complete and accurate for our purposes. For example, we electronically tested the data for obvious errors and used published data as a comparison to ensure that the data set was complete. The SOI estimates of totals and averages in the report, excluding ITIN estimates, have a margin of error of less than 3.5 percent of the estimates unless otherwise noted. The SOI percentages, excluding ITIN percentages, have a margin of error of less than 1 percentage point unless otherwise noted. Totals based on ITIN returns have a margin of error of less than 18 percent of the estimates unless otherwise noted. Percentages and ratios based on ITIN filers have a margin of error of less than 8 percentage points unless otherwise noted. We concluded that the data were sufficiently reliable for the purposes of this report. To describe how IRS administers these credits, we reviewed documentation on program procedures from the Internal Revenue Manual (IRM), internal documents describing audit procedures, and memorandums from IRS officials.
We also interviewed IRS officials who oversee or who work on administering the refundable tax credits. To describe what is known about the administrative costs, we reviewed information IRS provided us on processing returns and conducting audits. To supplement these cost data, we spoke with IRS and Treasury officials about challenges IRS faces in administering the credits. To describe the compliance burden associated with each credit, we collected and reviewed IRS forms, worksheets, and instructions for each credit. We also reviewed the National Taxpayer Advocate's annual reports to Congress, including the most serious issues affecting taxpayers. Finally, we interviewed experts involved with tax preparation to determine challenges taxpayers face when claiming the credits. To assess the extent to which IRS identifies and addresses noncompliance with these credits and collects improperly refunded credits, we reviewed reports by GAO, IRS, the Treasury Inspector General for Tax Administration (TIGTA), the National Taxpayer Advocate (NTA), the Congressional Research Service (CRS), and the Congressional Budget Office (CBO) on challenges IRS faces to reduce EITC, ACTC, and AOTC noncompliance and steps IRS is taking to address those challenges. We also reviewed relevant strategic and performance documents such as annual financial and performance reports; education and outreach plans; annual planning meeting minutes; and project summary reports. We met on a regular basis throughout the engagement with IRS officials responsible for developing and implementing RTC policy to determine the scope and primary drivers of RTC noncompliance as well as the steps IRS is taking to address those challenges.
We integrated information from our document review and interviews to describe and assess IRS compliance efforts—including steps IRS is taking to implement specific programs and projects, how IRS's internal controls ensure that specific efforts are being pursued as intended, how IRS monitors and assesses the progress of specific efforts toward reducing noncompliance, and how IRS incorporates new data to adjust its strategy as needed. We compared IRS efforts to develop, implement, and monitor compliance efforts to criteria in Standards for Internal Control in the Federal Government and federal guidance on performance management. We also applied the criteria concerning the administration, compliance burden, and transparency that characterize a good tax system, as developed in our guide for evaluating tax reform proposals. To evaluate compliance with the refundable credits, we used audit data from the National Research Program (NRP) for tax years 2009 to 2011, the most recent years for which data were available. NRP audits are like other IRS audits, but they can be used for population estimates of taxpayer reporting compliance. The goal of the NRP is to provide data to measure payment, filing, and reporting compliance of taxpayers, which are used to inform estimates of the tax gap and provide information to support development of IRS strategic plans and improvements in workload identification. The NRP audits are designed to represent the domestic taxpayer population through an annual sample of returns (about 14,000 returns in 2011), which are selected for NRP audits using a stratified, random sample. One potential source of nonsampling error arises when the taxpayer does not respond to the NRP audit, so audit results may not reflect the taxpayer's true eligibility for the RTCs.
For the calculations in this report, audit observations within the data that correspond to nonrespondent filers are given observation weights of zero (i.e., the observations do not influence the calculations). In contrast, IRS’s compliance study of the EITC produced high and low estimates for overclaim rates, where the former assumes the nonrespondents to be generally noncompliant and the latter assumes the nonrespondents to be as compliant as the respondent observations. Data for analysis include amounts reported by taxpayers on their tax returns and corrected amounts that were determined by examiners. Using NRP data, we estimated the errors and mistakes individual taxpayers made claiming the EITC, ACTC, and AOTC on their Forms 1040, U.S. Individual Income Tax Return. We present the results as a percent of the credit amounts claimed. We reviewed documentation on the NRP, interviewed IRS officials about the data, and conducted several reliability tests to ensure that the data excerpts we used for this report were sufficiently complete and accurate for our purposes. For example, we electronically tested the data for obvious errors and used totals from our analysis of SOI data as a comparison to ensure that the data set was complete. We concluded that the data were sufficiently reliable for the purposes of this report. See appendix II for further discussion of our NRP estimation techniques and for information about the sampling errors of our estimates. To assess the impact of selected proposed changes to elements of the EITC, ACTC, and AOTC, we first identified proposals to improve the three refundable tax credits through a literature review on RTCs. Our literature search started with a review of studies and reports issued by government agencies including GAO, IRS, CRS, CBO, JCT, and TIGTA. We supplemented this search with academic literature and studies produced by think tanks and professional organizations. 
Additionally, we inquired of agency officials and subject-matter experts for relevant studies. We then interviewed external subject-matter experts from government, academia, think tanks, and professional organizations knowledgeable about refundable tax credits in general and specifically the EITC, ACTC, and AOTC. We spoke to those with expertise on how IRS administers RTCs, how low-income taxpayers claim the credits, and how tax preparers interact with the credits. We conducted interviews to obtain views of experts on criteria commonly used to evaluate refundable tax credits and possible modifications to the credit. The experts were from across the ideological spectrum. The views from these interviews are not generalizable. Based on these interviews and our review of studies, we drew conclusions about the likely impact of modifying elements of the RTC with respect to three criteria we identified for a good tax system: efficiency, equity, and simplicity. We conducted this performance audit from July 2015 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Error rates by credit are computed using National Research Program (NRP) data. The Child Tax Credit (CTC) is combined with the Additional Child Tax Credit (ACTC) and shown as an aggregated credit amount for the CTC/ACTC. The American Opportunity Tax Credit (AOTC) includes refundable and nonrefundable portions, where the refundable portion of the credit benefits the taxpayer regardless of the tax liability. The AOTC estimates combine refundable and nonrefundable portions. 
The nonrefundable portion of the AOTC is estimated as the proportion of total nonrefundable education credits that is from claiming the AOTC. Eligibility for claiming the different education credits can vary by adjusted gross income (AGI), filing status, and the year the return was filed. Statistics of Income (SOI) data were used to estimate these proportions of AOTC to total nonrefundable education credits. These proportions were multiplied by the NRP total nonrefundable credit values for each tax return, yielding an estimate of the nonrefundable portion of the AOTC for that return. Measurement errors for AOTC estimates shown in tables 4 through 8 reflect sampling errors from NRP data only and do not reflect sampling errors from SOI data, which were used to estimate the proportion of nonrefundable AOTC claimed from nonrefundable education credits within NRP data. The credit adjustment, or error, is the difference between the credit amount originally claimed by the taxpayer and the correct credit amount, as determined by the NRP audit. The net credit adjustments can be separated into audited returns that received negative and positive adjustments. Negative adjustments, or credit overclaims, occur when the taxpayer claimed the credit but either did not qualify for the credit or the credit amount originally claimed was adjusted downward. Credit overclaim amounts represent a potential for revenue loss to the government, where taxpayers incorrectly claim a tax benefit. Similarly, positive adjustments, or credit underclaims, occur when the taxpayer either failed to claim the credit or the credit amount originally claimed was adjusted upward. Credit underclaim amounts represent a potential expense for the government, where taxpayers forgo available tax benefits. Using NRP data (2009 to 2011), the annual average credit and credit adjustment amounts are shown in table 4.
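The adjustment arithmetic just described (overclaims as negative adjustments, underclaims as positive ones, and error rates as adjustments divided by the net credit claimed before audit) can be sketched as follows. The dollar figures are invented for illustration, and the sketch omits the NRP sampling weights applied in the actual estimates.

```python
# Each record: credit amount originally claimed and the corrected amount
# determined by the audit. Figures are illustrative only, not NRP values.
audited_returns = [
    {"claimed": 3000, "corrected": 2000},  # overclaim (negative adjustment)
    {"claimed": 1500, "corrected": 1500},  # no change
    {"claimed": 0,    "corrected": 800},   # underclaim (positive adjustment)
]

def adjustments(returns):
    # Adjustment = corrected amount minus amount originally claimed.
    over = sum(min(r["corrected"] - r["claimed"], 0) for r in returns)
    under = sum(max(r["corrected"] - r["claimed"], 0) for r in returns)
    return over, under

def error_rate(adjustment, returns):
    # Error rate = adjustment amount / net credit claimed prior to audit.
    claimed = sum(r["claimed"] for r in returns)
    return adjustment / claimed

over, under = adjustments(audited_returns)
print("overclaims:", over, "underclaims:", under)
print("overclaim error rate:", error_rate(over, audited_returns))
```

In the real estimates, each observation also carries a sampling weight (zero for nonrespondent filers, as described above), so the sums would be weighted before the division.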
The error rates are computed as the credit adjustment amount divided by the net credit amount claimed by the taxpayers prior to the NRP audit, where the credit adjustment may represent all returns claiming, overclaiming, or underclaiming the credit. These error rates for all credit claimants are computed for 2011 and for 2009 to 2011, as shown in table 5. The precision of these estimates generally increases when using 3 years instead of a single year of data. The numbers of overclaim and underclaim returns as a percent of all returns claiming the credits are shown in table 6. The overclaim error rates are computed for Schedule C and non-Schedule C returns and for returns based on the preparer of the return, as shown in tables 7 and 8. The following is a summary of the findings in the policy literature on the effect of the current design of the Earned Income Tax Credit (EITC), the Additional Child Tax Credit (ACTC), and the American Opportunity Tax Credit (AOTC) on the effectiveness, efficiency, equity, and simplicity of these credits. This description can be viewed as a baseline against which to compare specific proposals that are advanced to improve the credits. For example, a proposal to change the EITC would be evaluated, at least in part, on its effect on poverty rates judged against the poverty reduction under the current EITC structure. The EITC provides financial assistance to a relatively large proportion of its target population of low-income taxpayers. As mentioned earlier in this report, the EITC was claimed by about 29 million people in 2013 for an average amount of about $2,300. These claimants represent over 85 percent of the eligible population – a large participation rate for a government antipoverty program. For example, the participation rate for TANF recipients is estimated at about 34 percent and for SSI recipients at about 67 percent in 2011, and the rate for SNAP was 83 percent in 2012.
One purpose of the EITC is to increase employment among low-income taxpayers by providing incentives for claimants to become employed or to increase the hours they work if they are already employed. The empirical evidence shows that the EITC has had a strong effect on labor force participation for certain claimants but much less, if any, effect on hours worked. The EITC has led more single mothers to enter the workforce. However, the effect on labor force participation for secondary workers (for example, a spouse of someone already in the labor force) is inconclusive with studies showing no effect or a small reduction in labor force participation. In addition, studies have shown that the EITC has little or no effect on hours worked by credit claimants already in the labor force. The EITC affects efficiency directly because it changes the behavior of workers that claim it and indirectly because it is funded through the tax system where tax rate differences can also change taxpayer behavior. However, the size of these effects, if any, has not been measured. As described in our 2012 report, a full evaluation of the EITC or any tax expenditure would require information on the total benefits of the credit as well as its costs, including efficiency costs. When examining the impact the EITC has on fairness or equity, research has tended to focus on how the credit affects poverty rates and tax burdens among different groups of recipients. The EITC has also been shown to be effective in reducing the percentage of low-income working people living in poverty. Nearly all studies that we reviewed show that the EITC has had a substantial effect on reducing poverty on average among all recipients and particularly those with children. For example, the U.S. Census Bureau found that in 2012 the refundable tax credits reduced the poverty rate by 3 percentage points for all claimants and by 6.7 percentage points for claimants with children. 
However, studies show a much smaller effect on poverty for childless workers. A Congressional Research Service analysis found that in 2012 the EITC reduced unmarried and married childless workers' poverty rates by 0.14 percentage points and 1.39 percentage points, respectively. These differences in the effect on poverty rates are not unexpected given the much smaller credit amounts available for childless workers. The effect of the EITC on vertical equity can be judged based, at least in part, on the distribution of the credit's benefits by income level. As figure 4 earlier in this report shows, EITC claimants have lower incomes than the population of claimants for the other refundable tax credits. As figure 4 also shows, a greater share of EITC benefits goes to lower-income taxpayers. More than half (62 percent) of the EITC benefits go to taxpayers making less than $20,000. The EITC's effect on horizontal equity depends on whether its eligibility rules and the credit rates that apply to different types of taxpayers are viewed as appropriate. For example, the current credit has very different rates for taxpayers with and without children (for 2015, a maximum of $503 for childless workers vs. a maximum of $6,242 for families with three or more children). The result is that the EITC benefits mostly families with children and provides very little benefit to childless workers. This difference in credit amounts may reflect, in part, judgments about horizontal equity because larger families may be viewed as having greater costs to achieve the same standard of living than smaller families. However, some studies have shown that differences in EITC benefits may overstate the difference in costs between childless and other families.
For example, one study estimated the credit's benefits in terms of the reduction in effective tax rates and found that benefits were considerably larger for households with children compared to those without, even after family incomes were adjusted to account for family size. When the study compared families with incomes equivalent to $10,000, it found that effective tax rates range from -1.47 percent for a married couple with no children to -39.21 percent for a head-of-household return with two children, a difference of more than a third of income. Concerns have been raised that the credit may provide unintended incentives that discourage people from marrying to avoid a reduction in their EITC (the "marriage penalty"). The marriage penalty occurs when married EITC recipients receive a smaller EITC as a married couple than their combined EITCs as single tax filers. The EITC can create marriage penalties for low-income working couples who qualify for the EITC if, when they marry, the combined household income rises into the EITC phase-out range or beyond, reducing or completely eliminating the credit. However, while limited, the research on this issue indicates that the EITC's effects on marriage patterns are small and ambiguous. In addition, a marriage bonus is also possible when two very low-income people marry and their earnings increase, but not enough to put them into the phase-out range of the credit. The EITC is a complicated tax provision that is difficult for taxpayers to comply with and for IRS to administer. As explained earlier in this report, the difficulties arise from the EITC's complex rules and formulas. In particular, the rules that determine whether a child qualifies the taxpayer to claim the credit are a major source of taxpayer compliance burden.
However, the participation rate for eligible taxpayers is relatively high when compared to other antipoverty programs, and administrative and compliance costs are likely to be lower for the EITC. The CTC was created in 1997 as a nonrefundable tax credit for most families to help ease the financial burden that families incur when they have children. Since then, the amount of the credit per child has increased, and the current ACTC was introduced to make the CTC partially refundable for more families. The current structure of the CTC/ACTC also subsidizes the costs of rearing children through the $1,000-per-child credit and subsidizes employment through the ACTC's phase-in income range, which increases the amount of the credit as the taxpayer's earned income increases. The CTC/ACTC provides financial assistance to a relatively large number of people in its target population of families with children. According to our analysis of IRS data, the CTC/ACTC was claimed on about 36 million returns in 2013 for an average amount claimed of $1,537. The credit supplies up to $1,000 per child in assistance, which may be a significant amount for lower-income taxpayers but becomes a decreasing percentage of income as income increases toward the phase-out threshold of $110,000 for taxpayers who are married and filing jointly. There is currently little research evaluating how taxpayers respond to the CTC/ACTC's wage incentives. The ACTC encourages work by providing a wage subsidy of 15 cents for every dollar of earnings above $3,000 until the credit maximum of $1,000 per child is reached. Because both the ACTC and EITC subsidize earnings over the same income range, researchers find it difficult to isolate the ACTC's effects on employment from those of the similarly structured but larger subsidy provided by the EITC. In the absence of any evidence concerning the effectiveness of the credits, no conclusions can be drawn about their effect on efficiency.
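The ACTC phase-in described above (15 cents per dollar of earnings above $3,000, capped at $1,000 per qualifying child) can be expressed as a simplified sketch. This deliberately ignores the credit's interaction with the nonrefundable CTC, the taxpayer's tax liability, and the phase-out at higher incomes, all of which affect the amount actually received.

```python
def actc_phase_in(earned_income, num_children, threshold=3000,
                  phase_in_rate=0.15, per_child=1000):
    """Simplified ACTC phase-in only: a 15 percent wage subsidy on earnings
    above $3,000, capped at $1,000 per qualifying child. Omits the CTC/tax
    liability interaction and the high-income phase-out."""
    subsidy = phase_in_rate * max(0, earned_income - threshold)
    return min(subsidy, per_child * num_children)
```

For example, under this simplified formula a family with two children and $12,000 of earnings would see a subsidy of 0.15 x ($12,000 - $3,000) = $1,350, below the $2,000 two-child cap, which illustrates why the credit phases in gradually for low earners.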
The conversion of the CTC into the broader partially refundable CTC/ACTC may affect judgments about vertical equity by changing the income distribution of tax credit benefits from what it would be under the CTC alone. The ACTC concentrates more of the benefits of the CTC/ACTC among lower income households. Because the ACTC is refundable and the refundability threshold has been reduced to $3,000, more lower income filers with no or very low tax liability can qualify for the ACTC than qualify for the CTC. As figure 11 shows, the ACTC significantly increases the availability of the tax benefit for lower income taxpayers with children. However, according to our analysis of IRS data, the combined CTC/ACTC does not provide as great a share of benefits to lower income taxpayers as the EITC does. About 22 percent of the CTC/ACTC is claimed by taxpayers with less than $20,000 in income, whereas 62 percent of the EITC is claimed by taxpayers in this income range. The difference may be due in part to differences in the phase-in rates and ranges: the ACTC phases in at 15 percent beginning when earnings exceed $3,000, while the EITC has no phase-in threshold and can have a phase-in rate as high as 45 percent depending on the number of children. As a result, the EITC benefits are more front-loaded for lower income taxpayers than the CTC/ACTC benefits.

Views differ on the effect of the CTC/ACTC on horizontal equity. Some argue that families with children should receive this tax relief because the additional children reduce their ability to pay relative to families or individuals without children. Others, however, regard children as a choice parents make about how to use their resources, and hold that horizontal equity requires that people with the same income pay similar taxes. In their view, parents have children because they get satisfaction from this choice, and subsidies are no more warranted for this choice (on an ability to pay basis) than for any other purchase the parents make.
This disagreement highlights that, although the credit may promote a social good by providing assistance to families with children, the equity of this approach is still a matter of judgment.

The CTC/ACTC shares the complexity of the EITC and other tax provisions directed toward children and families, which derives from the rules for determining whether a child qualifies for the tax benefit. Like the EITC, the CTC/ACTC has relationship, age, and residency requirements that contribute to complexity. Applying the rules can be complicated because the CTC/ACTC rules are similar to, but not always the same as, the EITC rules. For example, the EITC requires that qualifying children be under 19 years old (or under 24 and in school), while the CTC/ACTC requires that qualifying children be under 17 years old. To further complicate matters, the CTC/ACTC adds a support test to the age, residency, and relationship requirements. Furthermore, these family-centered provisions are currently structured very differently, and the amount of the tax benefits changes with changing circumstances: the benefits can change when a parent marries, has an additional child, when a child gets older, or when income changes.

The AOTC provides financial assistance to students from middle-income families (like its predecessor, the Hope credit) who may not benefit from other forms of traditional student aid, like Pell Grants. But the AOTC, through its refundability provisions, also expands financial assistance to students from lower income families. Under the AOTC, claimants can receive up to $2,500 per student in credits for qualifying education expenses, with up to $1,000 of the credit being refundable. The AOTC was claimed on about 10 million returns in 2013. The Protecting Americans from Tax Hikes Act of 2015 made the AOTC a permanent feature of the tax code, replacing the nonrefundable Hope credit.
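For reference, the $2,500 maximum and $1,000 refundable portion cited above follow from the AOTC's statutory formula: 100 percent of the first $2,000 of qualified expenses plus 25 percent of the next $2,000, with 40 percent of the resulting credit refundable. A minimal sketch, with income phase-outs omitted:

```python
def aotc(expenses, refundable_share=0.40):
    """AOTC per student: 100% of the first $2,000 of qualified expenses
    plus 25% of the next $2,000, for a maximum credit of $2,500, of
    which 40% (up to $1,000) is refundable. Income phase-outs omitted."""
    credit = min(expenses, 2000) + 0.25 * min(max(expenses - 2000, 0), 2000)
    refundable = refundable_share * credit
    return credit, refundable

aotc(4000)   # (2500.0, 1000.0): maximum credit, maximum refundable amount
aotc(1500)   # (1500.0, 600.0): expenses below the first tier's ceiling
```

Because 40 percent of the $2,500 maximum is exactly $1,000, a filer with no tax liability can still receive up to $1,000 as a refund.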
The effectiveness of the AOTC in getting financial assistance to its target population depends in part on the incidence of the credit. The AOTC's benefits may be shifted to educational institutions if colleges and universities respond to the availability of the AOTC by increasing their tuition. We identified no current research on this institutional response to the AOTC, but there is evidence that institutions have not raised tuition in response to the Hope and Lifetime Learning Credits. However, recent research indicates that colleges may react by reducing other forms of institutional financial aid, so that credit claimants receive no net benefit from the credits. In contrast to the other education credits, the AOTC may also affect tuition if its refundability makes it more available to lower income claimants. If these students attend schools, like community colleges, that have more scope to raise tuition because their tuition is initially relatively low, they may face increased tuition and a reduced effective value of their AOTC. In this case, if tuitions rise, the cost of college would also go up for students ineligible for the AOTC.

To the extent that the AOTC reduces the after-tax cost of education, it provides a benefit that may influence decisions about college attendance. A goal of education tax benefits like the Hope Credit has been to increase college attendance, and the AOTC shares some of the cost-reducing features of this credit that could increase attendance. Research on education credits has not focused on the AOTC because, due to its relatively recent enactment, data are less available for the AOTC than for other education credits like the Hope and Lifetime Learning Credits. Studies have shown some, but not a large, impact on college attendance from these credits and other education tax incentives.
For example, a study found that tax-based aid increases full-time enrollment in the first 2 years of college for 18 to 19 year olds by 7 percent, and that the price sensitivity of enrollment suggests that college enrollment increases 0.3 percentage points per $100 of tax-based aid.

The AOTC shares features with other education credits related to the timing of the credit that may limit its effectiveness in promoting college attendance. The AOTC may be received months after education expenses are incurred, making it less useful for families with limited resources to pay education expenses. However, the refundability of the AOTC has made it more accessible to lower income households, where it may have a greater impact on college attendance than the Hope Credit did. Research indicates that students from lower income households are more sensitive than those from higher income households to changes in the price of a college education when deciding whether to attend.

If the AOTC can be shown to influence attendance decisions, it may also affect efficiency by increasing an activity with a positive externality. Education would have a positive externality if the benefit to society of the increased productivity and innovation due to a more educated populace is greater than the benefit to the individuals who make the college attendance decision and consider only their private benefit. When this is the case, the result may be under-investment in education from a social perspective. By lowering costs, the credit may increase the private return to investment in education, bringing it closer to the social return.

The conversion of the Hope Credit into the partially refundable AOTC may affect judgments about vertical equity by changing the income distribution of tax credit benefits. The refundability of the AOTC has increased the share of the credit's benefits received by lower income filers when compared to its predecessor, the Hope Credit.
According to our analysis of IRS data, about 20 percent of the AOTC in 2013 was claimed by filers making less than $20,000 per year. In the case of the Hope Credit in 2008 (the last year that credit was in effect), only about 6.8 percent of the credit was claimed by taxpayers earning less than $20,000 per year. As mentioned above, this shift to lower income taxpayers also has the potential to make the credit more effective and efficient. The effect on horizontal equity, as in the case of the child credits described above, depends on judgments about whether taxpayers should pay different taxes based on decisions about whether or not to attend college.

The complexity of the AOTC derives largely from its relationship to other education tax preferences. The AOTC is one of a variety of education tax benefits that students or their families can claim, which include the Lifetime Learning Credit and the tuition and fees deduction. These tax preferences differ in terms of their eligibility criteria, benefit levels, and income phase-outs. The value of the tax benefit also depends on the amount of student aid taxpayers or their children receive. Evidence indicates that, due to this complexity, taxpayers may not know which education tax preference provides the most benefit until they file their taxes, and calculating the tax benefit of each provision can "place substantial demands on the knowledge and skills of millions of students and families." In addition, as described in our 2012 report, filing for the AOTC is complex enough to raise concerns that some taxpayers choose not to claim a tax benefit like the AOTC or are not claiming the tax provision that provides the greatest benefit.
In addition to the contact named above, Kevin Daly, Assistant Director, Susan Baker, Russell Burnett, Jehan Chase, Adrianne Cline, Nina Crocker, Sara Daleski, Catrin Jones, Diana Lee, Robert MacKay, Ed Nannenhorn, Jessica Nierenberg, Karen O’Conor, Robert Robinson, Max Sawicky, Stewart Small, and Sonya Vartivarian made major contributions to this report.
Refundable tax credits are policy tools available to encourage certain behavior, such as entering the workforce or attending college. GAO was asked to review the design and administration of three large RTCs (the EITC, AOTC, and ACTC). The ACTC is sometimes combined with its nonrefundable counterpart, the Child Tax Credit. For this report GAO described RTC claimants and how IRS administers the RTCs. GAO also assessed the extent to which IRS addresses RTC noncompliance and reviewed proposed changes to the RTCs. GAO reviewed and analyzed IRS data, forms and instructions for claiming the credits, and planning and performance documents. GAO also interviewed IRS officials, tax preparers, and other subject-matter experts. The Earned Income Tax Credit (EITC), the Additional Child Tax Credit (ACTC), and the American Opportunity Tax Credit (AOTC) provide tax benefits to millions of taxpayers—many of whom are low-income—who are working, raising children, or pursuing higher education. These credits are refundable in that, in addition to offsetting tax liability, any excess credit over the tax liability is refunded to the taxpayer. In 2013, the most recent year available, taxpayers claimed $68.1 billion of the EITC, $55.1 billion of the CTC/ACTC, and $17.8 billion of the AOTC. Eligibility rules for refundable tax credits (RTCs) contribute to compliance burden for taxpayers and administrative costs for the Internal Revenue Service (IRS). These rules are often complex because they must address complicated family relationships and residency arrangements to determine who is a qualifying child. Compliance with the rules is also difficult for IRS to verify due to the lack of available third party data. The relatively high overclaim error rates for these credits (as shown below) are a result, in part, of this complexity. 
The average dollar amounts overclaimed per year for 2009 to 2011, the most recent years available, are $18.1 billion for the EITC, $6.4 billion for the CTC/ACTC, and $5.0 billion for the AOTC. IRS uses audits and automated filters to detect errors before a refund is sent, and it uses education campaigns and other methods to address RTC noncompliance. IRS is working on a strategy to address EITC noncompliance, but this strategy does not include the other RTCs. Without a comprehensive compliance strategy that includes all RTCs, IRS may be limited in its ability to assess and improve resource allocations. A lack of reliable collections data also hampers IRS's ability to assess allocation decisions. IRS is also missing opportunities to use available data to identify potential noncompliance. For example, tracking the number of returns erroneously claiming the ACTC and AOTC and evaluating the usefulness of certain third-party data on educational institutions could help IRS identify common errors and detect noncompliance. Proposals to change the design of RTCs, such as changing eligibility rules, will involve trade-offs in effectiveness, efficiency, equity, and simplicity.

GAO recommends that IRS (1) develop a comprehensive compliance strategy that includes all RTCs, (2) use available data to identify potential sources of noncompliance, (3) ensure the reliability of collections data and use them to inform allocation decisions, and (4) assess the usefulness of third-party data to detect AOTC noncompliance. IRS agreed with three of GAO's recommendations but raised concerns about the cost of studying collections data for post-refund enforcement activities. GAO recognizes that gathering collections data has costs. However, a significant amount of enforcement activity is occurring in the post-refund environment, and use of these data could better inform resource allocation decisions and improve the overall efficiency of enforcement efforts.
To obtain information on the number and kinds of issues identified by the FSCPE and bureau analysts and to determine how the bureau used the information developed during the Full Count Review program, we analyzed the work papers submitted by FSCPE members and other participants in the Full Count Review program. We also analyzed data from the bureau’s Count Review Information System, a database that the bureau used to track issues flagged during the review process. We did not independently verify the information it contained. To identify lessons learned for future improvements, we examined bureau training manuals, statements of work, process models, and other documents that described the objectives, processes, and decision-making criteria. We also reviewed the results of a survey the bureau conducted of FSCPE members that asked them to rate their experience with Full Count Review processes and tools, bureau staff, and the overall effectiveness of the Full Count Review program. In addition, we interviewed managers in the bureau’s Population Division and other officials responsible for implementing the Full Count Review program, as well as three FSCPE members. We performed our audit in Washington, D.C., and the bureau’s headquarters in Suitland, Maryland, between May 2001 and April 2002. Our work was done in accordance with generally accepted government auditing standards. On April 26, 2002, we requested comments on a draft of this report from the Secretary of Commerce. The Secretary forwarded the bureau’s written comments on June 11, 2002 (see app. II). We address them in the “Agency Comments and Evaluation” section of this report. Accurate census results are critical because the data are used to reapportion seats in the House of Representatives and for congressional redistricting. Moreover, census data remain an important element in allocating federal aid to state and local governments. 
With billions of dollars at stake, the data are scrutinized intensely for accuracy. To help ensure the accuracy of census data, the bureau conducted a number of quality assurance programs throughout the course of the census. One such program was the Full Count Review program, which was designed to rapidly examine, rectify if possible, and clear census data files and products for subsequent processing or public release. The bureau expected data analysts to identify data discrepancies, anomalies, and other data “issues” by checking the data for its overall reasonableness, as well as for its consistency with historical and demographic data, and other census data products. The Full Count Review program ran from June 2000 through March 2001. According to bureau officials, because the bureau could not complete the Full Count Review workload without a costly staff increase, some of the analysts’ work was contracted to members of the FSCPE, an organization composed of state demographers that works with the bureau to ensure accurate state and local population estimates. The bureau contracted with 53 FSCPE members who reviewed data for 39 states and Puerto Rico. Bureau employees reviewed data for the 11 remaining states and the District of Columbia without FSCPE representation in Full Count Review. Bureau and FSCPE analysts were to ensure that (1) group quarters were correctly placed or “geocoded” on census maps, and that their population counts and demographic characteristics appeared reasonable and (2) population counts of other areas were in line with population estimates. They were to describe each issue flagged and provide supporting documentation derived from bureau resources and/or resources of the respective state government. Additionally, bureau officials stated that staff from the regional offices reviewed demographic data from the 50 states, Puerto Rico, and the District of Columbia. 
They focused on identifying inconsistent demographic characteristics and did not necessarily concentrate on any one particular state or locality. The bureau reimbursed state governments for wages and expenses FSCPE members incurred. A separate set of employees from the bureau's Population Division assessed issues identified by Full Count Review analysts based on (1) the adequacy of the documentation supporting each issue, and (2) whether or not they believed the issue to be resolvable through follow-up research by the bureau. Those issues deemed to have adequate documentation were classified as a "group quarters," "housing unit," "household," or "other" issue. Bureau officials told us that the remaining issues could not be categorized because the nature of the issue could not be determined from the documentation.

Bureau data show that after reviewing census data for 39 states and Puerto Rico, FSCPE members identified a total of 1,402 issues, or about 29 percent of the 4,809 issues collectively flagged during Full Count Review (see table 1). Since the bureau has yet to resolve most of these issues, it is not known whether they are necessarily errors. Table 1 also shows that group quarters issues were those most frequently identified by the bureau, accounting for 1,599 of the 4,809 issues identified (33 percent). Group quarters issues relate to suspected discrepancies in the population counts and locations of prisons, dormitories, nursing homes, and similar group living arrangements. Analysts also identified 479 housing unit issues (10 percent of the total) and 288 household issues (6 percent of the total). With housing unit issues, the count of occupied housing units differed from what analysts expected; with household issues, the population data for occupied residences differed from what analysts expected. There were also 383 issues (8 percent) that the bureau classified as "other."
They contained questions concerning the demographic characteristics of the data, such as age, race, and gender. The bureau was unable to classify 2,060 issues (43 percent). Bureau officials told us that in these cases, analysts did not provide sufficient documentation for the bureau to determine the nature of the issue. According to bureau officials, bureau analysts identified a larger number of issues than FSCPE members, and a far larger number of issues for which the bureau could not assign a type, because bureau analysts used an automated process that compared data from the 2000 Census to independent benchmarks, such as the 1990 Census, and flagged any anomalies. This process alerted bureau officials that there were data discrepancies but did not indicate their nature. By comparison, FSCPE members compared census data to administrative records and other data, and were better able to document specific issues. Examples of the three issue categories and how they were found include the following:

Group quarters issues: Analysts noticed that the group quarters population count in a particular census tract of a large midwestern city appeared to be too high, while a neighboring tract had a correspondingly low group quarters population count. By comparing state administrative records to information obtained from bureau resources, analysts determined that bureau data had placed college dormitories in the wrong tract.

Housing unit issues: An urban area had seen a large amount of redevelopment since the 1990 Census. As part of this, several condominiums and apartment complexes were built, which substantially increased the number of housing units in a particular census tract. However, when the analyst compared population data from the 1990 Census and 2000 Census, the 2000 Census did not appear to reflect this increase, and it was flagged.
Household issues: Data from the 2000 Census appeared to accurately reflect the large amount of new house construction that had taken place within a specific census tract. However, because the population count differed from that indicated by other data sources, the analyst flagged it as an issue to avoid undercounting the population. Bureau officials told us that they used the Full Count Review program to identify systemic errors such as those that could be produced by software problems. None were found. The officials noted that the bureau generally did not use the Full Count Review program to resolve individual issues. According to bureau officials, the bureau corrected data for 5 of the 4,809 issues prior to the December 31, 2000, release of reapportionment data and the April 1, 2001, release of redistricting data. According to bureau officials, FSCPE members identified the five issues, all of which involved group quarters that were placed in the wrong locations, but the population counts were correct. They included (1) a military base in Nevada, (2) 10 facilities at a college in Wisconsin, (3) 9 facilities at a prison in New York City, (4) 14 facilities at a Washington prison, and (5) a federal medical center in Massachusetts. Bureau officials said that the bureau was able to correct these issues for two reasons. First, FSCPE analysts found them early in the Full Count Review program, while the bureau was processing a key geographic data file and was thus able to incorporate the corrections before the data were finalized. Second, the FSCPE analysts had thoroughly documented the issues and recommended how the bureau should correct the errors. The five errors did not require additional research or field verification. Bureau officials told us that they lacked the time to research the remaining issues, as well as field staff to inspect purported discrepancies prior to the release of the public law data. 
As a result, the bureau missed an important opportunity to verify and possibly improve the quality of the data, and instead the apportionment and redistricting data were released with more than 4,800 unresolved issues. Until these issues are resolved, uncertainties will surround the accuracy of the census data for the affected localities. Some of the issues might be resolved under the CQR program, which the bureau designed to respond to challenges to housing unit and group quarters population counts received from state, local, or tribal governments. However, as shown in table 2, of the 4,804 issues remaining after Full Count Review, 1,994 (42 percent) were referred to CQR, and of these, 537 (11 percent) were accepted for further investigation. The remaining 1,457 issues referred to CQR did not meet the bureau's documentation requirements and, consequently, the bureau took no further action on them (see app. I for the disposition of Full Count Review data issues by state).

The overall results of the Full Count Review program and FSCPE members' participation appear to be mixed. On the one hand, the bureau reported that the Full Count Review program was successful in that it met a number of performance goals. For example, the bureau reported that the Full Count Review program was comprehensive in its review of geography and content, and was completed in time to release the public law data on schedule. Moreover, between January and February 2001, the bureau surveyed the 40 entities that participated in Full Count Review, and the results suggest that most FSCPE members were satisfied with their Full Count Review experience. For example, respondents indicated that they were generally satisfied with such aspects of the program as its processes and technical tools, bureau staff, and the overall effectiveness of the review in terms of positioning states to use and understand census data.
In addition, bureau officials believe the Full Count Review program benefited from FSCPE members' local demographic knowledge. Nevertheless, our review of the Full Count Review program highlighted several areas where there is room for future improvement. It will be important for the bureau to address these shortcomings, as its preliminary plans call for a similar operation as part of the 2010 Census. According to bureau officials, the bureau plans to include a Full Count Review program in census tests it expects to conduct later in the decade. Foremost among the areas in need of improvement is resolving, to the extent practical, a larger number of data issues prior to the release of apportionment data by December 31 of the census year, and redistricting data by April 1 of the following year. We found three factors that limited the bureau's ability to do so.

First, according to bureau officials, resolving individual issues was outside the scope of the Full Count Review program. They explained that the program was poorly integrated with other census operations and units that could have investigated the issues and corrected the data if warranted. This was because the Full Count Review program, with FSCPE participation, was not conceived until February 1999, which was extremely late in the census cycle, coming just 14 months before Census Day, April 1, 2000. The timing of the decision stemmed from the Supreme Court's January 1999 ruling that prohibited the bureau from using statistical sampling for purposes of congressional apportionment (the bureau originally planned a "one-number" census that would have integrated the results of a sample survey with the traditional census to provide one adjusted set of census numbers). Faced with the larger workload of reviewing two sets of data, adjusted and unadjusted, the bureau decided to enlist the help of FSCPE members in order to meet the deadlines for releasing the public law data.
Additionally, the bureau’s decision came after the 1998 dress rehearsal for the 2000 Census, which meant that the bureau had no opportunity to test the Full Count Review program in an operational environment. Bureau officials explained that if more time or staff were available in the future, it would be possible to correct a larger number of individual issues prior to the release of the public law data. They noted that field staff would be needed to help verify issues, and the effort would require close coordination with several bureau units. A second factor that affected the bureau’s ability to correct a larger number of issues was that the bureau’s requirements for documenting data issues were not clearly defined. For example, the training materials we examined did not provide any specific guidance on the type of evidence analysts needed to support data issues. Instead, the training materials told analysts to supply as much supporting information as necessary. This could help explain the variation that we observed in the quality of the documentation analysts provided. Indeed, while some analysts provided only minimal data, others supported issues with state and local administrative records, historical data, photographs, and maps. In some cases, the bureau had difficulty determining the precise nature of an issue or if in fact an issue even existed. In contrast, the CQR program provides comprehensive guidelines on the documentation required for making submissions. The guidance available on the bureau’s CQR web site notes that before the bureau will investigate concerns raised by government and tribal officials, such officials must first supply specific information. The guidance then details the information needed to support boundary corrections, geocoding and coverage corrections, and group quarters population corrections. 
A third, and related factor that affected the bureau’s ability to resolve a larger number of issues stemmed from the fact that the bureau had no mechanism for managing the Full Count Review workload. Unlike the CQR program, where the bureau required local governments to provide specific documentation before it would commit resources to investigate local data issues, the Full Count Review program had no filter for screening submissions based on the quality of the documentation. Better guidance on documenting issues for the Full Count Review program could make the bureau’s follow-up investigations more efficient. Another area where there is room for improvement concerns the consistency and clarity in which the bureau communicated the objectives of the Full Count Review program and how the bureau planned to use analysts’ input. For example, materials used to train FSCPE members noted that one purpose of Full Count Review was to document issues and “fix what can be fixed.” However, this appears to be inconsistent with statements made by bureau officials, who noted that resolving individual issues was beyond the scope of the Full Count Review program. Moreover, according to one bureau official, it was not clear internally what was meant by “fix what can be fixed.” None of the bureau’s documentation or training manuals that we reviewed explicitly stated that the bureau would only check for systemic errors. Because of the inconsistent message on the purpose of the Full Count Review program, the bureau may have set up the expectation that a larger number of issues would be resolved during Full Count Review. For example, one FSCPE member told us that he expected FSCPE members would identify any geographic discrepancies that contrasted with preliminary census data, and the bureau would investigate and make the necessary changes. 
He noted that both he and his staff were very “dismayed” to find out that certain discrepancies involving group quarters were not resolved prior to the release of the public law data. Another FSCPE member told us that participants were strongly motivated by the expectation that everything would be done to correct the census data. The Full Count Review program was one of a series of quality assurance efforts the bureau implemented throughout the census that helped ensure the bureau released accurate data. Moreover, FSCPE members’ participation, and specifically their expertise and knowledge of local geography, demographics, and housing arrangements, had the potential to identify data issues that the bureau might have otherwise missed. However, the fact that the apportionment and redistricting data were released with around 4,800 unresolved data issues of unknown validity, magnitude, and impact, is cause for concern, and indicates that the bureau missed an opportunity to verify and possibly improve the quality of the public law data. Given the importance of accurate census data and the resources that bureau staff and FSCPE members invest in the Full Count Review program, it will be important for the bureau to explore how to make better use of the program for correcting potential errors in census data in the future. It will also be important for the bureau to clarify the purpose of the Full Count Review program and convey that purpose clearly and consistently to FSCPE members. Doing so could help ensure that the bureau meets FSCPE members’ expectations. To help ensure the accuracy and completeness of census data and take full advantage of the Full Count Review program and FSCPE members’ participation, we recommend that the Secretary of Commerce direct the bureau to develop ways to resolve a larger number of data issues prior to the release of the public law data. 
Specifically, consideration should be given to (1) planning the Full Count Review program early in the census cycle and testing procedures under conditions as close to the actual census as possible, (2) integrating the Full Count Review program with other census organizational units and operations to ensure the bureau has sufficient time and field support to investigate issues, (3) developing clear guidelines on the minimum documentation needed for the bureau to investigate individual data issues, (4) categorizing issues on the basis of the quality and precision of the documentation, and investigating first those issues that are best documented and thus more easily resolved, and (5) exploring the feasibility of using staff from the bureau’s regional offices to help investigate data issues in the field prior to the release of public law data. Moreover, to ensure no expectation gaps develop between the bureau and FSCPE members, the Secretary of Commerce should also ensure that the bureau clarifies and consistently communicates to participants the objectives of the Full Count Review program and how the bureau plans to use the information derived from it. The Secretary of Commerce forwarded written comments from the Bureau of the Census on a draft of this report (see app. II). The bureau concurred with all of our recommendations and had no comments on them. The bureau also provided minor technical corrections that we incorporated in our report as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to other interested congressional committees, the Secretary of Commerce, and the Director of the Bureau of the Census. Copies will be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
Corinna Wengryn, Ty Mitchell, and Robert Goldenkoff made major contributions to this report. If you have any questions concerning this report, please contact me on (202) 512-6806. U.S. General Accounting Office. 2000 Census: Coverage Evaluation Matching Implemented As Planned, but Census Bureau Should Evaluate Lessons Learned. GAO-02-297. Washington, D.C.: March 14, 2002. U.S. General Accounting Office. 2000 Census: Best Practices and Lessons Learned for a More Cost-Effective Nonresponse Follow-Up. GAO-02-196. Washington, D.C.: February 11, 2002. U.S. General Accounting Office. 2000 Census: Coverage Evaluation Interviewing Overcame Challenges, but Further Research Needed. GAO-02-26. Washington, D.C.: December 31, 2001. U.S. General Accounting Office. 2000 Census: Analysis of Fiscal Year 2000 Budget and Internal Control Weaknesses at the U.S. Census Bureau. GAO-02-30. Washington, D.C.: December 28, 2001. U.S. General Accounting Office. 2000 Census: Significant Increase in Cost Per Housing Unit Compared to 1990 Census. GAO-02-31. Washington, D.C.: December 11, 2001. U.S. General Accounting Office. 2000 Census: Better Productivity Data Needed for Future Planning and Budgeting. GAO-02-4. Washington, D.C.: October 4, 2001. U.S. General Accounting Office. 2000 Census: Review of Partnership Program Highlights Best Practices for Future Operations. GAO-01-579. Washington, D.C.: August 20, 2001. U.S. General Accounting Office. Decennial Censuses: Historical Data on Enumerator Productivity Are Limited. GAO-01-208R. Washington, D.C.: January 5, 2001. U.S. General Accounting Office. 2000 Census: Information on Short- and Long-Form Response Rates. GAO/GGD-00-127R. Washington, D.C.: June 7, 2000.
To ensure the completeness and accuracy of the 2000 census data, Bureau of the Census analysts were to identify, investigate, and document suspected data discrepancies or issues to clear census data files and products for subsequent processing or public release. They were to determine whether and how to correct the data by weighing quality improvements against time and budget constraints. Because the bureau lacked sufficient staff to conduct a full count review on its own, it contracted out some of the work to members of the Federal-State Cooperative Program for Population Estimates (FSCPE). FSCPE documented 1,402 data issues, 29 percent of the 4,809 issues identified by both FSCPE and bureau analysts during the full count review. Of the 4,809 issues, 1,599 dealt with "group quarters," where counts for prisons, nursing homes, dormitories, and other group living facilities differed from what analysts expected. Of the 1,599 group quarters issues, FSCPE identified 567. Discrepancies relating to housing unit counts, population data, and demographic characteristics accounted for 1,150 issues, 375 of which were identified by FSCPE. Overall, of the 4,809 issues identified during review, 4,267 were not subjected to further investigation by the bureau because of insufficient documentation. Because the bureau's preliminary plans for the 2010 Census include a Full Count Review program, several areas warrant improvement. Foremost among these is the need for the bureau to investigate and resolve a larger number of issues before releasing the public law data.
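The issue counts above can be tallied as a quick consistency check. This is an illustrative sketch, not a bureau tool: the category labels are shorthand for the report's own categories, and the "other" bucket is derived by subtraction from the reported totals.

```python
# Hypothetical tally restating the Full Count Review issue counts reported
# above; the "other" category is derived by subtracting the two named
# categories from the reported totals.
issues = {
    "group quarters": {"total": 1599, "fscpe": 567},
    "housing, population, demographics": {"total": 1150, "fscpe": 375},
    "other": {"total": 4809 - 1599 - 1150, "fscpe": 1402 - 567 - 375},
}

total = sum(c["total"] for c in issues.values())
fscpe = sum(c["fscpe"] for c in issues.values())

print(total)                       # 4809 issues identified in all
print(round(100 * fscpe / total))  # FSCPE share: 29 percent
print(4809 - 4267)                 # only 542 issues were investigated further
```

The arithmetic confirms the figures are internally consistent: FSCPE's 1,402 issues are 29 percent of the 4,809 total, and only 542 issues had documentation sufficient for further investigation.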
Over the last three decades, Congress has enacted several laws to assist agencies and the federal government in managing IT investments. For example, to assist agencies in managing their investments, Congress enacted the Clinger-Cohen Act of 1996. More recently, in December 2014, Congress enacted IT acquisition reform legislation (commonly referred to as the Federal Information Technology Acquisition Reform Act or FITARA) that, among other things, requires OMB to develop standardized performance metrics, including cost savings, and to submit quarterly reports to Congress on cost savings. In carrying out its responsibilities, OMB uses several data collection mechanisms to oversee federal IT spending during the annual budget formulation process. Specifically, OMB requires federal departments and agencies to provide information related to their Major Business Cases (previously known as exhibit 300) and IT Portfolio Summary (previously known as exhibit 53). OMB directs agencies to break down IT investment costs into two categories: (1) O&M and (2) development, modernization, and enhancement (DME). O&M (also known as steady-state) costs refer to the expenses required to operate and maintain an IT asset in a production environment. DME costs refer to those projects and activities that lead to new IT assets/systems, or change or modify existing IT assets to substantively improve capability or performance. In addition, OMB has developed guidance that calls for agencies to develop an operational analysis policy for examining the ongoing performance of existing legacy IT investments to measure, among other things, whether the investment is continuing to meet business and customer needs. Nevertheless, federal IT investments have too frequently failed or incurred cost overruns and schedule slippages while contributing little to mission-related outcomes. 
The federal government has spent billions of dollars on failed and poorly performing IT investments, which often suffered from ineffective management in areas such as project planning, requirements definition, and program oversight and governance. Accordingly, in February 2015, we introduced a new government-wide high-risk area, Improving the Management of IT Acquisitions and Operations. This area highlights several critical IT initiatives underway, including reviews of troubled projects, an emphasis on incremental development, a key transparency website, data center consolidation, and the O&M of legacy systems. To make progress in this area, we identified actions that OMB and the agencies need to take. These include implementing the recently enacted statutory requirements promoting IT acquisition reform, as well as implementing our previous recommendations. In the last 6 years, we made approximately 800 recommendations to OMB and multiple agencies to improve effective and efficient investment in IT. As of October 2015, about 32 percent of these recommendations had been implemented. We have previously reported on legacy IT and the need for the federal government to improve its oversight of such investments. For example, in October 2012, we reported on agencies’ operational analysis policies and practices. In particular, we reported that although OMB guidance called for each agency to develop an operational analysis policy and perform such analyses annually, the extent to which the selected federal agencies we reviewed carried out these tasks varied significantly. The Departments of Defense (Defense), the Treasury (Treasury), and Veterans Affairs (VA) had not developed a policy or conducted operational analyses. As such, we recommended that the agencies develop operational analysis policies, annually perform operational analyses on all investments, and ensure the assessments include all key factors. 
Further, we recommended that OMB revise its guidance to include directing agencies to post the results of such analyses on the IT Dashboard. OMB and the five selected agencies agreed with our recommendations and have efforts planned and underway to address them. In particular, OMB issued guidance in August 2012 directing agencies to report operational analysis results along with their fiscal year 2014 budget submission documentation (e.g., exhibit 300) to OMB. Thus far, however, operational analyses have not been posted on the IT Dashboard. We further reported in November 2013 that agencies were not conducting proper analyses. Specifically, we reported on IT O&M investments and the use of operational analyses at selected agencies and determined that of the top 10 investments with the largest spending in O&M, only one, a Department of Homeland Security (DHS) investment, underwent an operational analysis. DHS’s analysis addressed most, but not all, of the factors that OMB called for (e.g., comparing current cost and schedule against original estimates). The remaining agencies did not assess their investments, which accounted for $7.4 billion in reported O&M spending. Consequently, we recommended that seven agencies perform operational analyses on their IT O&M investments and that DHS ensure that its analysis was complete and addressed all OMB factors. Three of the agencies agreed with our recommendations; two partially agreed; and two agencies had no comments. As discussed in our report, federal agencies reported spending the majority of their fiscal year 2015 IT funds on operating and maintaining a large number of legacy (i.e., steady-state) investments. Of the more than $80 billion reportedly spent on federal IT in fiscal year 2015, 26 federal agencies spent about $61 billion on O&M, more than three-quarters of the total amount spent. 
Specifically, data from the IT Dashboard show that, in fiscal year 2015, 5,233 of the government’s nearly 7,000 IT investments were spending all of their funds on O&M activities. Total O&M spending was a little more than three times the amount spent on DME activities (see figure 1). According to agency data reported to OMB’s IT Dashboard, the 10 IT investments spending the most on O&M for fiscal year 2015 totaled $12.5 billion, 20 percent of total O&M spending, and ranged from $4.4 billion on the Department of Health and Human Services’ (HHS) Centers for Medicare and Medicaid Services’ Medicaid Management Information System to $666.1 million on HHS’s Centers for Medicare and Medicaid Services IT Infrastructure investment (see table 1). Over the past 7 fiscal years, O&M spending has increased, while the amount invested in developing new systems has decreased by about $7.3 billion since fiscal year 2010. (See figure 2.) Further, agencies have increased O&M spending relative to their overall IT spending by 9 percentage points since fiscal year 2010. Specifically, in fiscal year 2010, O&M spending was 68 percent of the federal IT budget, while in fiscal year 2017, agencies plan to spend 77 percent of their IT funds on O&M. (See figure 3.) Further, 15 of the 26 agencies have increased their spending on O&M from fiscal year 2010 to fiscal year 2015, with 10 of these agencies having over a $100 million increase. The spending changes per agency range from an approximately $4 billion increase (HHS) to a decrease of $600 million (National Aeronautics and Space Administration). 
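The spending shares cited above can be recomputed from the reported dollar figures. This is a minimal sketch using the report's rounded numbers (in billions); it is an arithmetic illustration, not a query of the actual IT Dashboard data.

```python
# Recompute the O&M share of fiscal year 2015 federal IT spending from the
# rounded figures cited above (dollar amounts in billions).
total_fy2015 = 80.0  # reported total federal IT spending, FY 2015
om_fy2015 = 61.0     # reported O&M spending across 26 agencies

om_share = om_fy2015 / total_fy2015
print(f"{om_share:.0%}")  # 76% -- "more than three-quarters" of the total

# O&M as a share of the federal IT budget, FY 2010 actual vs. FY 2017 planned
share_2010, share_2017 = 68, 77
print(share_2017 - share_2010)  # 9 percentage-point increase
```

Note that the 68-to-77 change is a 9 percentage-point shift in budget share, which is why it is described that way rather than as a 9 percent increase in dollars.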
OMB staff in the Office of E-Government and Information Technology have recognized the upward trend in IT O&M spending and identified several contributing factors: (1) supporting O&M activities requires maintaining legacy hardware, which costs more over time, and (2) maintaining applications and systems that use older programming languages is increasingly expensive, since programmers knowledgeable in these languages are becoming rare. Further, OMB officials stated that in several situations where agencies are not sure whether to report costs as O&M or DME, agencies default to reporting them as O&M. According to OMB, agencies tend to categorize investments as O&M because such investments attract less oversight, require reduced documentation, and have a lower risk of losing funding. According to OMB guidance, the O&M phase is often the longest phase of an investment and can consume more than 80 percent of total lifecycle costs. As such, agencies must actively manage their investments during this phase. To help them do so, OMB requires that CIOs submit ratings that reflect the level of risk facing an investment. In addition, in instances where investments experience problems, agencies can perform a TechStat, a face-to-face meeting to terminate or turn around IT investments that are failing or not producing results. OMB also directs agencies to monitor O&M investments through operational analyses, which should be performed annually and assess costs, schedules, investment performance, and whether the investment is still meeting customer and business needs. Several O&M investments were rated as moderate to high risk in fiscal year 2015. Specifically, CIOs from the 12 selected agencies reported that 23 of their 187 major IT O&M investments were moderate to high risk as of August 2015. They requested $922.9 million in fiscal year 2016 for these investments. 
Of the 23 investments, agencies had plans to replace or modernize 19 investments. However, the plans for 12 of those were general or tentative in that the agencies did not provide specificity on time frames, activities to be performed, or functions to be replaced or enhanced. Further, agencies did not plan to modernize or replace 4 of the investments (see table 2). The lack of specific plans to modernize or replace these investments could result in wasteful spending on moderate and high-risk investments. While agencies generally conducted the required operational analyses, they did not consistently perform TechStat reviews on all of the at-risk investments. Although not required, agencies had performed TechStats on only five of the 23 at-risk investments. In addition, operational analyses were not conducted for four of these investments (see table 3). Agencies provided several reasons for not conducting TechStats and required assessments. For example, according to agency officials, several of the investments’ risk levels were reduced to low or moderately low risk in the months since the IT Dashboard had been publicly updated. Regarding assessments, one official stated that, in place of operational analyses, the responsible bureau reviews the status of the previous month’s activities for the development, integration, modification, and procurement to report issues to management. However, this monthly process does not include all of the key elements of an operational analysis. Until agencies ensure that their O&M investments are fully reviewed, the government’s oversight of old and vulnerable investments will be impaired and the associated spending could be wasteful. Legacy IT investments across the federal government are becoming increasingly obsolete. Specifically, many use outdated languages and old parts. Numerous old investments are using obsolete programming languages. 
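The review gap described above amounts to a simple screen: an at-risk O&M investment that received neither a TechStat nor an operational analysis has had no substantive oversight review. A minimal sketch of that screen follows; the investment records are invented for illustration and do not correspond to the agencies' actual data.

```python
# Illustrative screen for at-risk O&M investments lacking any review.
# The records below are invented; real data would come from agency CIO
# risk ratings and review logs.
investments = [
    {"name": "Inv A", "risk": "high",     "techstat": False, "op_analysis": True},
    {"name": "Inv B", "risk": "moderate", "techstat": False, "op_analysis": False},
    {"name": "Inv C", "risk": "low",      "techstat": False, "op_analysis": False},
]

AT_RISK = {"moderate", "high"}

# Flag moderate-to-high-risk investments with neither a TechStat nor an
# operational analysis on record.
unreviewed = [inv["name"] for inv in investments
              if inv["risk"] in AT_RISK
              and not (inv["techstat"] or inv["op_analysis"])]
print(unreviewed)  # ['Inv B']
```

In this sketch, the low-risk investment is excluded regardless of review status, mirroring the report's focus on the 23 moderate-to-high-risk investments.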
Several agencies, such as the Department of Agriculture (USDA), DHS, HHS, Justice, Treasury, and VA, reported using Common Business Oriented Language (COBOL)—a programming language developed in the late 1950s and early 1960s—to program their legacy systems. It is widely known that agencies need to move to more modern, maintainable languages, as appropriate and feasible. For example, the Gartner Group, a leading IT research and advisory company, has reported that organizations using COBOL should consider replacing the language and in 2010 noted that there should be a shift in focus to using more modern languages for new products. In addition, some legacy systems may use parts that are obsolete and more difficult to find. For instance, Defense is still using 8-inch floppy disks in a legacy system that coordinates the operational functions of the United States’ nuclear forces. (See figure 4.) Further, in some cases, the vendors no longer provide support for hardware or software, creating security vulnerabilities and additional costs. For example, each of the 12 selected agencies reported using unsupported operating systems and components in their fiscal year 2014 reports pursuant to the Federal Information Security Management Act of 2002. Commerce, Defense, Treasury, HHS, and VA reported using 1980s and 1990s Microsoft operating systems that stopped being supported by the vendor more than a decade ago. Lastly, legacy systems may become increasingly more expensive as agencies have to deal with the previously mentioned issues and may pay a premium to hire staff or contractors with the knowledge to maintain outdated systems. For example, one agency (SSA) reported re-hiring retired employees to maintain its COBOL systems. Selected agencies reported that they continue to maintain old investments in O&M. For example, Treasury reported systems that were about 56 years old. Table 4 shows the 10 oldest investments and/or systems, as reported by selected agencies. 
Agencies reported having plans to modernize or replace each of these investments and systems. However, the plans for five of those were general or tentative in that the agencies did not provide specific time frames, activities to be performed, or functions to be replaced or enhanced. Separately, in our related report, we profiled one system or investment from each of the 12 selected agencies. The selected systems and investments range from 11 to approximately 56 years old, and serve a variety of purposes. Of the 12 investments or systems, agencies had plans to replace or modernize 11 of these. However, the plans for 3 of those were general or tentative in that the agencies did not provide specificity on time frames, activities to be performed, or functions to be replaced or enhanced. Further, there were no plans to replace or modernize 1 investment. We have previously provided guidance that organizations should periodically identify, evaluate, and prioritize their investments, including those that are in O&M; at, near, or exceeding their planned life cycles; and/or are based on technology that is now obsolete, to determine whether the investment should be kept as-is, modernized, replaced, or retired. This critical process allows the agency to identify and address high-cost or low-value investments in need of update, replacement, or retirement. Agencies are, in part, maintaining obsolete investments because they are not required to identify, evaluate, and prioritize their O&M investments to determine whether they should be kept as-is, modernized, replaced, or retired. According to OMB staff from the Office of E-Government and Information Technology, OMB has created draft guidance that will require agencies to identify and prioritize legacy information systems that are in need of replacement or modernization. 
Specifically, the guidance is intended to develop criteria through which agencies can identify the highest priority legacy systems, evaluate and prioritize their portfolio of existing IT systems, and develop modernization plans that will guide agencies’ efforts to streamline and improve their IT systems. The draft guidance includes time frames for the efforts regarding developing criteria, identifying and prioritizing systems, and planning for modernization. However, OMB did not commit to a firm time frame for when the policy would be issued. Until this policy is finalized and carried out, the federal government runs the risk of continuing to maintain investments that have outlived their effectiveness and are consuming resources that outweigh their benefits. Regarding upgrading obsolete investments, in April 2016, the IT Modernization Act was introduced into the U.S. House of Representatives. If enacted, it would establish a revolving fund of $3 billion that could be used to retire, replace, or upgrade legacy IT systems to transition to new, more secure, efficient, modern IT systems. It also would establish processes to evaluate proposals for modernization submitted by agencies and monitor progress and performance in executing approved projects. Our report that is being released today contains 2 recommendations to OMB and 14 to selected federal agencies. Among other things, we recommend that the Director of OMB commit to a firm date by which its draft guidance on legacy systems will be issued, and subsequently direct agencies to identify legacy systems and/or investments needing to be modernized or replaced and that the selected agency heads direct their respective agency CIOs to identify and plan to modernize or replace legacy systems as needed and consistent with OMB’s draft guidance. If agencies implement our recommendations, they will be positioned to better manage legacy systems and investments. 
In commenting on a draft of the report, eight agencies (USDA, Commerce, HHS, DHS, State, Transportation, VA, and SSA) and OMB agreed with our recommendations. Defense and Energy partially agreed with our recommendation. Defense stated that it planned to continue to identify, prioritize, and manage legacy systems, based on existing department policies and processes, and consistent to the extent practicable with OMB’s draft guidance. Energy stated that while the department continues to take steps to modernize its legacy investments and systems, it could not agree fully with our recommendation because OMB’s guidance is in draft and the department has not had an opportunity to review it. Defense and Energy’s comments are consistent with the intent of our recommendation. Upon finalization of OMB’s guidance, we encourage both agencies to implement it. In addition, Justice and the Treasury stated that they had no comment on their recommendations. In summary, O&M spending has steadily increased over the past 7 years and, as a result, key agencies are devoting a smaller amount of IT spending to DME activities. Further, legacy federal IT investments are becoming obsolete and several aging investments are using unsupported components, many of which did not have specific plans for modernization or replacement. To its credit, OMB has developed a draft initiative that calls for agencies to analyze and review O&M investments. However, it has not finalized its policy. Until it does so, the federal government runs the risk of continuing to maintain investments that have outlived their effectiveness and are consuming resources that outweigh their benefits. Chairman Chaffetz, Ranking Member Cummings, and Members of the Committee, this completes my prepared statement. 
I would be pleased to respond to any questions that you may have at this time. If you have any questions on matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or at pownerd@gao.gov. Other key contributors include Gary Mountjoy (assistant director), Kevin Walsh (assistant director), Jessica Waselkow (analyst in charge), Scott Borre, Rebecca Eyler, and Tina Torabi. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The President's fiscal year 2017 budget request for IT was more than $89 billion, with much of this amount reportedly for operating and maintaining existing (legacy) IT systems. Given the magnitude of these investments, it is important that agencies effectively manage their IT O&M investments. GAO was asked to summarize its report being released today that (1) assesses federal agencies' IT O&M spending, (2) evaluates the oversight of at-risk legacy investments, and (3) assesses the age and obsolescence of federal IT. In preparing the report on which this testimony is based, GAO reviewed 26 agencies' IT O&M spending plans for fiscal years 2010 through 2017 and OMB data. GAO further reviewed the 12 agencies that reported the highest planned IT spending for fiscal year 2015 to provide specifics on agency spending and individual investments. The federal government spent more than 75 percent of the total amount budgeted for information technology (IT) for fiscal year 2015 on operations and maintenance (O&M) investments. Specifically, 5,233 of the government's approximately 7,000 IT investments are spending all of their funds on O&M activities. Such spending has increased over the past 7 fiscal years, resulting in a $7.3 billion decline in development, modernization, and enhancement activities from fiscal year 2010 to 2017. Many IT O&M investments in GAO's review were identified as moderate to high risk by agency CIOs, and agencies did not consistently perform required analysis of these at-risk investments. Until agencies fully review their at-risk investments, the government's oversight of such investments will be limited and its spending could be wasteful. Federal legacy IT investments are becoming increasingly obsolete: many use outdated software languages and hardware parts that are unsupported. Agencies reported using several systems that have components that are, in some cases, at least 50 years old. 
For example, the Department of Defense uses 8-inch floppy disks in a legacy system that coordinates the operational functions of the nation's nuclear forces. In addition, the Department of the Treasury uses assembly language code—a computer language initially used in the 1950s and typically tied to the hardware for which it was developed. OMB recently began an initiative to modernize, retire, and replace the federal government's legacy IT systems. As part of this, OMB drafted guidance requiring agencies to identify, prioritize, and plan to modernize legacy systems. However, until this policy is finalized and fully executed, the government runs the risk of maintaining systems that have outlived their effectiveness. The following table provides examples of legacy systems across the federal government that agencies report are 30 years or older and use obsolete software or hardware, and identifies those that do not have specific plans with time frames to modernize or replace these investments. In the report being released today, GAO is making multiple recommendations, one of which is for OMB to finalize draft guidance to identify and prioritize legacy IT needing to be modernized or replaced. In the report, GAO is also recommending that selected agencies address obsolete legacy IT O&M investments. Nine agencies agreed with GAO's recommendations, two partially agreed, and two stated they had no comment. The two agencies that partially agreed, the Departments of Defense and Energy, outlined plans that were consistent with the intent of GAO's recommendations.
Medicaid is the third largest social program in the federal budget and is also one of the largest components of state budgets. Although it is one federal program, Medicaid consists of 56 distinct state-level programs—one for each state, the District of Columbia, Puerto Rico, and each U.S. territory. Each state has a designated Medicaid agency that administers its program under broad federal guidelines. The federal government matches state Medicaid spending for medical assistance according to a formula based on each state’s per capita income. The federal share can range from 50 cents to 83 cents of each Medicaid dollar spent. HCFA administers the Medicaid program at the federal level. In accordance with the Medicaid statute, it sets broad guidelines for the states, but within them, each state establishes its own eligibility standards; determines the type, amount, duration, and scope of covered services; sets payment rates; oversees the integrity of its program; and develops its administrative structure. States are required to describe the nature and scope of their programs in a comprehensive written plan submitted to HCFA—with federal funding for state Medicaid services contingent on HCFA’s approval of the plan. HCFA is responsible for ensuring that state Medicaid programs meet all federal requirements. In addition to Medicaid, HCFA also has responsibility for administering Medicare, a federal health insurance program for certain disabled persons and those 65 years and older. While Medicaid and Medicare have different structures and governance, some low-income beneficiaries and many providers participate in both programs. There are also—in 47 states and the District of Columbia—separate MFCUs that are responsible for investigating and prosecuting Medicaid provider fraud, patient abuse, and financial fraud. In 1999, MFCUs received authority to investigate cases involving Medicare fraud as well. 
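The matching formula that produces the 50-to-83-cent range is the Federal Medical Assistance Percentage (FMAP), defined in section 1905(b) of the Social Security Act as a function of the squared ratio of state to national per capita income, with a 50 percent floor and an 83 percent ceiling. The sketch below illustrates that statutory formula; the income figures are invented for illustration, and this is not HCFA's actual computation code.

```python
# Illustrative sketch of the Medicaid federal matching rate (FMAP):
#   FMAP = 1 - 0.45 * (state per capita income / U.S. per capita income)^2,
# bounded between 0.50 and 0.83, per section 1905(b) of the Social
# Security Act. Income figures in the calls below are invented.
def fmap(state_pci: float, us_pci: float) -> float:
    rate = 1 - 0.45 * (state_pci / us_pci) ** 2
    return min(max(rate, 0.50), 0.83)  # apply the statutory floor and ceiling

print(f"{fmap(30_000, 30_000):.2f}")  # state at the national average: 0.55
print(f"{fmap(45_000, 30_000):.2f}")  # high-income state hits the 0.50 floor
print(f"{fmap(10_000, 30_000):.2f}")  # low-income state capped at 0.83
```

The squared income ratio means the match rises faster than income falls, which is why poorer states receive up to 83 cents of each Medicaid dollar while no state receives less than 50 cents.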
Most MFCUs are part of the state Attorney General’s office, and most prosecute the cases they investigate. MFCUs that have been federally certified for more than 3 years receive 75 cents in federal funding for every dollar they spend, up to a limit established by federal regulations. In addition to state Medicaid agencies and MFCUs, other state and federal agencies assist in dealing with Medicaid improper payments. Because of their responsibilities to ensure sound fiscal management in their states, state auditors or state inspectors general may become involved in Medicaid payment safeguard activities through efforts such as testing payment system controls or investigating possible causes of mispayment. At the federal level, the Federal Bureau of Investigation (FBI) and the OIG investigate, and U.S. Attorneys prosecute, certain Medicaid fraud cases, such as those that involve multiple states or also involve fraud against other health care programs. Funding for these agencies to pursue fraud and abuse in federal health care programs is available from the Health Care Fraud and Abuse Control Program (HCFAC). Established in 1996 by Section 201 of the Health Insurance Portability and Accountability Act (HIPAA), it funds, consolidates, and strengthens federal fraud control efforts under the Department of Justice (DOJ) and HHS. This fund provided $154.3 million in fiscal year 2000 to the OIG and DOJ. Separately, the FBI received an additional $76 million in HIPAA-specified funding for fiscal year 2000. Medicare has been the major focus of this effort, but Medicaid has also benefited. In its joint report with DOJ on the HCFAC fund, HHS reported returning nearly $45 million to Medicaid as a result of these fraud control activities for fiscal years 1997 through 1999. With state and federal Medicaid payments projected to total $221.6 billion this fiscal year, even a small percentage loss due to improper payments represents a significant loss to taxpayers. 
The magnitude of improper payments throughout Medicaid is unknown, although a few states have attempted to determine the level by measuring the accuracy of their program’s payments. Improper payments attributable to intentional fraud are even more difficult to identify; recent cases in California and other states provide examples of losses due to fraudulent activities. There are no reliable estimates of the extent of improper payments throughout the Medicaid program. However, at least three states have conducted studies to try to measure their program’s payment accuracy rates and pinpoint where payment vulnerability occurs, with varied success. Illinois, in 1998, reported an estimated payment accuracy rate of 95.3 percent of total dollars paid, with a margin of error of +/- 2.3 percentage points. The estimate was based on a sample of individual paid claims, for which the state reviewed medical records and interviewed patients to verify that services were rendered and medically necessary. As a result of this audit, the state identified key areas of weakness and targeted several areas needing improvement. For example, because the Illinois payment accuracy review indicated that nearly one-third of payments to nonemergency transportation providers were in error, the Illinois Medicaid program has taken a number of steps to improve the accuracy of payments to this provider type. Texas, also in 1998, reported an estimated payment accuracy rate of 89.5 percent in the acute medical care fee-for-service portion of the program. However, in making that estimate, reviewers had trouble locating many patients and records due to statutorily imposed time constraints. Further work led the state, in 1999, to revise the estimate to between 93.2 and 94 percent.
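Estimates like these are sample-based, which is why the states reported margins of error alongside their accuracy rates. The arithmetic can be sketched as a simple proportion with a normal-approximation margin of error; the figures below are chosen to resemble the Illinois results, the sample size is hypothetical, and the states’ actual stratified, dollar-weighted designs were more complex.

```python
import math

def accuracy_estimate(correct_dollars, sampled_dollars, n_claims, z=1.96):
    """Point estimate of payment accuracy and an approximate 95 percent
    margin of error, treating claim correctness as a simple proportion.
    (Real studies weight by dollars and use stratified sample designs.)"""
    p = correct_dollars / sampled_dollars
    moe = z * math.sqrt(p * (1 - p) / n_claims)
    return p, moe

# Hypothetical review: $953,000 of $1,000,000 in sampled payments verified
# as correct, across 300 sampled claims.
p, moe = accuracy_estimate(correct_dollars=953_000, sampled_dollars=1_000_000,
                           n_claims=300)
print(f"{p:.1%} +/- {moe:.1%}")  # 95.3% +/- 2.4%
```

The sketch also shows why small samples yield wide intervals: Kansas’s +/- 9 percentage points is what a much smaller (or more variable) sample would produce.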
In developing the estimate, the state identified ways to reduce improper payments through expanded use of computerized fraud detection tools, such as matching Medicaid eligibility records with vital statistics databases to avoid payments for deceased beneficiaries. In January 2001, Texas reported that a more recent study estimated a payment accuracy rate of 92.8 percent in its acute medical care fee-for-service payments. Kansas, in 2000, reported an estimated payment accuracy rate of 76 percent with a margin of error of +/- 9 percentage points. The estimate was based on a sample of individual paid claims, as in Illinois. The payment accuracy study recommended increased provider and consumer education, as well as improvements to computerized payment systems. In addition, Kansas officials undertook focused reviews of certain types of claims that were identified as vulnerable to abuse. In their payment accuracy studies, these states commonly identified errors such as missing or insufficient documentation to show whether the claim was valid; claims for treatments or services that were not medically necessary; claims that should have been coded for a lower reimbursement amount; and claims for treatments or services that the program did not cover. Because payment accuracy studies can provide useful guidance toward developing cost-effective measures to reduce losses, HCFA has sought HCFAC funding for grants to states for such efforts. HCFA also has established a workgroup to develop guiding principles, definitions, and reporting protocols for payment accuracy studies. HCFA and its workgroup of state officials are also trying to assess whether, given the many differences among the various Medicaid programs, a common methodology can be developed that would allow valid comparison of error rates across states. State payment accuracy studies may not fully identify improper payments that might be related to fraud, due primarily to fraud’s covert nature.
Losses due to fraudulent billing and other related practices are difficult to quantify. However, these amounts can be significant, as was demonstrated recently in California’s program, in which millions of dollars were paid to numerous fraudulent providers. Since July 1999, a state-federal task force targeting questionable pharmaceutical and durable medical equipment suppliers for improper billing has charged 115 providers, wholesalers, and suppliers in cases involving about $58 million in fraud. At least 69 individuals have been convicted and paid about $20 million in restitution. An additional 300 entities are being investigated for suspected fraud that could exceed $250 million. In one case, a family-run equipment company defrauded the program out of more than $9 million by submitting thousands of claims for equipment and supplies that were never delivered to patients. Investigators also found the following:
- “Bump and run” schemes in which individuals bill for a few months for services that are not rendered, stop billing before being detected, and then start again under a new name.
- Wholesalers who gave pharmacies and suppliers false invoices to substantiate false claims.
- Use of “marketers” who recruit and pay beneficiaries $100 or more to lend their Medicaid identification cards for use in improper billing.
- Use of beneficiary identification numbers stolen from a hospital to bill for services not provided.
- Use of identification from providers who had retired or moved out of the state.
- Purchase of an established business in order to fraudulently bill under its name.
Administrative weaknesses in the California Medicaid program made these activities easier to accomplish. For example, the program was issuing new billing numbers to individuals with demonstrated histories of current or past questionable billing practices.
The program allowed providers to have multiple numbers, and applicants did not have to disclose past involvement in the program or any ongoing audits. As a result, in some cases, individuals who had past questionable billings applied for a new provider number and were reinstated with full billing privileges. In addition, applicants for a billing number for a business that needed a license—such as a pharmacy—did not have to disclose that the actual owners were not the licensed individuals. This allowed unlicensed individuals to pay medical professionals for the use of their licenses to obtain a provider number. California has taken steps to try to close such loopholes. In addition to single-state schemes, fraudulent activities sometimes involve large-scale multistate schemes. One case led to a $486 million civil settlement in early 2000—one of the largest health care settlements ever. It followed a 5-year investigation of a dialysis firm billing Medicare and several state Medicaid programs for intradialytic parenteral nutrition that was not necessary or not provided in the quantity claimed. The company had an ownership interest in a laboratory that also double-billed for unnecessary tests and paid kickbacks to nephrologists and clinics that used the laboratory. In another case, a national laboratory headquartered in Michigan was ordered to pay $6.8 million in a multistate settlement for billing Medicare and five Medicaid programs for bogus medical tests. Improper billing schemes such as the ones discussed above are the principal types of fraud cases developed by MFCUs, according to MFCU directors responding to our survey. Improper billing includes “upcoding,” in which the provider misrepresents treatment provided and bills for a more costly procedure; “ghost” or “phantom” billing, in which a provider bills for services never provided; and delivering more services than are either necessary or appropriate for the patient’s diagnosis.
However, other types of fraud occur, including improper business practices—such as kickbacks for steering services to a provider—or misrepresentation of qualifications, such as an individual falsely claiming to be a physician. MFCU directors have found a wide variety of providers involved in fraud, including physicians, dentists, pharmacies, durable medical equipment providers, and transportation providers. Beneficiaries also engage in fraud, either by misrepresenting assets to become eligible for the program, lending or selling their identification numbers for another’s use, or obtaining products such as pharmaceuticals for resale. Fraud is not merely a financial concern—it can also pose a risk to the physical health of beneficiaries. For example, providers have drawn blood unnecessarily in order to better substantiate billing for tests that were not performed, and dentists have conducted extensive unnecessary dental work on beneficiaries in order to bill the program. The amount of resources and effort that state Medicaid programs devote to protecting the integrity of their programs varies. Some states have focused their efforts on preventing improper payments by strengthening their prepayment claims checking. States’ abilities to detect improper payments also vary, in part because some lack sophisticated information technology that can help them analyze and track instances of inappropriate billing. Strong leadership in certain states is resulting in stricter laws and restructured operations to better ensure that the Medicaid program pays claims appropriately. Resources for addressing improper Medicaid payments are generally modest. In our survey, 25 state Medicaid agencies reported spending one-tenth of 1 percent or less of program expenditures on these efforts. Others, such as California, spend about one-fourth of 1 percent of program expenditures on preventing and detecting improper payments. However, this is not unique to Medicaid.
As we recently reported, the Medicare program devotes little more than one-fourth of 1 percent of its program expenditures to safeguarding payments. As a result, we recommended that the Congress increase funding for these important activities. All states forgo some of the federal funds available to help their MFCUs investigate and prosecute fraud. MFCUs, once federally certified and in operation for 3 years, are eligible for 75 cents in federal funds for every dollar they spend, up to a maximum federal contribution of the greater of $125,000 per quarter or one-fourth of 1 percent of the state Medicaid program’s total expenditures in the previous quarter. However, only 10 percent of MFCUs receive enough state funding to obtain even half of the allowed federal match. State funding levels ranged from enough to obtain less than 7 percent of the allowed federal match to enough to obtain 86 percent of it. Many Medicaid state agency fraud control and MFCU officials reported gaps in staff, staff training, or technology acquisition. Many state officials said that they wanted to increase their workforce by hiring staff with specific skills, such as auditing, computer analysis, and clinical knowledge, and adding the technology to analyze large amounts of claims data. For example, in our survey, only 14 of 53 state agencies reported that they have statisticians to help collect, organize, and analyze data to spot improper billing practices. Further, although information technology for storing and analyzing large amounts of data has improved significantly in recent years, some states reported using very old information technology to assess program billing. Four state Medicaid agencies reported using software that is at least 15 years old to assess claims before payment, and three state Medicaid agencies reported using software at least that old to analyze claims after payment to ensure the billings were proper.
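The MFCU matching limit described above reduces to a simple calculation: 75 cents of federal money per dollar of unit spending, subject to a cap. The dollar amounts below are illustrative, not drawn from any particular state.

```python
def mfcu_federal_share(unit_spending: float, prior_quarter_medicaid: float) -> float:
    """Quarterly federal contribution to a certified MFCU: 75 cents per
    dollar the unit spends, capped at the greater of $125,000 or one-fourth
    of 1 percent of the state's total Medicaid expenditures in the previous
    quarter."""
    cap = max(125_000.0, 0.0025 * prior_quarter_medicaid)
    return min(0.75 * unit_spending, cap)

# A unit spending $2 million in a state with $1 billion in quarterly
# Medicaid outlays is well under the $2.5 million cap.
print(mfcu_federal_share(2_000_000, 1_000_000_000))  # 1500000.0
# In a small program, the $125,000 floor on the cap binds instead.
print(mfcu_federal_share(1_000_000, 10_000_000))  # 125000.0
```

Because the cap scales with program size, a state must supply its own 25-cent share on every dollar; the survey finding is that most states fund their units far below the point where the cap would ever bind.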
While about half of the state agencies and a third of MFCUs reported that their program integrity unit budgets were steady or declining in the previous 3 years, we did learn that other states showed a more promising trend. In our survey, 8 state Medicaid agencies and 4 MFCUs reported that their budgets for program integrity activities had increased significantly, while another 15 state agencies and 27 MFCUs reported that their budgets had increased somewhat. As a result, they reported that they were able to hire additional staff and increase program safeguards. For example, Connecticut’s increased funding allowed the state Medicaid agency to hire additional staff to increase audits and site visits to providers. Georgia’s state Medicaid agency also received increased funding, which allowed it to increase staffing levels and to make a number of additional improvements, such as opening an office to cover the southern part of the state. Preventing improper payments can be a cost-effective way to protect program dollars. Prevention can help avoid what is known as “pay and chase” in which efforts must be made to detect and attempt to recover inappropriate payments after they have been made. Such postpayment efforts are often costly and typically recover only a small fraction of the identified misspent funds, although they can identify parts of the program where controls, such as on payments, may need strengthening. States use a variety of preventive approaches—such as prepayment computer “edits,” manual reviews, provider education, and thoroughly checking the credentials of individuals applying to be program providers—and the scope and effectiveness of these activities varies among the states. All 41 of the state Medicaid agencies responding to our survey about prepayment claims review reported that they use such reviews to varying degrees. These include automated computer “edits” and manual reviews to help ensure payment accuracy. 
Typically, their edits check the mathematical accuracy of claims, the correct use of payment codes, and patients’ Medicaid eligibility. Such reviews help ensure that the services listed on the claim are covered, medically necessary, and paid in accordance with state and federal requirements. For example, an edit can be used to deny a claim for obstetrical care for a male beneficiary. Some states have thousands of such edits in their payment systems that identify duplicate claims, invalid dates, missing codes, or claims for services that conflict with previous care provided to the beneficiary. Although edits are widely used, recent experiences from several states that are aggressively working to detect overpayments suggest that their existing prepayment edits have not been catching various types of improper payments. A few states have hired a private contractor to help analyze claims data to uncover overpayments. For example, with the aid of this contractor, Florida learned that it was paying some pharmacies 10 times more than it should for an asthma inhalant because its edit did not stop claims listing the amount in unit doses rather than in grams, as required. Following this contractor’s overpayment review, Kentucky made edit changes it estimates will prevent $2 million in improper payments. This same contractor assisted Washington in making edit and other policy changes that are anticipated to save $4 million. Investigations in other states have also identified the need for new and revised edits. Some MFCU officials reported that they had advised their state agencies to strengthen certain edits based on the cases they had investigated. For example, the North Carolina MFCU suggested an edit to its state agency to identify and bundle laboratory services that should not have been billed separately.
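Edits like these are essentially per-claim rules evaluated before payment. A minimal sketch follows; the procedure codes, field names, and rules are hypothetical illustrations of the checks described above, not drawn from any state's actual system.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    beneficiary_sex: str   # "M" or "F"
    procedure_code: str
    unit_type: str         # e.g., "gram" or "dose"

# Illustrative rule tables (the codes are hypothetical placeholders):
OBSTETRIC_CODES = {"OB100"}
GRAM_BILLED_DRUGS = {"RX200"}   # a drug that must be billed in grams

def prepayment_edits(claim: Claim) -> list:
    """Return the reasons a claim should be denied or suspended for review."""
    reasons = []
    if claim.procedure_code in OBSTETRIC_CODES and claim.beneficiary_sex == "M":
        reasons.append("obstetrical care billed for a male beneficiary")
    if claim.procedure_code in GRAM_BILLED_DRUGS and claim.unit_type != "gram":
        reasons.append("drug must be billed in grams, not unit doses")
    return reasons

print(prepayment_edits(Claim("M", "OB100", "each")))
# ['obstetrical care billed for a male beneficiary']
```

The unit-type rule mirrors the Florida inhalant example: an edit that rejects claims expressed in unit doses would have stopped the tenfold overpayments before they were made.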
Also, the Louisiana MFCU reported that it had recommended that its Medicaid agency develop an edit to prevent duplicate payment of children’s medical screenings and physician visits and to ensure that physicians and certified nurse practitioners working together do not send in duplicate claims for the same services. Manual reviews before claims are paid can further help prevent improper payments, but they are resource-intensive, thus limiting the number of such reviews that can be done cost effectively. Manual reviews involve a trained specialist—such as a nurse—examining documentation submitted with a claim and possibly requesting additional information from providers, beneficiaries, and other related parties. Because of the cost and time involved, manual prepayment review is often targeted to certain providers. For example, if a provider’s claims pattern is substantially different from his or her peers, or if there is a sudden increase in claims volume for a given provider, or if there is substantial evidence of abuse or wrongdoing, payment may be withheld until a reviewer determines whether the aberrations or increases are appropriate and can be substantiated. Table 1 shows examples of prepayment reviews currently being used by some states. Because billing mistakes can be inadvertent, educating providers on how to comply with program rules and file claims correctly can often prevent errors. For example, in our survey, almost all state Medicaid agencies reported initiating meetings with providers, usually to discuss coding and policy changes. Seventeen state Medicaid agencies reported that their staff met with providers to discuss safeguarding the confidentiality of provider and beneficiary Medicaid numbers. In addition, 17 state Medicaid programs alerted providers to prevalent fraud schemes. 
State Medicaid agencies also reported conveying information on proper billing procedures to providers through a variety of other means, such as letters, bulletins, Internet sites, and professional meetings. Some states use more extensive provider enrollment measures to help prevent dishonest providers from entering the Medicaid program and to ensure better control over provider billing numbers. While all states collect some basic information on providers, states have considerable latitude in how they structure their provider enrollment processes. In addition, states are required to check whether providers who should be licensed are in fact licensed and whether they have been excluded from participating in other federal health programs. Checking a provider’s criminal record and business site has been found to be important by states such as Florida to ensure that providers entering the program are legitimate. Nine of the states responding to our survey reported having a provider enrollment process that included all four of these checks—licensure, excluded provider status, criminal record, and business location verification. Table 2 provides examples of these activities. Most Medicaid agencies reported checking whether applicants whose practice requires licensure had a valid license and whether they had been excluded from participating in other federal health programs. However, fewer than half of the states responding to our survey reported checking whether applicants have criminal records. While conducting such checks on a targeted basis might be useful in helping to protect the program, they can be time-consuming and difficult to perform, according to states that have attempted them. This is due in part to often inaccurate and incomplete statewide databases containing records on criminal convictions.
Nineteen of 52 state Medicaid programs reported that they conducted site visits to determine if an applicant had a bona fide operation. Of those that do conduct site visits, most limit them to particular types of providers they believe have a greater likelihood of abusing the program. For example, Kansas Medicaid officials reported that, based on a risk analysis, there is a greater risk that durable medical equipment suppliers are not legitimate providers and, therefore, the Medicaid program conducted site visits of these applicants. Many states allow providers, once enrolled, to bill the program indefinitely without updating information about their status. Poor control over provider billing numbers can make Medicaid programs more vulnerable to improper payment. In our survey, 26 states reported allowing providers to continue to bill indefinitely while other states had an enrollment time limit, which often varied by provider type. However, 33 states reported that they cancel inactive billing numbers—generally for providers who have not billed the program for more than 1 to 3 years. Such efforts can be important, as questionable providers have been known to keep multiple billing numbers “in reserve” in case their primary billing number is suspended. In California, some individuals falsely billed the Medicaid program using the numbers of retired practitioners. Just as states are uneven in their efforts to prevent improper payments, they also vary in their ability to detect improperly paid claims. Because prepayment reviews cannot catch all erroneous claims, Medicaid programs must have systems in place to retrospectively review paid claims. While some states are using software from the early 1980s to analyze paid claims, other states—such as Texas and Washington—are implementing state-of-the-art systems to improve their ability to detect and investigate potential improper payments.
Each Medicaid state agency is required to have an automated claims processing and retrieval system that can be used to detect postpayment errors. These automated claims processing systems, known as Medicaid Management Information Systems (MMIS), contain a Surveillance and Utilization Review Subsystem (SURS) that state agency officials can use to identify providers with aberrant billing patterns. For example, these might include providers with a large increase in Medicaid activity or with billing patterns that are significantly different from their peers and that result in enhanced reimbursement. Almost all states reported conducting focused reviews or investigations when a provider’s billing was aberrant to determine if any improper payments had been made. State Medicaid officials told us that when their state Medicaid agency discovers that improper payments have been made, it takes action to recover the improper payment, and, if warranted, refers the provider to its state MFCU for possible criminal investigation and prosecution. Providers who have been identified as having significant billing problems generally receive continued scrutiny if they remain in the program. The systems used to uncover such aberrant billing—MMIS and SURS—were developed in the early 1980s when computer algorithms to identify potentially inappropriate claims were less sophisticated and analysis required more programming skill. Newer systems allow staff to use desktop computers to directly query large databases of claim, provider, and beneficiary information, without requiring the assistance of data processing professionals. Several state officials reported that buying or leasing these upgraded computer systems and hiring staff skilled in their use would be their top priority if they had more funding. Other states are already purchasing or leasing such systems, as the following examples illustrate.
Texas is using private contractors to design, develop, install, and train staff to use a state-of-the-art system intended to integrate detection and investigation capabilities. It is intended to allow the state to uncover potentially problematic payment patterns that old SURS profiling methodologies would have missed. It also includes a “neural network” that is intended to “learn” from the data it analyzes and adjust its algorithms to identify previously overlooked aberrant payment patterns. The system is further enhanced with modules designed to help develop cases for prosecution. The first 2 years of the project cost Texas $5.8 million, but according to state Medicaid officials, Texas had already collected $2.2 million in overpayments in the system’s first year of operation. Kentucky has hired a private contractor to use an advanced computer system to analyze claims payment data. It is paying that contractor through contingency fees based on overpayment collections related to these efforts. Using claims data from January 1995 through June 1998, the contractor identified $137 million in overpayments, of which the state has collected between $4 and $5 million. That compares to previous recovery efforts by the state that, on average, netted about $75,000 a year. Under its new Payment Integrity Program, Washington is using a private contractor to design, develop, install, and train staff to analyze data on an advanced computer system. The system improves access to data and includes fraud and abuse identification software with prepackaged algorithms to analyze the data and identify overpayments, as well as develop leads that would need further investigation. It also allows agency staff to develop algorithms and perform their own online reviews. Since the program started in June 1999, the contractor and state agency staff have identified overpayments totaling more than $2.95 million. 
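The peer-comparison profiling at the heart of SURS-style review, and of the newer systems described above, can be sketched as flagging providers whose billings deviate sharply from their peer group’s norm. This is a simplified z-score illustration with hypothetical provider data; actual profiling systems use many more measures and, in Texas’s case, neural-network scoring.

```python
import statistics

def flag_aberrant(billings, z_threshold=3.0):
    """Flag providers whose paid-claims total sits far above the peer-group
    mean -- a simplified version of SURS-style utilization profiling."""
    amounts = list(billings.values())
    mean = statistics.mean(amounts)
    sd = statistics.pstdev(amounts)
    if sd == 0:
        return []
    return [p for p, amt in billings.items() if (amt - mean) / sd > z_threshold]

# Hypothetical peer group of 30 providers billing near $100,000, plus one
# provider with a sudden spike in billing.
peers = {f"provider_{i}": 100_000 + i * 1_000 for i in range(30)}
peers["provider_x"] = 900_000
print(flag_aberrant(peers))  # ['provider_x']
```

A flag like this is only a lead: as the report notes, states then conduct focused reviews to determine whether the aberrant pattern reflects improper payments or a legitimate change in practice.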
Some states have developed detection strategies that combine the use of advanced technology with special investigative protocols. For example, New Jersey conducted special audits of transportation services, cross-matching data on transportation claims to beneficiary medical appointments, and sometimes contacting providers to confirm that the beneficiary actually arrived and was treated. Also, using billing trend reports, New Jersey audited pharmacies with abnormally large numbers of claims for a newly covered high-priced drug, and then audited the pharmacies’ purchases from wholesalers, thus discovering that these pharmacies were billing for a larger amount of this drug than had been shipped to them. Beneficiaries can also play a role in helping state Medicaid agencies detect improper payments. Forty-two states reported having hotlines that beneficiaries could use to report suspected improprieties. Fourteen states reported alerting beneficiaries to certain types of fraudulent schemes. Twenty-seven reported taking other types of actions. For example, some states commented that they mail explanation-of-benefit statements to beneficiaries to increase awareness of the services being billed in their names, so that if beneficiaries are not receiving billed services, they will be able to inform the state. State Medicaid agencies are primarily responsible for conducting program integrity activities, but they share this responsibility with other agencies. For example, they are required to refer potential fraud cases to the MFCUs for investigation and prosecution. Cases that may involve improper billing of Medicare or private insurers as well as Medicaid may also require investigation by the OIG or the FBI, and may involve prosecution by DOJ. In addition, other state agencies, such as those responsible for licensure, can become involved in an investigative effort.
Federal regulations require Medicaid agencies and MFCUs to have an agreement to cooperate; however, the actual level of cooperation between state Medicaid agencies and MFCUs varies. State Medicaid agencies are required to refer suspected fraud cases to MFCUs for investigation and possible prosecution, provide needed records to the MFCUs, and enter into a Memorandum of Understanding establishing procedures for sharing information and referring cases. In our survey, MFCUs generally reported that about one-third of the cases that they open are referred by their state Medicaid agency. The most common criterion reported by state agencies for referring cases to MFCUs was a belief that the provider intended to commit an impropriety. The number of cases state agencies reported referring in their previous fiscal year varied substantially. This is not surprising because Medicaid agencies differ in size, organization, scope of services, and beneficiary eligibility requirements. They also operate in different states, each of which has its own legal system and business climate, differences that can affect the number and quality of fraud referrals made by the state agency. In addition to differences in referral patterns, the reported level of interaction between states’ Medicaid agencies and MFCUs also varied. For example, meetings between the two organizations to discuss pending cases are important for preventing agency actions that could compromise a fraud unit investigation or for alerting MFCU officials to cases the state agency is developing. Most state Medicaid agencies reported having joint meetings at least six times a year; however, eight states reported that they conduct such meetings only one to three times each year. New Jersey is a state where the Medicaid agency and MFCU have worked together to further each agency’s efforts through close cooperation.
Medicaid agency staffers are sometimes detailed to the MFCU to continue working cases they have developed. The state agency and MFCU hold monthly joint meetings to discuss developing cases and case progress and to plan strategies for investigations, prosecutions, and administrative actions. The MFCU tries to use search warrants and other methods to gather evidence in suspected fraud cases so that information can be shared with the Medicaid agency. This is in contrast to the use of another MFCU tool—grand jury investigations—which have secrecy rules to prevent disclosure of evidence. This level of cooperation allows the state Medicaid agency to take immediate administrative action to stop improper payments without disrupting criminal case development. The MFCU also works to have defendants who are pleading guilty sign a consent order debarring or disqualifying them from participating in Medicaid, eliminating the need for state agency debarment or disqualification proceedings. In contrast to New Jersey, in another state, the director of an MFCU reported to us that MFCU investigators were denied access to state Medicaid agency meetings, which made it more difficult for both agencies to develop potential fraud cases. State Medicaid and MFCU officials told us that close collaboration among state agencies or state and federal law enforcement agencies was particularly important for certain types of cases. In the handful of states whose MFCUs lack authority to serve warrants or prosecute cases, MFCUs must work with other agencies to ensure that these activities take place. When dealing with individuals whose fraudulent or abusive activities cross state lines, one MFCU may need to work with other states’ agencies or with federal officials. Some cases involve efforts to defraud both Medicare and Medicaid, which can require an MFCU to work with the OIG or FBI.
Such interagency collaboration has been fostered by the HCFAC program, which has increased funding for federal health care law enforcement efforts. Implementing section 407 of the Ticket to Work and Work Incentives Improvement Act of 1999, which authorized MFCUs to address cases that involve Medicare as well as Medicaid fraud, will also likely necessitate enhanced cooperation between MFCUs and federal law enforcement officials. Nearly all MFCUs responding to our survey reported that they have conducted joint investigations with other organizations in the last 3 years. Most commonly, this involved conducting joint investigations with their state agency, state licensing boards, the OIG, FBI, or a federal task force. Cooperative efforts have led to joint prosecutions. Twenty-seven states reported jointly prosecuting criminal cases with federal attorneys in the previous 3 years—about half doing so at least four times. Such cooperation can augment state officials’ activities. This was demonstrated in California, where members of a task force created by the FBI, the U.S. Attorney’s office, the California State Controller’s office, the Attorney General’s office, and the state Department of Health uncovered numerous fraud and abuse cases in the Los Angeles area. The Controller’s staff audited suppliers and referred to the FBI those with insufficient inventories or purchase records to substantiate claims volume. The FBI investigated further and made referrals to the U.S. Attorney. Meanwhile, the governor created a fraud prevention bureau within the state agency that worked closely with on-site FBI agents to investigate provider operations. Once a case was developed, the FBI referred it to the MFCU and U.S. Attorney’s office for prosecution. During our review, we found that several states—including Georgia, New Jersey, North Carolina, and Texas—have enacted stricter rules or restructured operations to better ensure the integrity of their Medicaid programs. 
A few examples of their accomplishments follow. Legislative changes: Some states are enacting health-care-specific criminal and civil legislation—often modeled after federal law. With these statutes, prosecutors no longer must develop cases based on more generic mail fraud, racketeering, theft, or conspiracy statutes. For example, New Jersey enacted the Health Care Claims Fraud Act, which creates the specific crime of health care claims fraud and provides for 10-year prison sentences, fines of up to five times the amount gained through fraud, and professional license revocation. Meanwhile, civil statutes—such as one enacted in North Carolina and other states authorizing action against providers who "knowingly" submit false Medicaid claims for payment—allow prosecutors to take advantage of less stringent evidentiary requirements than those of criminal statutes. Restructuring operations: Some states are enhancing their program safeguard operations through restructuring. In 1997, Texas created an Office of Investigations and Enforcement within the state Medicaid agency, giving it power to take administrative actions against providers; these actions cannot be appealed when the Office has tangible evidence of potential fraud, abuse, or waste. The Office also can impose sanctions and recover improper payments. Meanwhile, Georgia established an MFCU in 1995 that differs from most in that it includes auditors from the state Department of Audits, investigators from the state Bureau of Investigation, and prosecutors from the state Attorney General's office. They work together as a discrete entity under memoranda of understanding signed by the three agencies. HCFA and the OIG—the agencies responsible for the Medicaid program at the federal level—are taking steps to promote effective Medicaid program integrity by providing technical help to facilitate states' efforts.
These federal agencies also gather some information on state activities in order to guide state efforts. Many state agency and MFCU officials reported that their agencies had benefited greatly from federal technical assistance, guidance, and training, and would welcome more assistance. In 1997, HCFA adopted a new role as a facilitator, enabler, and catalyst of states' program integrity efforts. To do so, HCFA established the National Medicaid Fraud and Abuse Initiative, led by staff from HCFA's southern consortium and headquarters, with designated, part-time coordinators for the Initiative in each of HCFA's 10 regional offices. The strategy for the Initiative was to partner with the states and have state representatives work with HCFA staff to set the agenda and goals for the effort. The Initiative provides networking, information sharing, and training opportunities for state agencies and their program integrity partners. Participants in early Initiative meetings identified 10 major focus areas—including payment accuracy measurement, managed care, and information technology—and workgroups are developing recommendations in each area. The Initiative also includes the Medicaid Fraud and Abuse Control Technical Advisory Group, consisting of HCFA and state officials, which serves as an ongoing forum for sharing issues, solutions, resources, and expertise among states; advising HCFA on policies, procedures, and program development; and making recommendations on federal policy and legislative changes. The Initiative has resulted in several tangible products and events, including a fraud statute Web site, managed care guidelines, seminars on innovations and obstacles in safeguarding Medicaid, and a technology conference. These efforts are described in table 3.
State Medicaid officials we spoke with reported that Initiative activities are helping their program safeguard efforts by providing important networking, information sharing, and training opportunities. Our survey results indicated that staff from 41 state Medicaid agencies attended Initiative-sponsored training last year, and more than 40 percent of state agencies had staff serve on Initiative panels. In fact, nearly 75 percent of state Medicaid agency survey respondents reported that they would like more of the types of assistance HCFA has been providing, including additional training; technical assistance on the use of technology; guidance on managed care fraud detection and prevention; and information on innovative practices in other states. According to HCFA and some state officials, this approach has been more effective than previous efforts to guide state activities. Prior to 1997, HCFA reviewed information systems—including state SURS unit activities—through formal "systems performance reviews" of program controls, including those related to payment and program safeguard activities. HCFA could impose penalties on states that failed these reviews, and some HCFA and state officials told us that states found the reviews burdensome. Section 4753 of the Balanced Budget Act of 1997 repealed HCFA's authority to conduct such reviews. State and federal officials agree that federal attention to state program protection efforts declined after these mandatory reviews were eliminated. HCFA officials told us that staff in HCFA's regional offices continued to provide some oversight of state efforts, but not in a coordinated way. Without a regular review of state activities to address improper payments, however, HCFA staff had little information with which to guide states where more effective efforts were needed.
To get a more comprehensive and systematic view of state antifraud efforts, the regional Initiative coordinators conducted structured site reviews of certain program safeguards in eight states in fiscal year 2000. These reviews examined how state Medicaid agencies identify and address potential fraud or abuse, whether state agencies are complying with applicable laws and regulations—such as how they check to ensure that only qualified providers participate in the program—and potential areas for improvement. Reviews in another eight states are being conducted in fiscal year 2001. However, these reviews, as with all of HCFA's Initiative endeavors, focus only on state efforts to address potential fraud and abuse; they do not address all of the ways states may be trying to prevent or detect improper payments, or whether these efforts could be improved. The OIG initially certifies, and each year recertifies, that MFCUs are complying with federal requirements and are eligible for federal funding. The OIG determines whether an MFCU should be recertified primarily based on reports the MFCUs submit on their activities. The OIG assesses these reports to determine whether each unit has used federal funds effectively and has met a set of 12 performance standards. These standards, which the OIG developed in conjunction with the National Association of Medicaid Fraud Control Units, cover areas such as staffing, training, types of cases (whether they constitute potential fraud or physical abuse of beneficiaries), case flow, and monitoring of case outcomes. For example, in the area of staffing, the OIG checks whether an MFCU has the minimum number of staff required: at least one attorney experienced in investigating criminal cases or civil fraud, one experienced auditor capable of supervising financial records reviews and assisting in fraud investigations, and one senior investigator with substantial experience in conducting and supervising criminal investigations.
The OIG may also conduct site visits to observe MFCU operations or provide guidance; eight MFCUs received such visits in fiscal year 1999. OIG officials said they rarely decertify MFCUs. If decertified, an MFCU can reapply for federal certification when officials believe it will meet the required standards. Such was the case with the District of Columbia's MFCU, which was decertified in 1983 for "lack of productivity" and recertified in 2000. The MFCUs generally reported being satisfied with OIG oversight and guidance, but indicated several areas where the OIG could provide more assistance—especially more training. More than 45 percent of MFCUs reported that their staff attended OIG-sponsored training in the past fiscal year. In particular, MFCU officials wanted the OIG to provide more training and assistance regarding their new authority to address cases that involve both Medicare and Medicaid fraud. Survey respondents were particularly interested in learning more about Medicare program rules, how Medicare claims processing contractors operate, and recent Medicare fraud schemes. They also wanted help in working with HCFA and Medicare claims processing contractors to get timely, online access to Medicare claims data. The OIG has begun to provide training on Medicare-related issues. MFCUs would also like the OIG to increase the number of OIG staff in regional and local offices to expand their participation in joint investigations. Medicaid remains vulnerable to payment error and, while most states are taking steps to address their programs' vulnerabilities, their efforts are uneven. Some states have worked diligently to prevent or detect improper payments, while others have not been as proactive. The federal government has provided technical assistance and a forum for information exchange for the states, as well as some guidance.
Given that states are responsible for administering Medicaid and investigating and prosecuting any fraudulent activities, states must set their own course to ensure the integrity of their Medicaid programs. But the federal government has a responsibility to actively partner with states to ensure that they succeed. In recent years, HCFA and other federal investigative organizations have played a more active role as partners in this endeavor. We provided draft copies of this report to HHS for comment. HHS officials provided written comments (see appendix III). We also provided excerpts from the draft report that dealt with state activities to states that we had visited. The reviewing officials suggested some technical corrections, which we incorporated into the report where appropriate. In its written comments, HHS provided information on the Department’s most recent efforts to prevent improper payments and to combat fraud and abuse in the Medicaid program. Among other activities, these efforts include a resource guide for states, a summary report of the joint HHS-DOJ technology conference, and a data exchange project between Medicaid and Medicare. HHS highlighted efforts to review program integrity activities in states and indicated that it intends to broaden the scope of the review in future fiscal years. Both the OIG and HCFA have developed training for state officials, including training for MFCU officials on Medicare. Finally, HHS reported that it has established a Web site at www.hcfa.gov/medicaid/fraud to provide states with additional technical assistance and guidance in their efforts to prevent and detect improper payments and to address fraud and abuse. As agreed with your office, unless you announce this report’s contents earlier, we plan no further distribution until 30 days after the issue date. We will then send copies to the Honorable Tommy G. Thompson, Secretary of HHS; the Honorable Thomas Scully, Administrator of HCFA; Mr. 
Michael Mangano, Acting Inspector General; and other interested parties. We will make copies available to others upon request. If you or your staff have any questions about this report, please call me at (312) 220-7600 or Sheila K. Avruch at (202) 512-7277. Other major contributors to this report were Barrett Bader, Bonnie Brown, Joel Grossman, and Elsie Picyk. In developing this report, we focused on the risk of improper Medicaid fee-for-service payments, states' efforts to address improper payments—including efforts to investigate and prosecute fraud—and the guidance and oversight the states are receiving from federal oversight agencies. To do this work, we used information from our surveys, state visits, interviews, and analyses of agency program integrity documents and literature. To address the risk of improper fee-for-service payments, we reviewed studies that Illinois, Kansas, and Texas have conducted to measure payment accuracy in their Medicaid programs, and we interviewed state officials on the studies' methodologies, findings, and limitations. To gain information on the types of improper billing schemes and other types of fraud cases, we interviewed state officials and reviewed state and HCFA documents. We also used results from our state survey, described below. To find out about state activities and federal oversight from the states' perspective, we analyzed the results of surveys we sent to the 56 state Medicaid agencies and the 47 federally certified MFCUs then in existence. Fifty-three of the 56 state Medicaid agencies and 46 of the 47 MFCUs responded to our surveys. An additional MFCU in the District of Columbia, which had been decertified in 1983, was recertified in March 2000, after we sent out our survey. To ease the burden of responding, we asked respondents in several survey questions to base their answers on data from their most recently completed fiscal years, whether state or federal.
(See appendix II for copies of our questionnaires and results.) To supplement the survey analyses, we visited state Medicaid programs and MFCUs in four states: Georgia, New Jersey, Texas, and Washington. We chose these states to provide regional diversity and because federal officials considered them to be particularly active in efforts to identify and respond to improper payment practices—either through the use of new technology or by other means. We also interviewed, by telephone, Medicaid, MFCU, and state government officials in other states that have taken steps to strengthen their Medicaid program integrity efforts. To better understand efforts to control improper payments at the national level, we interviewed officials at HCFA's Central Office and leaders of the agency's National Medicaid Fraud and Abuse Initiative in HCFA's Atlanta and Dallas regional offices, as well as officials at the OIG. To gain a broader perspective on other joint agency investigations and prosecutions, we interviewed representatives of the FBI, the U.S. Attorney's office, and the Civil and Criminal Divisions of DOJ. In addition, we participated in several meetings on control of improper payments, including fraud, which were sponsored by HCFA and others. Finally, we interviewed representatives of provider and supplier groups and technology companies that have developed software useful in detecting improper payments. In addition, we reviewed literature on health care fraud and abuse, including studies by the OIG, HCFA, and others. We performed our work from September 1999 through April 2001 in accordance with generally accepted government auditing standards. Major Management Challenges and Program Risks: Department of Health and Human Services (GAO-01-247, Jan. 2001). National Practitioner Data Bank: Major Improvements Are Needed to Enhance Data Bank's Reliability (GAO-01-130, Nov. 17, 2000).
Medicaid: State Financing Schemes Again Drive Up Federal Payments (GAO/T-HEHS-00-193, Sept. 6, 2000). Financial Management: Improper Payments Reported in Fiscal Year 1999 Financial Statements (GAO/AIMD-00-261-R, July 27, 2000). Medicaid: HCFA and States Could Work Together to Better Ensure the Integrity of Providers (GAO/T-HEHS-00-159, July 18, 2000). Medicaid in Schools: Improper Payments Demand Improvements in HCFA Oversight (GAO/HEHS/OSI-00-69, Apr. 5, 2000). Medicaid: Federal and State Leadership Needed to Control Fraud and Abuse (GAO/T-HEHS-00-30, Nov. 9, 1999). Financial Management: Increased Attention Needed to Prevent Billions in Improper Payments (GAO/AIMD-00-10, Oct. 29, 1999).
State Medicaid programs make a wide variety of payments to individuals, institutions, and managed health care plans for services provided to beneficiaries whose eligibility status may fluctuate because of changes in income. Because of the size and the nature of the program, Medicaid is potentially at risk for billions of dollars in improper payments. The exact amount is unknown because few states measure the overall accuracy of their payments. Some improper Medicaid payments by states are the result of fraud by billers or program participants, but such improper payments are hard to measure because of the covert nature of fraud. Efforts by state Medicaid programs to address improper payments are modestly and unevenly funded. Half of the states spend no more than 1/10th of one percent of program expenditures to safeguard program payments. States also differ in how they help prevent improper payments as well as the degree to which they coordinate their investigations and prosecutions of fraud. Federal guidance to the states relies largely on technical assistance. The Health Care Financing Administration has recently taken a more active role to facilitate states' efforts and provide a national forum to share information.
DOT and its administrations interact with all levels of government and the private sector. When Congress created DOT in 1966, it combined several existing federal transportation organizations with responsibility over aviation, waterways, railroads, and highways. As shown in figure 1, Congress has taken action over the past 50 years to create and dissolve administrations within DOT and to transfer some responsibilities to different administrations or federal agencies. Despite changes like these over the years, DOT's organizational structure and activities have remained largely organized around transportation mode. DOT currently consists of nine modal administrations and OST, each of which has its own mission—primarily focused on enhancing mobility and safety; four of the modal administrations were established at the inception of the department. OST is responsible for: (1) coordinating and overseeing the activities of DOT's modal administrations; (2) promoting intermodal transportation—in which multiple modes of transportation are used to move people or goods; (3) formulating national transportation policy; (4) negotiating and implementing international transportation agreements; and (5) awarding multi-modal transportation grants, among other responsibilities. While OST is responsible for overseeing the modal administrations, each individual administration is headed by a political appointee and has its own missions, goals, and responsibilities, which are achieved through varying activities: Federal Aviation Administration (FAA): FAA is responsible for overseeing the safety of civil aviation through the issuance and enforcement of regulations and standards related to (1) the manufacture, operation, certification, and maintenance of aircraft; (2) the certification of the aviation workforce; and (3) the maintenance and operations of airports.
FAA also enforces hazardous material regulations for shipments by air, regulates the launch and reentry operations of commercial space-transportation companies, administers aviation-related grant programs, and operates a network of airport traffic-control towers, air-route traffic-control centers, and flight service stations. Federal Highway Administration (FHWA): FHWA is responsible for coordinating highway transportation programs in cooperation with states and other partners through the Federal-Aid Highway Program, which provides federal financial assistance to states to construct and improve highways, roads, and bridges and to improve the safety of public roads. FHWA also provides services through the Federal Lands Highway Program to improve access to public lands and manages a research, development, and technology program. Federal Motor Carrier Safety Administration (FMCSA): FMCSA is responsible for enforcing safety and hazardous materials regulations on commercial motor vehicles (e.g., trucks for moving freight and household goods, and buses); improving commercial motor vehicle technologies and safety information systems; and increasing awareness of the importance of safely operating commercial motor vehicles. FMCSA also provides grants to state and local government agencies in a variety of areas, including improving the safe operation of commercial motor vehicles, administering commercial driver's-licensing programs, and overseeing newly registered motor carriers. Federal Railroad Administration (FRA): FRA is responsible for developing and monitoring railroad compliance with federally mandated safety standards on track maintenance, inspection standards, and operating practices. FRA also administers federal grant funds for passenger and freight rail infrastructure and services (including Amtrak), safety improvements, and congestion relief programs.
In addition, FRA conducts research and development tests on projects to improve safe rail transportation, investigates rail accidents, provides training to and collaborates with the rail industry, and promotes public education campaigns on highway-rail grade crossing safety and the dangers associated with trespassing on rail property. Federal Transit Administration (FTA): FTA is responsible for promoting the development, improvement, and safety of public transportation systems—which include buses, rail, trolleys, and ferries—through a variety of federal grant programs to local transit agencies. FTA oversees these grants and evaluates whether grantees adhere to federal standards. FTA also oversees safety measures and helps develop next-generation technology research. Historically, FTA did not directly oversee the safety of transit systems, but it was granted additional safety authorities under several recent surface transportation authorizations, including the ability to temporarily take over for an inadequate or incapable state-safety oversight agency. FTA exercised this authority by taking temporary responsibility for safety oversight of the Washington Metropolitan Area Transit Authority in October 2015. Maritime Administration (MARAD): MARAD is responsible for promoting the development and maintenance of a United States merchant marine sufficient to carry the nation's domestic waterborne commerce and able to serve as a naval and military auxiliary in times of war or national emergency. As part of this responsibility, MARAD funds and operates the United States Merchant Marine Academy and provides funding to six state maritime academies. MARAD is also responsible for ensuring the United States maintains shipbuilding and repair service capabilities, efficient ports, effective intermodal water and land transportation systems, and reserve shipping capacity for times of national emergency.
National Highway Traffic Safety Administration (NHTSA): NHTSA is responsible for setting and enforcing safety performance standards for motor vehicles and equipment and providing grants to state and local governments for conducting local highway safety programs. NHTSA also investigates safety defects in motor vehicles, sets and enforces fuel economy standards, helps states and local communities address impaired driving, promotes the use of safety technologies, and conducts research on driver behavior and traffic safety, among other activities. In addition, NHTSA promotes the use of safety belts, child safety seats, and motorcycle helmets; establishes and enforces vehicle anti-theft system regulations; and provides consumer information on motor vehicle safety. Pipeline and Hazardous Materials Safety Administration (PHMSA): PHMSA is responsible for overseeing the safe transportation of oil, gas, and other hazardous materials by all transportation modes, including pipelines, through the development and enforcement of regulations and standards, education, research, and assistance to the emergency response community. PHMSA also oversees the safety of the nation's oil and gas pipeline network by inspecting pipelines, collecting and analyzing data, and investigating accidents to identify potential safety improvements. St. Lawrence Seaway Development Corporation (SLSDC): SLSDC is a wholly owned government corporation within DOT that is responsible for working with the Canadian St. Lawrence Seaway Management Corporation to oversee operations for commercial and noncommercial vessels on the Great Lakes and the St. Lawrence Seaway. SLSDC coordinates with Canadian authorities on operational issues such as traffic management, navigation aids, safety, and environmental programs. SLSDC and Canadian authorities also work on trade development opportunities between port communities, shippers, and receivers.
While DOT is responsible for conducting a wide range of activities, many of DOT's program requirements are established in statute. For example, FRA, FHWA, and PHMSA, among other administrations, are responsible for enforcing statutory safety standards. In addition, DOT's modal administrations administer both discretionary and formula grant programs, both of which must be authorized in statute. The majority of DOT's funding is provided from transportation-related taxes and user fees, which are collected for specific purposes. The Highway Trust Fund (HTF), which was established by Congress in 1956, collects motor fuel and truck-related taxes for use on highway and mass transit programs. For example, Congress appropriates funds from the HTF to FHWA to distribute to states for construction, reconstruction, and improvement of highways and bridges. The Airport and Airway Trust Fund (AATF), which was established by Congress in 1970, collects airline ticket and aviation fuel taxes for use on airport and airway system programs administered by FAA. Congress appropriates funds from the AATF to FAA for use on technological improvements to the air traffic control system, research on issues related to aviation safety, grants for airport planning and development, and the operation of the air traffic control system. DOT also receives funds from the U.S. General Fund through the annual appropriations cycle. DOT's administrations have similar missions and responsibilities and, therefore, perform similar activities related to helping the Department meet its overall mission of ensuring a fast, safe, efficient, accessible, and convenient transportation system. We have found that it is important to identify instances when multiple federal agencies or programs engage in similar activities in order to determine whether opportunities exist to reduce, eliminate, or better manage fragmentation, overlap, or duplication.
We identified a number of areas in which DOT performs activities and determined that multiple administrations perform activities in each area. Broadly, these areas fall into six functional categories: administrative, economic development and consumer protection, operating transportation systems, research, safety, and supporting infrastructure projects (see table 1). Multiple DOT administrations perform similar activities in each of the areas we identified. For example, FAA, FHWA, FRA, FMCSA, FTA, MARAD, NHTSA, PHMSA, and OST develop a variety of safety regulations for airports and airlines, railroads, pipeline construction, and the transportation of hazardous materials, among others. In addition, five modal administrations (FAA, FHWA, FRA, FTA, and MARAD) and OST support infrastructure projects by conducting grant-making activities for airport planning and development, highway projects, and multi-modal projects that cut across DOT administrations, among others. A different group of five modal administrations (FAA, FMCSA, FRA, MARAD, and NHTSA) and OST conduct activities related to overseeing economic and consumer regulations, including enforcing shipping laws designed to ensure federal projects use U.S.-flagged vessels, fuel-economy standards for automakers, and regulations governing household goods movers. While we identified similar activities performed by multiple DOT administrations, we determined that there were important reasons why similar activities may be appropriate or necessary. Specifically, many of these activities have different purposes, including achieving different goals, serving different recipients, and meeting different statutory requirements. For example, DOT performs a number of economic and consumer-protection activities designed to achieve different goals and outcomes, including supporting the U.S. shipping industry, protecting the public from household goods movers that mislead or cheat them, and improving the efficiency of motor vehicles. 
Additionally, DOT carries out safety oversight of a number of different types of transportation operators including airlines, motor carriers, pipelines, railroads and public-transit operators, but the safety requirements and expertise necessary for determining whether these types of operators are adhering to federal regulations and operating safely can be very different. Further, DOT carries out a number of different project-grant and credit programs specified in statute with different requirements and conducts a number of programs that receive funding from different sources such as the HTF, the AATF, and general appropriations. We also identified some general management activities conducted by multiple DOT administrations that have similar goals or intended recipients. Specifically, each DOT modal administration conducts administrative activities in the areas of information technology, human capital, and financial management, all of which have similar goals, strategies, and beneficiaries. For example, each modal administration and OST carry out human capital activities related to recruitment, hiring, benefits, payroll, security assessments, and employee appraisal and retention. Each administration also conducts some information technology activities, such as hardware and software acquisitions, maintaining networks, and troubleshooting. Finally, each administration also performs financial management activities such as funds disbursement, auditing, and reconciliation. According to department officials, DOT has taken steps to better leverage these similar administrative activities across modal administrations, including adopting shared services, where services that multiple administrations need are consolidated within a smaller number of administrations. 
For example, DOT operates a single financial center in Oklahoma City, Oklahoma, to provide financial management services such as accounting and transaction processing for each DOT administration as well as for additional federal agencies. Additionally, DOT has consolidated a number of human resources functions into a single division within FHWA, which posts vacancy announcements and collects employment applications, among other things. DOT officials told us that consolidating these services has improved operational efficiency and purchasing power. For example, officials from one of DOT’s smaller modal administrations told us that they are able to take advantage of DOT’s purchasing power to obtain better rates when purchasing information technology hardware. DOT has a variety of methods to coordinate similar activities and leverage resources and knowledge. Some of these coordination efforts are focused on individual projects that involve more than one administration, and others are focused on broad topics, such as safety, in which all administrations play a role. DOT has established some of these coordination methods administratively. Others have been mandated by law, such as the Fixing America’s Surface Transportation Act (FAST Act) of 2015, which mandated changes to several areas of DOT operations including research, safety, and environmental reviews. Current and former DOT officials, state and local transportation officials, representatives from private industry, and former congressional staff we interviewed, described several coordination and collaboration methods used by DOT and its administrations, including: Formal Coordinating Bodies: DOT has established a variety of types of formal coordinating bodies including councils that bring together staff from multiple administrations, centralized offices within OST that have decision-making authority for DOT activities, and cross-administration teams to handle individual multi-modal projects. 
For example, OST convenes a formal safety council intended to improve communication on safety-related issues. This council serves as a forum for executives at each modal administration to discuss emerging safety issues and coordinate responses. Additionally, recently enacted legislation authorizing surface transportation programs established a single office within OST—called the Build America Bureau—to act as a point of contact and coordination for entities seeking to use several DOT credit programs. In an example of a cross-administration team, FMCSA and NHTSA established a project-specific team with staff from both administrations to produce a joint, proposed rulemaking on speed-limiting devices for large commercial vehicles. The administrations’ proposed complementary rules in September 2016 cover the areas of operation over which each has jurisdiction: NHTSA’s rule would require manufacturers to install the speed-limiting devices on new large commercial vehicles, and FMCSA’s rule would require interstate motor carriers to use and maintain the devices. DOT officials told us that participation in these coordinating bodies and establishing cross-administration teams allows the modal administrations to utilize expertise from across DOT and has resulted in more consistent responses to issues. DOT officials and experts we spoke with also told us that operating coordinated multi-modal programs with centralized authority has allowed DOT to more efficiently provide services and assistance to a variety of projects using a consistent set of program rules and requirements. Coordinated Processes: DOT has coordinated processes for its administrations to use in some areas such as developing regulations and approving infrastructure projects. For example, DOT has an order containing general provisions for the environmental review process at all the modal administrations, and these requirements state that DOT should, where possible, coordinate reviews into a single process. 
Additionally, the FAST Act required DOT to apply existing environmental review processes to certain FRA projects. DOT officials told us that they are currently working on rulemaking and guidance for the environmental review process as it applies to FRA. DOT officials told us that standardized processes can avoid duplicative work for projects requiring the approval of more than one DOT administration, which can shorten the length of the environmental review process and improve DOT’s efficiency. Informal Coordination: DOT officials told us that while DOT has a number of formal coordination mechanisms, much of the coordination done by OST and the modal administrations occurs informally. Officials said that this type of coordination is primarily relationship-driven and can take many forms, including verbal information requests, document sharing, or other methods. For example, FHWA officials noted instances when staff informally shared information on methods for conducting assessments of highway and railroad bridges with FRA, whose inspection program began more recently. DOT officials also said that they frequently use informal coordination to leverage DOT’s multi-modal expertise when working on smaller projects that might not be large enough to merit a formal cross-administration team. DOT officials told us that informal coordination is flexible and easy to conduct, and enhances communication across the modal administrations. According to the experts in transportation and organizational change we met with, DOT could make operational improvements, but does not need to implement organizational changes to efficiently and effectively carry out its missions. 
Specifically, a majority of these experts told us that the potential benefits of implementing a large-scale change in DOT’s organizational structure—such as reorganizing the modal administrations or restructuring the department to focus less on transportation modes—would probably not outweigh the costs of implementing these changes. However, experts identified several areas in which they believe DOT could make operational improvements to help the department more efficiently and effectively carry out its missions: (1) collaboration and coordination; (2) data quality and analytics; (3) regulation development; (4) project delivery processes; and (5) addressing emerging issues. Both we and the DOT OIG have repeatedly reported on challenges that DOT’s individual modal administrations face in these areas. (See app. IV for a list of relevant GAO and DOT OIG reports in each of these areas.) Expert opinion “…there are definitely some operations that could be improved… there are opportunities for greater collaboration… especially in areas that are clearly intermodal or multi-modal, some form of… councils or other operating bodies that work across the modes… seems like an easy, maybe even a non-legislative sort of a fix that could really make a difference in certain areas.” Collaboration and coordination: Efforts to support transportation projects and address concerns—such as driver or operator fatigue—often benefit from collaboration among DOT modal administrations, other federal agencies, state and local stakeholders, and private industry. Experts we spoke to stated that DOT should improve collaboration and coordination efforts with these internal and external groups—efforts that require neither organizational nor regulatory changes. We and the DOT OIG also have bodies of work suggesting ways to improve how DOT collaborates and coordinates. 
For example, we recommended in 2012 that certain DOT modal administrations improve collaboration and communication activities designed to help state and local governments use intelligent-transportation system technologies to mitigate traffic congestion. Many of the concerns DOT is working to address impact multiple modes of transportation, and experts noted that there are opportunities for DOT to improve how it coordinates internally across its modal administrations. For example, experts discussed several of DOT’s ongoing internal collaborative groups and noted that some groups could have been more effective if they had consistently included senior-level officials to provide needed leadership and decision-making authority. Experts, as well as DOT officials, also believed that DOT could more effectively leverage existing expertise across administrations, such as in the area of safety, and ensure that all affected modal administrations are represented when discussing cross-modal issues. DOT officials we spoke with agreed that ongoing collaboration and coordination is critical and noted that some internal coordination tools, such as crosscutting councils, are helpful but could be more effective. For example, officials indicated that coordination often occurs more informally, such as through individual relationships, which can result in some officials not always being aware of the collaboration efforts occurring outside of their own modal administration. Further, officials noted that the strength and efficacy of any DOT-wide or administration-level initiative is dependent on the leadership and often the Secretary’s agenda. Experts also discussed opportunities for DOT to improve how it coordinates externally with state and local governments and other federal agencies. 
Specifically, given the increase in projects that include multiple modes of transportation, some experts noted that better collaboration with state and local government agency partners is needed to provide consistent information and help facilitate project development and implementation. DOT officials we spoke with said DOT uses a variety of methods to coordinate its activities with state and local agencies to help achieve its missions, including standard processes for developing regulations and approving infrastructure projects. Lastly, experts discussed how decisions other agencies make can impact DOT and suggested there are opportunities for DOT to improve how it coordinates externally. Experts noted that strategies for addressing issues such as climate change are being discussed by several different agencies and that DOT could more effectively use existing interagency offices or positions to ensure it is part of the discussions. We have found that federal agencies have used a variety of mechanisms to implement interagency collaborative efforts, such as establishing interagency task forces. Expert opinion “We believe the data has value. We have this enormous capability to generate so much data now. So the question really is… to be more precise as to what data we want to keep, what data we want to collect, and what questions we want to answer.” Data quality and analytics: DOT collects and uses data to carry out most of its activities, including developing safety regulations, identifying emerging safety issues, and conducting oversight. For example, FMCSA uses inspection data to conduct oversight on specific motor carriers, and NHTSA uses crash data to identify and implement policies to address issues such as pedestrian and bicycle safety. As such, there may be unintended consequences of not using data effectively. 
Experts we spoke with raised concerns about the accuracy of transportation data and believe there are opportunities within DOT’s current organizational structure to do a better job collecting complete, relevant, consistent, and reliable transportation data, which DOT and stakeholders need to make decisions. Experts also discussed the need to focus on prioritizing the data DOT collects to ensure they are of high quality and can be used to answer specific transportation-related questions. We and the DOT OIG have issued a number of reports expressing concern with the quality of DOT’s data and how they are used. For example, in 2012, we identified limitations in how FMCSA was using data to target new applicants suspected of fraudulent activity for further investigation and recommended FMCSA develop a data-driven tool. DOT officials from the modal administrations agreed that improving data quality is important and would allow DOT to leverage limited resources for identifying new and emerging safety issues. The officials described several ongoing data initiatives, such as a transportation data forum within DOT and efforts to streamline existing data systems. However, officials also noted challenges with addressing data quality, in part due to the number of stakeholders involved in and responsible for transportation-related data collection, including local and state officials and private entities. For example, DOT officials noted that the responsibility of data collection often falls to local and state officials who may not have the necessary expertise to accurately report certain types of safety events. DOT officials also cited challenges in establishing common definitions and measures for collecting and using the data, and dealing with large volumes of data that often come from numerous sources. Lastly, DOT officials noted that statutes and regulations may also constrain DOT from collecting certain types of data to support its mission. 
Experts also emphasized the importance of having improved analytic capabilities to ensure these data are used effectively. In particular, experts noted that DOT could be a leader in providing analytical tools to state and local government agencies that do not always have the necessary expertise or resources to conduct data-driven evaluations. Along those lines, we recently recommended, for example, that DOT should identify appropriate freight data sources, information, and analytic tools for transportation modes involved in the freight network and supply chains. DOT officials agreed that data analytics are important, and noted that a number of modal administrations have specific departments or programs designed to maintain and analyze data on transportation incidents and on federal inspection and enforcement actions. DOT officials also noted that despite resource constraints, DOT has prioritized the collection, maintenance, and management of data for several grant programs. Regulation development: Annually, DOT undertakes around one hundred rulemakings—some of which, according to DOT officials, have become more complex and technical in recent years—that range from vehicle-to-vehicle communication safety standards (by NHTSA) to entry-level commercial driver training (by FMCSA) to underground storage facilities for natural gas (by PHMSA). According to experts, DOT could evaluate and consider changes to how it develops regulations, changes that would not require reorganization, to ensure that the department’s priorities are coordinated and addressed. For example, experts suggested that DOT consider methods for ensuring the timely review of rulemakings across the modal administrations and noted that seeking stakeholder input early in the regulation development process would save both time and money, as well as improve the quality of the regulation itself. 
Standards for internal control in the federal government state that federal agencies should review policies and procedures to determine their effectiveness in achieving their objectives and to determine if efforts—such as a regulation—are designed and implemented appropriately. These standards also state that relevant, reliable, and timely information should be used to make informed decisions. We also have found that some DOT rulemakings developed by individual modal administrations could benefit from additional data, and may not be completed in a timely manner. For example, in 2014, we found that, despite acknowledging the risks of federally unregulated pipelines, PHMSA had not taken timely action on a rulemaking for addressing this risk, and we recommended that PHMSA move forward with the rulemaking process it started in 2011. DOT officials agreed that changes in the regulation development process could offer a number of benefits. For example, according to FHWA officials, increased coordination during the rulemaking process could provide the affected modal administrations an opportunity to review documents and more time to offer comments, potentially reducing the number of revisions needed to address and incorporate internal comments. Other DOT officials also noted that many of DOT’s rulemaking efforts have been successful and well coordinated across the department, as well as with other stakeholders, including subject-matter experts and the private sector. Specifically, officials from FAA discussed a number of processes and tools that they use, including rulemaking advisory committees and councils, a data tool that prioritizes upcoming rulemaking efforts, and a comprehensive database that collects data from almost 200 sources across government and industry. 
Other administrations also use similar tools, and several recent initiatives have offered officials from these administrations the opportunity to learn more about FAA’s rulemaking processes. Expert opinion “And the research underpinning of policy [in rulemaking] is an absolute prerequisite to its implementation… there have to be sound economic cost-benefit studies that are not politically motivated.” Experts also noted the importance of using data to drive regulatory activity in a proactive manner, rather than conducting regulatory activity in reaction to current events, such as an oil spill or a railway accident. We have noted some of the challenges DOT modal administrations face in developing and issuing regulations. For example, we recently found that stakeholders in the commercial space industry have mixed opinions on what, if any, legislative or regulatory changes are appropriate to accommodate certain technologies. We have also noted that data limitations, uncertainties, and lack of transparency may contribute to a lack of confidence by important stakeholders in the implementation of a rulemaking. DOT officials stated that it is challenging to expeditiously move forward in the traditional regulatory process because of the established procedures built in to allow appropriate time for consultation, public input, and coordination across government stakeholders. We have also found that there are risks to implementing rules too quickly, especially when a rulemaking is controversial or technical. According to experts, developing regulations may be even more challenging when dealing with emerging issues and new technologies, such as automation within passenger and commercial vehicles (see discussion below). These technologies are developing rapidly, do not fit neatly within a single modal administration’s current regulatory framework, and may require additional coordination across administrations. 
One approach several DOT administrations, including FAA and PHMSA, are using to address these types of challenges is to rely on performance-based rules or consensus standards—as opposed to prescriptive rules that dictate a specific method for mitigating risk—for new regulations. According to FAA officials, such an approach offers the private sector the flexibility to address issues as they emerge, but also ensures safety is not compromised as new technologies are introduced. Expert opinion “So it's not a one-size fits all… there are places where uniformity makes sense. And maybe that should be elevated up outside of the modes. But in other places you need flexibility. And that should remain down within the modes so that they can be more responsive to their stakeholders and taking into consideration the impacts within that particular mode.” Project delivery processes: In the current fiscal environment, in which federal resources are scarce, it is critical that the processes DOT uses to annually distribute billions of dollars in federal transportation funds for projects are clear, efficient, and effective. According to the experts we spoke with, DOT could reduce barriers and challenges facing state and local governments in the project delivery processes (e.g., funding, financing, and environmental review) without organizational changes. Experts believe that project delivery processes could be streamlined and made more consistent across modal administrations to achieve cost and time savings for state and local agencies. Further, experts suggested creating a position within OST to help states and local agencies navigate through the federal processes. 
We and the DOT OIG have bodies of work on potential improvements to DOT’s project delivery processes within the modal administrations, including ways to help address deficiencies in adherence to key discretionary grant practices, strengthen processes for overseeing grants, and improve guidance designed to ensure the process for selecting grant awardees is consistently applied. For example, we recommended in 2011 that FRA do more to document grant awards decisions. DOT officials acknowledged that there are differences in project delivery processes between modal administrations, but noted that this is often the result of requirements in statute or regulation. For example, the “Buy America” provisions for FTA, FRA, and FHWA are specific to each administration. Officials also noted several provisions in recent acts that require DOT to streamline some project delivery processes, including the previously discussed Build America Bureau, which is to provide assistance and communicate best practices and financing and funding opportunities to grant programs. Congress has passed numerous provisions to accelerate the delivery of federal-aid highway and transit projects since 2005 by streamlining the environmental review process for state and local agencies, most recently in the FAST Act. According to DOT officials, DOT is working to implement these changes and is updating its department-wide guidance for conducting environmental reviews. Lastly, the officials noted that DOT recently created a centralized office within OST to be a resource for the modal administrations and help accelerate the delivery of all DOT projects. However, DOT officials cautioned that there may be unintended consequences associated with implementing the suggestion from experts to create a new position within OST, including adding a layer of bureaucracy that could create inefficiencies. 
Expert opinion “…the rapid pace of technology is impacting our traditional planning processes, in that, when we look out 25 years, generally, we see old technology embedded at 25 years, and we see population growth, we see pollution growth… but it's hard because the rules haven't kept up… and there's really no guidelines ...” Addressing emerging issues: The transportation world is quickly evolving, and DOT has been and likely will continue to be challenged to proactively address emerging or anticipated issues to account for rapid technological advancements, climate change, and intermodal issues, among other concerns. Experts highlighted many of these challenges and were concerned that DOT was not prepared to address them. For example, experts frequently mentioned that DOT is falling behind the private sector’s need for research and specific regulations for autonomous vehicles and intelligent transportation systems. Experts also mentioned the importance of considering the environmental and climate change impacts of transportation in order to make wise decisions on how to move freight, for example. We and the DOT OIG have issued a number of reports on a range of emerging transportation issues that impact several DOT modal administrations, including the need for DOT to address new vehicle and aviation technologies—such as dealing with cybersecurity and privacy concerns—and the changing trends in how and where freight moves through our nation’s transportation system. For example, in 2014, we recommended that DOT include a written statement in its national freight strategic plan articulating the federal role in helping to mitigate the impacts of projected increases in local-freight congestion. We have also reported on emerging issues that individual DOT modal administrations need to address, such as the need for PHMSA to ensure the safe transportation of domestically produced oil and gas, which has increased more than fivefold in recent years. 
DOT officials also told us that they believe that as an agency, DOT is having difficulty quickly identifying and reacting to emerging issues. While officials from some modal administrations highlighted efforts—such as performance plans and policy meetings—to regularly and strategically discuss new areas in need of DOT’s attention, officials noted that DOT is not always nimble enough to respond to emerging issues. Officials cited the rapid pace of technology development, data and coordination challenges, and the overall size and diversity of the transportation system as a few of the reasons DOT cannot always react quickly. While DOT officials noted ongoing initiatives within its modal administrations intended to address challenges in the five areas identified by experts, they agreed that more could be done but did not identify plans to conduct a department-wide review in these areas. The current administration, however, recently released an Executive Order and the Budget Blueprint indicating that federal agencies, including DOT, are expected to continue to assess their ability to efficiently and effectively meet their missions. In addition, standards for internal control in the federal government highlight the need for federal agencies to periodically review, particularly as changes develop, whether their policies and procedures are relevant, effective, and address risks. We have noted that a review of this type should include an action plan to implement corrective measures. Such an evaluation could help DOT to leverage the success of initiatives within the modal administrations and define root causes and solutions, including identifying necessary steps, to address the areas discussed and more effectively implement programs within and across its modal administrations. 
As DOT considers potential reorganization plans to improve its efficiency, effectiveness, and accountability as required by the recent Executive Order, as well as how it will implement the administration’s Budget Blueprint, it will be important for DOT to take a holistic look at the department. Having considered the costs and benefits of restructuring how DOT is organized, experts told us that DOT can fulfill its many missions through its existing organizational structure. Yet, experts also recognize that DOT faces a growing number of challenges, including adapting quickly to new technological innovations, which they said will continue to blur the lines between the modes. DOT is undertaking a number of efforts to address these challenges within its modal administrations, but operational improvements could be achieved in several broad areas: (1) collaboration and coordination; (2) data quality and analytics; (3) regulation development; (4) project delivery processes; and (5) emerging issues. While DOT must work with many transportation stakeholders—including Congress, state and local governments, and the private sector—to address challenges in these areas, it is important that DOT take the lead in efforts to ensure a safe and efficient transportation system. Undertaking a department-wide review of the areas experts identified, particularly as they relate across the modal administrations, provides an opportunity for DOT to assess how it can more effectively achieve its missions and how best to position the department to proactively address the challenges it faces. 
To leverage and build upon the ongoing efforts within individual DOT modal administrations and to address concerns raised by experts regarding collaboration and coordination, data quality and analytics, regulation development, project delivery processes, and addressing emerging issues, we recommend that the Secretary of Transportation: (1) conduct a department-wide review of DOT’s current efforts to address these concerns; and (2) develop an action plan with specific steps to implement improvements, as identified, in these areas. We provided a draft of this report to DOT and OMB for their review and comment. We also provided copies of this report to the 18 experts who participated in our meeting in September 2016. In written comments, reproduced in appendix III, DOT agreed with our recommendation and provided several recent examples of actions taken to improve the department’s operational performance, including creating a centralized permitting center, establishing a regulatory-reform task force, and hiring new employees in leadership positions with expertise in data analytics. DOT officials also indicated that following the conclusion of our audit work, several new planning efforts had begun in response to the recently released executive order, including two working groups intended to identify efficiencies in DOT’s mission as well as efforts to solicit employee feedback on ways to improve DOT’s efficiency and effectiveness. We did not have the opportunity to evaluate these initiatives. DOT and experts also provided technical comments, which we incorporated as appropriate. OMB did not comment on this report. We are sending copies of this report to interested congressional committees, the Secretary of the Department of Transportation, the Director of the Office of Management and Budget, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. This report addresses the following objectives: (1) what activities multiple Department of Transportation (DOT) modal administrations perform to fulfill their missions and how, if at all, DOT coordinates these activities; and (2) according to experts, what, if any, organizational or operational changes could enable DOT to more efficiently and effectively carry out its missions. To identify activities performed by multiple DOT administrations and how those activities are coordinated, we reviewed DOT’s organizing statutes and amendments, and documentation on DOT’s overall mission and the missions of the nine modal administrations, including strategic plans, budget documents, and organizational manuals. We identified the nine DOT modal administrations to include in our work by reviewing DOT’s public website and relevant laws and statutes. We identified the missions of DOT as a whole and the modal administrations by reviewing mission statements and organizing statutes. In those cases in which we were not able to find an administration’s missions in statute, we identified them by reviewing its strategic plan or publicly available mission statement. We identified the activities conducted by each DOT administration and the Office of the Secretary of Transportation (OST) by reviewing their organizational manuals, if available, and fiscal year 2016 budget requests. We also used these sources to identify the program offices contained within each administration and the missions and activities of each of those offices. While we identified DOT activities, we did not evaluate how effective these activities are at fulfilling DOT’s missions. 
We identified areas of similarity within the list of activities by reviewing DOT’s strategic plan to select general outcomes related to DOT’s missions and objectives that more than one DOT administration intends to achieve. We identified 19 of these activity areas, which broadly related to administrative functions, economic development and consumer protection, operating transportation systems, research, safety, and supporting infrastructure projects. Finally, we grouped each of the activities we identified into one of these functional categories. For example, as part of our analysis of documentation from multiple modal administrations’ websites and mission statements, we identified numerous activities related to developing rulemaking, guidance, and policy intended to improve the safety of the transportation system. We then noted that one of DOT’s mission priorities in its strategic plan is to develop transportation safety regulations. Based on this evidence, we determined this was an area in which multiple DOT administrations performed activities, which we named Developing Safety Regulations. To collect expert views on organizational or other changes that could enable DOT to more efficiently and effectively carry out its missions, in September 2016, with the assistance of the National Academies of Sciences, Engineering, and Medicine (National Academies), we convened a one-and-a-half-day GAO meeting with 18 experts. Participants were identified and recommended by the National Academies and approved by us using several criteria, including experience with multiple modes of transportation and DOT administrations, and expertise in organizational change, among others. Experts included former DOT officials, representatives from local and state transportation agencies, private businesses that use our nation’s transportation system, and other experts in transportation policy and organizational change management (see table 2). 
We asked the expert meeting participants to comment on DOT’s organizational structure and potential areas for improvement in the six functional categories in which DOT performs activities identified in objective 1: administrative functions, economic development and consumer protection, operating transportation systems, research, safety, and supporting infrastructure projects. Following the meeting, two analysts conducted a content analysis of the expert meeting transcript using NVivo software to identify the areas for improvement that were discussed most frequently during the expert meeting. Each analyst independently reviewed one half of the transcript to identify instances where the areas were discussed. Once each analyst had completed their respective section, the other analyst verified that analyst’s coding. If there was a disagreement, the analysts discussed their assessments and came to a final determination on the categorization. Based on the results of our content analysis, we determined that 15 areas for improvement were the most frequently discussed. We used these 15 areas to develop a brief follow-up questionnaire for the experts to verify what was discussed at the meeting. We conducted pretests with two of the experts before emailing the finalized PDF questionnaire form to all 18 experts who attended the meeting. We received 17 of 18 responses from our experts (a 94 percent response rate). Additionally, we developed a list of follow-up questions for DOT officials from OST and all nine modal administrations, similar to the questions we asked experts in the questionnaire. We reviewed the responses received from experts and DOT officials to determine which of the 15 areas were considered to be the most important to address or as having the biggest potential payoff in helping DOT more efficiently and effectively carry out its missions. 
Based on this analysis, we identified five areas to discuss in greater detail in our report: (1) collaboration and coordination; (2) data quality and analytics; (3) regulation development; (4) project delivery processes; and (5) addressing emerging issues. The views represented by the experts from whom we gathered information are not generalizable to those of all experts on DOT’s organizational structure and operations; however, we were able to secure the participation of a diverse, highly qualified group of experts and believe their views provide a balanced and informed perspective on the topics discussed. In addition, we reviewed GAO reports issued in the past five years and DOT OIG reports that discussed the areas for improvement experts identified, many of which included recommendations for DOT. We identified the most relevant prior GAO work in the five areas identified by experts and DOT officials. To address both of our objectives, we interviewed DOT officials from OST and all nine modal administrations. To gather background information, we also interviewed additional stakeholders with a range of transportation experience, including former DOT officials, representatives from state and local transportation agencies, transportation stakeholders from consulting firms, nonprofits, and think tanks, as well as academics in the field of organizational change (see table 3). We conducted this performance audit from May 2016 to May 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
We identified 19 areas in which more than one administration within the United States Department of Transportation (DOT) performs activities (see table 1). Broadly, these areas fall into six functional categories: administrative, economic development and consumer protection, operating transportation systems, research, safety, and supporting infrastructure projects. These areas outline the primary areas of activities that DOT undertakes to achieve its intended outcomes; the areas are not a comprehensive list of every area in which DOT performs activities. We identified the DOT administrations that perform activities in these areas, and tables 4 to 22 show examples of these activities. GAO and the Department of Transportation’s Office of Inspector General (DOT OIG) have bodies of work related to topics the experts we spoke with most frequently cited as being important for DOT to address. Below are reports issued by GAO and DOT OIG in each of these areas: (1) collaboration and coordination; (2) data quality and analytics; (3) project delivery processes; (4) regulation development; and (5) addressing emerging issues. GAO. Train Braking: DOT’s Rulemaking on Electronically Controlled Pneumatic Brakes Could Benefit from Additional Data and Transparency. GAO-17-122. Washington, D.C.: October 12, 2016. GAO. Air Traffic Control: FAA Needs a More Comprehensive Approach to Address Cybersecurity as Agency Transitions to NextGen. GAO-15-370. Washington, D.C.: April 14, 2015. GAO. Drug-Impaired Driving: Additional Support Needed for Public Awareness Initiatives. GAO-15-293. Washington, D.C.: February 24, 2015. GAO. Managing for Results: Implementation Approaches Used to Enhance Collaboration in Interagency Groups. GAO-14-220. Washington, D.C.: February 14, 2014. GAO. Managing for Results: Key Considerations for Implementing Interagency Collaborative Mechanisms. GAO-12-1022. Washington, D.C.: September 27, 2012. GAO. 
Transportation-Disadvantaged Populations: Federal Coordination Efforts Could be Further Strengthened. GAO-12-647. Washington, D.C.: June 20, 2012. GAO. Pipeline Safety: Collecting Data and Sharing Information on Federally Unregulated Gathering Pipelines Could Help Enhance Safety. GAO-12-388. Washington, D.C.: March 22, 2012. GAO. Intelligent Transportation Systems: Improved DOT Collaboration and Communication Could Enhance the Use of Technology to Manage Congestion. GAO-12-308. Washington, D.C.: March 19, 2012. OIG, DOT. FHWA Needs to Strengthen Its Oversight of State Transportation Improvement Programs. ST-2017-019. Washington, D.C.: January 5, 2017. OIG, DOT. Insufficient Guidance, Oversight, and Cooperation Hinder PHMSA’s Full Implementation of Mandates and Recommendations. ST-2017-002. Washington, D.C.: October 14, 2016. OIG, DOT. FAA Lacks a Clear Process for Identifying and Coordinating NextGen Long-Term Research and Development. AV-2016-094. Washington, D.C.: August 25, 2016. OIG, DOT. Improvements Needed in FMCSA’s Plan for Inspecting Buses at the United States-Mexico Border. MH-2014-007. Washington, D.C.: November 26, 2013. GAO. Train Braking: DOT’s Rulemaking on Electronically Controlled Pneumatic Brakes Could Benefit from Additional Data and Transparency. GAO-17-122. Washington, D.C.: October 12, 2016. GAO. Motor Carriers: Better Information Needed to Assess Effectiveness and Efficiency of Safety Interventions. GAO-17-49. Washington, D.C.: October 27, 2016. GAO. West Coast Ports: Better Supply Chain Information Could Improve DOT’s Freight Efforts. GAO-17-23. Washington, D.C.: October 31, 2016. GAO. Freight Transportation: Developing National Strategy Would Benefit from Added Focus on Community Congestion Impacts. GAO-14-740. Washington, D.C.: September 19, 2014. GAO. Federal Motor Carrier Safety: Modifying the Compliance, Safety, Accountability Program Would Improve the Ability to Identify High Risk Carriers. GAO-14-114. 
Washington, D.C.: February 3, 2014. GAO. Cargo Tank Trucks: Improved Incident Data and Regulatory Analysis Would Better Inform Decisions about Safety Risks. GAO-13-721. Washington, D.C.: September 11, 2013. GAO. Pipeline Safety: Better Data and Guidance Needed to Improve Pipeline Operator Incident Response. GAO-13-168. Washington, D.C.: January 23, 2013. GAO. Pipeline Safety: Collecting Data and Sharing Information on Federally Unregulated Gathering Pipelines Could Help Enhance Safety. GAO-12-388. Washington, D.C.: March 22, 2012. GAO. Motor Carrier Safety: New Applicant Reviews Should Expand to Identify Freight Carriers Evading Detection. GAO-12-364. Washington, D.C.: March 22, 2012. OIG, DOT. FRA’s Oversight of Hazardous Materials Shipments Lacks Comprehensive Risk Evaluation and Focus on Deterrence. ST-2016-020. Washington, D.C.: February 24, 2016. OIG, DOT. Inadequate Data and Analysis Undermine NHTSA’s Efforts to Identify and Investigate Vehicle Safety Concerns. ST-2015-063. Washington, D.C.: June 18, 2015. OIG, DOT. Program and Data Limitations Impede the Effectiveness of FAA’s Hazardous Materials Voluntary Disclosure Reporting Program. AV-2015-034. Washington, D.C.: March 13, 2015. GAO. Train Braking: DOT’s Rulemaking on Electronically Controlled Pneumatic Brakes Could Benefit from Additional Data and Transparency. GAO-17-122. Washington, D.C.: October 12, 2016. GAO. Commercial Space: FAA Should Examine How to Appropriately Regulate Space Support Vehicles. GAO-17-100. Washington, D.C.: November 25, 2016. GAO. Federal Aviation Administration: Commercial Space Launch Industry Developments Present Multiple Challenges. GAO-15-706. Washington, D.C.: August 25, 2015. GAO. Oil and Gas Transportation: Department of Transportation Is Taking Actions to Address Rail Safety, but Additional Actions Are Needed to Improve Pipeline Safety. GAO-14-667. Washington, D.C.: August 21, 2014. GAO. 
Cargo Tank Trucks: Improved Incident Data and Regulatory Analysis Would Better Inform Decisions about Safety Risks. GAO-13-721. Washington, D.C.: September 11, 2013. GAO. Aviation Rulemaking: Further Reform Is Needed to Address Long-standing Problems. GAO-01-821. Washington, D.C.: July 9, 2001. OIG, DOT. Top Management Challenges for Fiscal Year 2017. PT-2017-007. Washington, D.C.: November 15, 2016. OIG, DOT. Insufficient Guidance, Oversight, and Cooperation Hinder PHMSA’s Full Implementation of Mandates and Recommendations. ST-2017-002. Washington, D.C.: October 14, 2016. GAO. DOT Discretionary Grants: Problems with Hurricane Sandy Transit Grant Selection Process Highlight the Need for Additional Accountability. GAO-17-20. Washington, D.C.: December 14, 2016. GAO. Rail Grant Oversight: Greater Adherence to Leading Practices Needed to Improve Grants Management. GAO-16-544. Washington, D.C.: May 26, 2016. GAO. Public Transit: Updated Guidance and Expanded Federal Authority Could Facilitate Bus Procurement. GAO-15-676. Washington, D.C.: September 10, 2015. GAO. Intercity Passenger Rail: Recording Clearer Reasons for Awards Decisions Would Improve Otherwise Good Grantmaking Practices. GAO-11-283. Washington, D.C.: March 10, 2011. OIG, DOT. Vulnerabilities Exist in Implementing Initiatives Under MAP-21 Subtitle C to Accelerate Project Delivery. ST-2017-029. Washington, D.C.: March 6, 2017. OIG, DOT. Top Management Challenges for Fiscal Year 2017. PT-2017-007. Washington, D.C.: November 15, 2016. OIG, DOT. FHWA Does Not Effectively Ensure States Account for Preliminary Engineering Costs and Reimburse Funds as Required. ST-2016-095. Washington, D.C.: August 25, 2016. OIG, DOT. FTA Monitored Grantees’ Corrective Actions, but Lacks Policy and Guidance to Oversee Grantees with Restricted Access to Federal Funds. ST-2016-058. Washington, D.C.: April 12, 2016. OIG, DOT. Weak Internal Controls for Collecting Delinquent Debt Put Millions of DOT Dollars at Risk. FI-2015-065. 
Washington, D.C.: July 9, 2015. GAO. Train Braking: DOT’s Rulemaking on Electronically Controlled Pneumatic Brakes Could Benefit from Additional Data and Transparency. GAO-17-122. Washington, D.C.: October 12, 2016. GAO. West Coast Ports: Better Supply Chain Information Could Improve DOT’s Freight Efforts. GAO-17-23. Washington, D.C.: October 31, 2016. GAO. Vehicle Cybersecurity: DOT and Industry Have Efforts Under Way, but DOT Needs to Define Its Role in Responding to a Real-world Attack. GAO-16-350. Washington, D.C.: March 24, 2016. GAO. Vehicle Safety: Enhanced Project Management of New Information Technology Could Help Improve NHTSA’s Oversight of Safety Defects. GAO-16-312. Washington, D.C.: February 24, 2016. GAO. Unmanned Aerial Systems: FAA Continues Progress toward Integration into the National Airspace. GAO-15-610. Washington, D.C.: July 16, 2015. GAO. Air Traffic Control: FAA Needs a More Comprehensive Approach to Address Cybersecurity as Agency Transitions to NextGen. GAO-15-370. Washington, D.C.: April 14, 2015. GAO. Freight Transportation: Developing National Strategy Would Benefit from Added Focus on Community Congestion Impacts. GAO-14-740. Washington, D.C.: September 19, 2014. GAO. Oil and Gas Transportation: Department of Transportation Is Taking Actions to Address Rail Safety, but Additional Actions Are Needed to Improve Pipeline Safety. GAO-14-667. Washington, D.C.: August 21, 2014. GAO. Rail Safety: Improved Human Capital Planning Could Address Emerging Safety Oversight Challenges. GAO-14-85. Washington, D.C.: December 9, 2013. GAO. Intelligent Transportation Systems: Vehicle-to-Vehicle Technologies Expected to Offer Safety Benefits, but a Variety of Deployment Challenges Exist. GAO-14-13. Washington, D.C.: November 1, 2013. GAO. Intelligent Transportation Systems: Improved DOT Collaboration and Communication Could Enhance the Use of Technology to Manage Congestion. GAO-12-308. Washington, D.C.: March 19, 2012. OIG, DOT. 
FAA Lacks a Risk-Based Oversight Process for Civil Unmanned Aircraft Systems. AV-2017-018. Washington, D.C.: December 1, 2016. OIG, DOT. Top Management Challenges for Fiscal Year 2017. PT-2017-007. Washington, D.C.: November 15, 2016. OIG, DOT. DOT Cybersecurity Incident Handling and Reporting is Ineffective and Incomplete. FI-2017-001. Washington, D.C.: October 13, 2016. OIG, DOT. FAA Faces Significant Barriers to Safely Integrate Unmanned Aircraft Systems Into the National Airspace System. AV-2014-061. Washington, D.C.: June 26, 2014. In addition to the contact named above, Maria Edelstein (Assistant Director), Matthew Cook (Analyst in Charge), Paul Aussendorf, Dan Bertoni, Melissa Bodeau, Steve Cohen, Cathy Colwell, Alex Fedell, Cam Flores, Farrah Graham, Brandon Haller, Phil Herr, Catherine Kim, Hannah Laufe, Heather MacLeod, Ned Malone, Sara Ann Moessbauer, Josh Ormond, Carl Ramirez, Alex Severn, Sharon Silas, Sarah Veale, Sara Vermillion, and Susan Zimmerman made significant contributions to this report.
DOT was established over 50 years ago, in part, to build, maintain, and oversee a vast national transportation system. Millions of Americans rely on this system every day to travel and receive goods and services. DOT is organized into nine modal administrations that are generally responsible for activities related to specific transportation modes, such as air, rail, public transit, and highways. GAO was asked to examine how well DOT's organizational structure enables DOT to address today's transportation challenges. This report addresses (1) activities performed by multiple DOT administrations to fulfill their missions and how, if at all, DOT coordinates these activities, and (2) expert opinions on what, if any, organizational or operational changes could enable DOT to more efficiently and effectively carry out its missions. GAO reviewed documentation on DOT's missions, interviewed DOT officials, and worked with the National Academies of Sciences, Engineering, and Medicine to convene a meeting with transportation and organizational-change experts. Experts were selected for their experience working with multiple modes of transportation and expertise in organizational change, among other factors. The United States Department of Transportation's (DOT) nine modal administrations conduct a range of similar activities that are generally intended: (1) to achieve different goals (e.g., to protect consumers or improve motor vehicle efficiency); (2) to serve different recipients (e.g., airlines, railroads); or (3) to meet different requirements (e.g., grant and credit programs specified in statute). DOT has numerous efforts to coordinate similar activities across administrations, such as formal coordinating bodies that bring together staff from multiple modes on a variety of topics. DOT also has processes designed to coordinate the development of regulations and to approve infrastructure projects. 
Experts told GAO that DOT does not need to implement organizational changes but could make operational improvements to help it efficiently and effectively carry out its missions. Experts identified five areas: Collaboration and coordination: Additional efforts to collaborate among the nine modal administrations, state and local governments, and other federal agencies would better support the development of transportation projects. For example, experts stated DOT could improve the effectiveness of internal collaborative groups by including senior-level officials who could provide leadership and have the authority to make decisions. Data quality and analytics: Prioritizing which data to collect and improving analytic capabilities could help DOT ensure data are effectively used. Experts stated DOT could do a better job identifying and improving data quality to answer specific, transportation-related questions. Regulation development: Improving how regulations are developed could help DOT ensure the agency's priorities are addressed and coordinated among all stakeholders. Experts stated that DOT could improve the quality and timeliness of its regulations by seeking earlier input from stakeholders. Project delivery processes: Streamlining and making the project delivery processes more consistent across modal administrations could reduce barriers and challenges for state and local governments. For example, experts suggested creating a central position to help state and local governments navigate the environmental review process. Addressing emerging issues: Proactively focusing on how to address technological advancements (e.g., autonomous vehicles) and other emerging issues (e.g., safely transporting domestic oil and gas) could help DOT achieve its missions more efficiently and effectively. For example, experts were concerned that DOT was falling behind the private sector's need for research and specific regulations for autonomous vehicles. 
DOT officials agreed improvements are needed across DOT within the areas identified by experts. However, DOT did not identify plans to conduct a department-wide review. The administration recently released documents requiring federal agencies, including DOT, to assess their ability to efficiently and effectively meet their missions. In addition, federal internal control standards require agencies to assess whether their policies are effective and, typically, to develop an action plan based on that assessment. Such an assessment could help DOT to improve how it implements programs across all of its modal administrations. DOT should conduct a department-wide review of its current efforts to address issues in the areas experts identified for improvement and develop an action plan to implement improvements, as identified, in these areas. DOT concurred with these recommendations and cited new initiatives to improve the department.
Science and technology is traditionally divided into three broad categories: basic research, applied research, and advanced technology development. Basic research attempts to produce new knowledge in a scientific or technological area. This research is not associated with a specific weapon system. Applied research supports the development and maturation of new technologies for a defined military application. Advanced technology development entails large-scale hardware development and technology integration in more operationally realistic settings. Research and development beyond these categories is done in support of a specific weapon system. In the Air Force, the focal point for science and technology investments is the Air Force Research Laboratory. It was created in 1997 to centrally manage all Air Force science and technology efforts. Previously, the Air Force operated 13 different laboratories across the country. The present Air Force Research Laboratory, headquartered at Wright-Patterson Air Force Base, comprises 10 technology directorates. Nine directorates handle applied and advanced development projects. The 10th directorate, the Office of Scientific Research, manages the Air Force’s basic research projects. The Air Force Research Laboratory biennially generates a comprehensive strategic plan that supports the national military strategy and the Air Force Strategic Plan. In the past, the Air Force was a leader in high-technology exploration. According to a January 2000 Air Force Association study, the Air Force was the unquestioned leader in science and technology investments at the end of the Cold War. In the 1990s, however, it dropped to third place, behind the Army and Navy. The Congress has been concerned about the Air Force’s level of investment in science and technology. 
For fiscal year 2000, the House and Senate Armed Services Committees noted that the Air Force, in particular, had failed to comply with the science and technology funding objective specified in the prior year’s authorization act, thus jeopardizing the stability of the technology base and increasing the risk of failure to maintain technological superiority in future weapons systems. In 2001, the Scientific Advisory Board found that the Air Force’s science and technology program needed to improve its planning process and generate stronger user support and sponsorship. It also found weaknesses in the connection between operational requirements and science and technology programs, which inhibited the prioritization of investments. The Air Force complied with the overall requirements of the National Defense Authorization Act regarding long-term challenges. (See table 1 for the checklist of provisions.) The act defined a long-term challenge as a high-risk, high-payoff effort that will provide a focus for research in the next 20 to 50 years. To identify potential long-term challenges, an Air Force review team obtained over 140 ideas from a variety of sources in the scientific community. Ideas ranged from cloaking technologies (the deceptive masking of assets) and holodeck command capabilities (virtual reality battlespace control) to micro weapons like ubiquitous “battle bees” (miniaturized unmanned air vehicles) and cyber warfare technologies. The team evaluated these ideas to ensure that they complied with the three primary criteria specified in the act. The potential long-term challenges had to involve (1) compelling Air Force requirements; (2) high-risk, high-payoff areas of exploration; and (3) very difficult but achievable results. Yet another provision in the act required that the team avoid selecting projects that were linear extensions of ongoing science and technology projects. 
This provision was more difficult to assess, but after additional deliberations, the team determined that the following six challenges satisfied the criteria in the act: Finding and Tracking. To provide the decision maker with target quality information from anywhere in near real-time. Command and Control. To assess, plan, and direct aerospace operations from anywhere or from multiple locations in near real-time. Controlled Effects. To create precise effects rapidly, with the ability to retarget quickly against complex target sets anywhere, anytime, for as long as required. Sanctuary. To protect our total force from natural and man-made hazards or threats, allowing us to operate anywhere with the lowest risk possible. Rapid Aerospace Response. To respond as quickly as necessary to support peacetime operations or crises and move this response to another location very rapidly if needed. Effective Aerospace Persistence. To sustain the flow of equipment and supplies as well as the application of force for as long as required. Once the long-term challenges were identified, the Air Force followed the planning process specified in the act. For example, it established six work groups tasked with identifying possible approaches to address these challenges. The groups had about 9 weeks to complete their work. As required, a technical coordinator, assisted by a management coordinator, headed each group. Each group also complied with the requirement to hold a workshop within the science and technology community to obtain suggestions on possible approaches and promising areas of research. The workshop participants satisfied the requirement to identify current work that addresses the challenge, deficiencies in current work, and promising areas of research. Finally, the groups were also expected to select projects that were not linear extensions of current science and technology work. This particular provision was not easy for some groups to define. 
Some pondered the relative nature of the term. For example, a user would perceive “nonlinearity” differently than a scientist. Another group characterized it as a quantum leap in capability. Another definition associated nonlinear projects with multiple-capability dimensions. For example, if doubling the payload capacity of a weapon is a linear extension, then doubling the payload, speed, and range of the weapon would be a nonlinear extension. Regardless of the definition selected, each group addressed the issue in its planning process. Each group summarized the results of its workshop in a briefing that contained enabling capabilities, research areas, technology roadmaps, and associated funding requirements. In many cases, funding projections were double or triple the planned budget levels. For example, funding projections for basic research in physics, materials, mathematics, and computer science were more than triple the planned investment levels. The Air Force complied with the overall provisions of the National Defense Authorization Act regarding short-term objectives. (See table 2 for the checklist of provisions.) As required, the Air Force established a task force consisting of representatives from the Air Force Chief of Staff and combatant commands to identify short-term objectives. The task force obtained about 58 ideas from the requirements, user, and acquisition communities as specified in the act. Because of the mandated short-term focus, most of the input involved enhancing or accelerating ongoing research efforts, not initiating entirely new areas of research. These ideas included maintaining aging aircraft, combat identification, and time-critical targeting. While these are not new concepts, they still present significant technological challenges. We have recently reported on weaknesses in each of these areas. 
The task force reviewed each idea to ensure that it complied with the criteria in the act: (1) to involve compelling Air Force requirements, (2) to have support within the user community, and (3) to likely attain the desired benefits within 5 years. To ensure that each idea represented a compelling Air Force requirement, the task force evaluated each idea against the Air Force’s core competencies and critical future capabilities. To meet the user support requirement, the task force linked each potential short-term objective to specific mission needs and requirements documents. The objectives were reviewed and approved by the Air Force’s corporate structure. To ensure that the projects selected would achieve results in 5 years, the task force decided to use the technology maturity levels highlighted in a recent GAO report. The following is a list of the eight short-term objectives. Target Location, Identification, and Track. To detect, locate, track, and identify air/ground targets anytime in countermeasure environments in near real time. Command, Control, Communication, Computers, and Intelligence. To dynamically assess, plan, and execute global missions. Precision Attack. To engage air and ground targets from manned and unmanned vehicles with the precision and speed necessary to bring about decisive results. Space Control. To increase the survivability of critical space assets. Access to Space. To improve access to space through responsive, cost-effective launch systems. Aircraft Survivability and Countermeasures. To improve the ability to survive and operate against airborne and ground threats in all environments. Sustaining Aging Systems. To extend the service life of aging aircraft and space launch systems with reduced manpower, reduced total ownership costs, and enhanced reliability. Air Expeditionary Forces Support. To provide air expeditionary forces with the ability to operate with highly responsive and agile combat support forces. 
After the objectives were identified, the Air Force complied with the planning process specified in the act. As required, it established an integrated product team to address each short-term objective. Each team was composed of a cross-cutting mix of officials from the requirements, user, and science and technology communities, as the act specified. According to many of the short-term objective team leaders, the cross-cutting nature of the teams was very productive. Not only did they believe that their planning was enhanced by the direct input from users and requirements officials, but they also believed that the expertise and assistance from scientists in other laboratory directorates improved the process. Each team satisfied the requirement to identify, define, and prioritize the enabling capabilities necessary to meet the objectives. As required, each team identified the deficiencies in the enabling capabilities and projects necessary to eliminate the deficiencies. The teams summarized their work in briefings that contained prioritized lists of enabling capabilities, a definition of the objectives, technology roadmaps, and budget spreadsheets. The spreadsheets detailed the current and additional funding required to achieve the objectives. Obtaining the additional funding was a concern to many teams. Many teams identified funding requirements that greatly exceeded current funding levels; it was not uncommon for proposed annual funding levels to double or triple the level currently projected. For example, the Command, Control, Communication, Computers, and Intelligence team proposed programs that would require from 2.6 to over 4 times the planned annual investment. Another concern was the 15-year gap between the short-term objective and long-term challenge planning. 
According to the act’s provisions, the short-term teams were required to focus on technologies that would be mature in 5 years; the long-term teams focused on technologies needed 20 to 50 years in the future. According to laboratory officials, this mid-term gap constitutes much of the normal science and technology planning effort and represents a critical point in science and technology project development. This time frame is where science and technology can have a significant impact. The Air Force currently addresses this time frame in its normal planning process. In addition, this period is covered in the long-term challenge technology roadmaps, at least for the research efforts associated with those six challenges. The Air Force satisfied the top-level review requirements in the act. (See table 3 for the checklist of provisions.) The act required the secretary of the Air Force to conduct a timely review of the science and technology programs and to assess the budgetary resources needed to address the long- and short-term needs. The secretary delegated this responsibility to the deputy assistant secretary for Science, Technology and Engineering. The deputy complied with the requirement to conduct a review of the long- and short-term science and technology programs within the 1-year time limit specified in the act. On October 25, 2001, the deputy briefed the secretary on the final results and received his approval. The act also required the secretary to assess the fiscal year 2001 budget resources used and needed to adequately address science and technology needs. After consultation with representatives from the House and Senate Armed Services Committees, however, the deputy changed the budget baseline to fiscal year 2002. This was done to reflect the science and technology budget realignment occurring in fiscal year 2002. The deputy assessed the 2002 budget resources planned for science and technology programs and determined that they were adequately funded. 
The deputy noted, however, that the current level of funding would enable the programs to pursue the minimum level of scientific research. Additional funding would be required to pursue other projects. The deputy also complied with the provision to evaluate whether the ongoing and projected science and technology programs addressed the long- and short-term science and technology needs. He determined that the programs did address these needs, thus obviating the requirement to develop a course of action for science and technology programs that do not address the long-term challenges or short-term objectives. Finally, the act required the secretary to review the long-term challenges and short-term objectives and to identify additional work that should be undertaken to meet the challenges and objectives. The deputy complied with both provisions. Not only did he review the results of the long- and short-term planning efforts and identify additional work, but he also directed that the additional work be incorporated into the laboratory’s future planning, programming, and budget decisions. The deputy was in a unique position to address these requirements. He served not only as the overall review director for the science and technology planning process, but also as the chairman of the short-term objective task force. As a result, the deputy had many opportunities to review the work of both the long-term challenge and particularly the short-term objective planning teams. Because the Air Force complied with the provisions of the act, we are not making any recommendations in this report. The Department of Defense has reviewed this report and concurs with its contents. We conducted our work from May 2001 to January 2002 in accordance with generally accepted government auditing standards. Additional information on our scope and methodology is located in appendix I. If you have any questions about the information contained in this letter, please call me at (202) 512-4530. 
Major contributors to this work included Robert Murphy, Rae Ann Sapp, and Kristin Pamperin. To document the extent to which the Air Force complied with the long-term planning process specified in the National Defense Authorization Act for Fiscal Year 2001, we obtained appointment letters, membership rosters, initial guidance and work plans, meeting schedules, biographies of each technical coordinator, and a comprehensive listing of the initial long-term challenge ideas. We also obtained minutes from team meetings, weekly activity reports, E-mail communications, interim and final briefing reports, associated studies, workshop agendas and results, current and projected budget spreadsheets, capability lists, and promising research areas. To discuss how each team addressed the act's provisions, we met with each long-term challenge technical coordinator and management coordinator. We also met with officials from the Air Force Research Laboratory's headquarters and the Office of the Deputy Assistant Secretary for Science, Technology, and Engineering. Finally, we physically observed the proceedings of one long-term challenge workshop over the course of 2 days. To determine whether each provision was addressed, we prepared summary checklists for each long-term challenge and keyed the data back to a specific provision of the act. To document the extent to which the Air Force complied with the short-term objective planning process specified in the act, we obtained appointment letters, membership rosters, initial guidance and work plans, meeting schedules, and a comprehensive listing of the initial short-term objective ideas. We also obtained weekly activity reports, short-term objective descriptive summaries, meeting minutes, E-mail communications, interim and final briefing reports, current and projected budget spreadsheets, and prioritized listings of enabling capabilities.
To discuss how each team addressed the act's provisions, we met with each short-term objective director. We also met with officials from the Air Force Research Laboratory's headquarters and the Office of the Deputy Assistant Secretary for Science, Technology, and Engineering. Finally, we physically observed the proceedings of one short-term objective workshop. To evaluate whether each provision was addressed, we prepared summary checklists for each short-term objective and keyed the data back to a specific provision of the act. To document the extent to which the Air Force complied with the program and budgetary resource assessment process specified in the act, we obtained the final weekly activity reports, internal correspondence, review schedule, and overview briefing. To evaluate whether each provision was addressed, we prepared a summary checklist and obtained a written summary of the Air Force's actions to comply with the provisions. Finally, we discussed the Air Force's actions with representatives from the Office of the Deputy Assistant Secretary for Science, Technology, and Engineering.
Congress and the scientific community are concerned that the Air Force's investment in science and technology may be too low to meet the challenges presented by new and emerging threats. The National Defense Authorization Act for Fiscal Year 2001 requires the Air Force to review its science and technology programs to assess the budgetary resources currently used and those needed to adequately address the challenges and objectives. GAO found that the Air Force complied with the requirements of section 252 of the act. The Air Force established an integrated product team to identify long-term science and technology challenges and a task force to identify short-term objectives. For each challenge or objective that was identified, the Air Force established teams to identify technological capabilities needed to achieve these goals. Each team chose research projects that addressed the criteria specified in the act. The Air Force also complied with the act's process provisions. The Deputy Assistant Secretary for Science, Technology and Engineering was required to review the teams' results and to identify any science and technology research not currently funded; GAO found that he complied with these requirements as well.
After a possible hazardous waste site is reported to EPA, it is evaluated to determine whether it should be placed on the National Priorities List (NPL), EPA's list of sites that present serious threats to human health and the environment. The cleanup at an NPL site consists of several phases. First, through the remedial investigation and feasibility study, the conditions at a site are studied, problems are identified, and alternative methods to clean up the site are evaluated. Then, a final remedy is selected, and the decision is documented in a record of decision. Next, during an engineering phase, called the remedial design, technical drawings and specifications are developed for the selected remedy. Finally, in the remedial action phase, a cleanup contractor begins constructing the remedy according to the remedial design. Under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), which established the Superfund program, EPA must give preference to those long-term cleanup actions that permanently and significantly reduce the volume, toxicity, or mobility of hazardous substances at a site. Under CERCLA, parties responsible for cleaning up sites can include site owners and operators, as well as generators and transporters of hazardous waste. As of August 1998, 1,193 sites were listed on the NPL, and another 56 were proposed for listing. Remedies had been constructed at 526 sites. Since the Superfund program began, 175 sites have been deleted from the NPL. In addition, the program has conducted about 5,000 removal actions—short-term response actions to address emergency and other situations—at NPL and other sites. Our reviews have shown that the Superfund cleanup process can be long and expensive.
In March 1997, we reported that the cleanup of nonfederal sites completing the cleanup process in 1996 had taken an average of 10.6 years after placement on the NPL and that remedy selection at sites completing that phase of the cleanup process in 1996 had taken an average of 8.1 years after a site's listing. In September 1997, we reported on a growing number of expensive Superfund cleanups. We said that in 1996 EPA had spent $10 million or more in that year alone on nine sites, up from two sites with the same level of annual spending in 1989. Spending on the nine sites, which represented less than 3 percent of the sites where EPA spent money for remedial actions, totaled about $238 million, almost 57 percent of remedial action spending at all sites. Beginning in 1993, EPA launched a series of administrative reforms to address a wide range of Superfund concerns. These reforms have attempted to speed up site investigations, choose more cost-effective remedies, reduce litigation, and make other improvements. According to EPA officials, the reforms have begun to work. EPA officials believe that cleanup durations have recently been reduced to an average of 8 years. In addition, the National Remedy Review Board, which EPA created to review proposed site cleanup remedies, had saved $37 million as of November 1997 through its examination of 20 remedies. EPA has also encouraged its regions to revisit remedy decisions when new information or technical advances indicate that the intended level of health or environmental protectiveness might be achieved at less cost. According to EPA, these remedy updates had saved at least $725 million at over 120 sites as of November 1997. EPA also made it easier for parties with only minimal responsibility for site contamination to settle their liability with lower legal expenses. All 50 states have established their own cleanup programs for hazardous waste sites, according to a 1998 survey by the Environmental Law Institute.
Some of these state programs can handle highly contaminated sites, whose risks could qualify them for the Superfund program, as well as less dangerous sites. Some states initially patterned their cleanup programs after the Superfund program, but over the years, in an effort to clean up more sites faster and less expensively, have developed their own approaches to cleaning up sites. States accomplish cleanups under three programs: (1) voluntary cleanup programs that allow parties to clean up their sites without enforcement action, often to increase the site's economic value; (2) brownfields programs that encourage the voluntary cleanup of sites in urban industrial areas to reuse the sites and avoid the expansion of industry into "greenfields," that is, undeveloped land; and (3) enforcement programs that oversee the cleanup of the most serious sites and force uncooperative responsible parties to clean up their sites. States generally use their voluntary and brownfields programs to clean up less complex sites by offering various incentives to responsible parties, such as reduced state oversight. Some states maintain cleanup funds to pay all or a portion of the costs of cleanups at sites for which responsible parties able to pay for full cleanups cannot be found. Hazardous waste officials in each of the seven states we contacted identified practices used at sites sufficiently contaminated to be included in the Superfund program that they believe achieve faster and less costly cleanups than would occur under the Superfund program. Some of these practices are designed to facilitate faster remedy selection, thereby saving time or money before site cleanup begins. Other practices allow the implementation of less expensive cleanup remedies that officials believe are nonetheless protective of human health and the environment. Two states have adopted practices that reduce the liability of parties who might be responsible for cleanup costs under Superfund's liability rules.
State officials said that these practices have been applied to some state program sites that are sufficiently contaminated to qualify for the NPL. Although the officials described instances in which these practices have yielded benefits, none could formally document the time and cost savings of the practices. State officials from all seven states described practices that they believe facilitate faster remedy selection at contaminated sites, including sites contaminated enough to qualify for Superfund cleanups. These officials said that the use of preestablished cleanup standards or of presumptive remedies, that is, remedies proven to be effective for certain cleanup problems, without extensive consideration of alternate remedies, can expedite remedy selection. Officials of five states said that more flexible public involvement requirements can save time when cleanups are not controversial. Officials representing six state programs said that selecting cleanup remedies for sites on the basis of preestablished standards that specify the maximum concentrations of contaminants in soil and water after cleanup, without conducting time-consuming, site-specific risk assessments, speeds up the remedy-selection process. Illinois, for example, allows the use of “look-up tables” that specify maximum concentration levels for about 150 specific soil and groundwater contaminants. These look-up tables, according to the state officials, quickly and clearly defined the end goal of the site cleanup without a risk assessment, allowing the state and the responsible parties to determine how best to achieve this standard. According to the state officials, the use of the statewide standards offers a time savings when compared to the approach that the sites in EPA’s Superfund program follow. 
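The look-up-table approach the officials describe amounts to comparing measured contaminant concentrations against a fixed table of maximum allowable post-cleanup levels, with no site-specific risk modeling. A minimal sketch in Python of how such a screening check works follows; the contaminant names and limits are hypothetical placeholders, not Illinois's actual standards.

```python
# Hypothetical post-cleanup soil standards (mg/kg). The real Illinois
# look-up tables cover roughly 150 soil and groundwater contaminants.
SOIL_STANDARDS = {
    "benzene": 0.03,
    "lead": 400.0,
    "arsenic": 13.0,
}

def exceedances(measured):
    """Return contaminants whose measured concentrations exceed the
    preestablished standard, i.e., those still requiring cleanup."""
    return {
        name: level
        for name, level in measured.items()
        if name in SOIL_STANDARDS and level > SOIL_STANDARDS[name]
    }

# Hypothetical site measurements (mg/kg).
sample = {"benzene": 0.10, "lead": 120.0, "arsenic": 20.0}
print(sorted(exceedances(sample)))  # → ['arsenic', 'benzene']
```

Because the end goal is fully specified by the table, the state and the responsible parties can move directly to deciding how to reach those levels, which is the source of the time savings the officials describe.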
EPA has developed a few soil cleanup standards and may in certain circumstances apply standards from its water programs and state standards at Superfund sites; however, the Superfund regulations require that each site receive a baseline risk assessment showing the need for action. These risk assessments characterize the current and potential threats to human health and the environment posed by contaminants at the site. Officials in both Illinois and Pennsylvania said that eliminating risk assessments can save considerable time. According to an Illinois official, while the duration of risk assessments varies by site, a risk assessment can add as much as 2 years to the remedy-selection process, while a Pennsylvania official said that many months can be saved. Officials in five states said that, in comparison to EPA, they have less rigorous requirements for extensive public involvement in remedy selection. According to New Jersey program officials, state law requires that program officials notify local officials—such as the mayor’s office or the municipal health department—about an impending site cleanup. The state then generally defers the decision about public meetings or other more extensive forms of public outreach to local officials. State officials explained that more extensive public involvement measures are not required—although the state may pursue them if it sees the need—because public meetings are often sparsely attended, and the results of such efforts do not justify the time and resources required. New Jersey’s approach contrasts with EPA’s more extensive public involvement requirements. The Superfund program’s regulations require that at each NPL site, EPA develop a community relations plan describing a community’s information needs and outlining ways that the agency will meet these needs. 
Furthermore, EPA must notify groups affected by the site of the availability of technical assistance grants that can be used to hire experts to explain technical information about the site. EPA must also allow adequate opportunity for public comment, such as at a public meeting. A transcript of the public meeting must be made available to the public as well. The final cleanup plan must include a response to each significant comment and question received. EPA headquarters officials said that they could tailor community involvement procedures at sites, depending on the circumstances, but according to the officials, the minimum EPA requirements exceeded the simple notice procedures New Jersey used. Texas officials said that their program’s greater use of standard cleanup approaches—known as presumptive remedies—has significantly reduced the time and expense involved in the remedy-selection process. Presumptive remedies are remedies that have proven effective in cleaning up a particular kind of hazardous waste site and would presumably work at similar sites in the future. Such remedies can be viewed as off-the-shelf solutions that can be selected with less study of alternative remedies in the absence of site-specific conditions requiring such consideration. Because presumptive remedies allow the state to focus quickly on one or a limited range of remedies, they can save considerable time and expense in the remedy-selection process. On the basis of a comparison of a limited number of state sites, Texas officials estimated that the studies at presumptive remedy sites were less than half as costly as the full feasibility studies that were conducted at other sites. Officials of three other states said that they also had a greater number of presumptive remedies than did EPA. EPA has also developed presumptive cleanup remedies for some types of NPL sites. 
However, as table 1 indicates, Texas has developed presumptive remedies for four contaminants—metals, semivolatile organic compounds, pesticides, and polychlorinated biphenyls (PCBs)—for which EPA has not. These contaminants are frequently found at Superfund sites. Some states have adopted two practices that state officials believe result in less expensive remedies than those used in the Superfund program and that could be useful even at highly contaminated sites. These practices are (1) greater acceptance of remedies that contain waste on site rather than removing or destroying it and (2) more willingness to assume that sites will be used for industrial or commercial rather than residential purposes. State officials in Illinois, Pennsylvania, and Texas told us that their states' authorizing statutes do not contain a preference for permanent cleanup remedies. Permanent cleanup remedies are those that remove or treat the principal waste threats, permanently eliminating hazardous waste from the site or reducing the volume, toxicity, or mobility of the waste, through techniques such as incineration or bioremediation. Because the state programs lack such preferences, more nonpermanent containment cleanup remedies may be used. Nonpermanent remedies typically prevent human contact with contaminants by containing the waste in place—by, for example, placing a clay cap or a parking lot over contaminated soil, restricting the land's use, or placing barriers around the contamination. These remedies tend to be less expensive to implement than permanent ones. Although the states had no studies documenting cost reductions from containment remedies, some officials did cite cases in which cost savings resulted. Pennsylvania officials described how a change in the state cleanup statute that eliminated a preference for permanence had reduced cleanup costs for a site.
The remedy proposed by the state for the site under the old statute—a $30 million to $40 million permanent remedy consisting of the excavation and treatment of contaminants—was changed with the passage of the new statute to an excavation and containment remedy with a cost of $2 million to $3 million. According to state officials, containment remedies remain protective of human health and the environment if properly controlled and maintained. We reported in April 1997 that cleanup managers for Illinois, Minnesota, and New Jersey estimated that containment methods were used for at least half of the cleanups of contaminated soil in their voluntary cleanup programs. In contrast, the Superfund program operates under the requirements of CERCLA, which establishes a preference for permanent remedies. EPA's remedy-selection criteria require the selection of a permanent remedy to the maximum extent practicable, though other factors, such as cost and implementation concerns, must also be taken into account. EPA officials said that they attempt to adhere to the preference whenever possible. EPA officials also noted that in recent years the agency has moved away from a "treatment for treatment's sake" approach to one of applying treatment to principal threats. Principal threats include liquids, areas contaminated with high concentrations of toxic compounds, and highly mobile materials. According to a September 1997 EPA analysis, between 1988 and 1993, 70 percent of all remedies dealing with the source of contamination involved treatment, while in 1995, this number dropped to 53 percent. Where contaminants are left on site, EPA requires periodic site reviews to monitor and analyze the implementation and effectiveness of the containment remedies.
Some of the states believed that, in setting cleanup standards and selecting remedies, their cleanup programs were more likely than Superfund to determine that sites would be used for future industrial or commercial purposes rather than for residential purposes. The determination of how sites will be used in the future is important because a site whose expected use is industrial or commercial may be cleaned to less strict standards, resulting in less costly cleanups. The states that believe they base site cleanups on assumptions of industrial or commercial uses more readily than EPA have established specific cleanup standards for industrial sites. Until several years ago, EPA generally assumed that a residential use of land was possible in the future, unless there was substantial evidence to the contrary. Because EPA cannot control local zoning or other institutional controls that restrict the land’s use, its guidance suggested that those assessing the sites’ risk assume that in the future the land would be residential even though no one was living there at the time. Critics contended that EPA was assuming residential uses for sites that would be used solely for industrial purposes in the foreseeable future. In 1995, however, EPA issued new guidance for considering future land use in making remedy selection decisions at NPL sites. The guidance encouraged parties cleaning up sites to collect as much information as possible about the site’s future use and to obtain the local community’s consensus regarding its future. Furthermore, EPA officials believe that as a result of this policy, EPA has evolved toward a new balancing of the various mandates contained in CERCLA and that now EPA is as likely as the states to opt for nonresidential future land-use scenarios. An EPA analysis found that only 38 percent of remedies selected in 1995 included residential land-use scenarios. 
States did not have data to confirm their beliefs that they base cleanup decisions on future industrial or commercial uses of sites more often than does EPA. However, our April 1997 report noted that voluntary cleanup programs in four of the states we covered in our current review used industrial standards most frequently for their cleanups. Some states believed that EPA’s requirement for obtaining a local community’s consensus on the future uses of sites could make it more difficult to consider a land use other than residential. EPA officials, however, believe that early community involvement, with a particular focus on the community’s desired future use of the property associated with an NPL site, can result in a more democratic decision-making process; greater community support for remedies selected as a result of this process; and more expedited, cost-effective cleanups. In addition, EPA officials said that communities were willing to accept cleanups based on continued nonresidential uses of sites. Two of the states we surveyed had adopted policies on the cleanup liability of parties associated with sites that they believed reduce litigation costs and encouraged faster cleanups. These policies involved reducing the liability of site owners and operators for cleanups and making the cleanup of municipal landfills a state responsibility. A Michigan law adopted in 1995 provides that the owners and operators of contaminated sites are liable only if they are responsible for an activity causing a release of hazardous substances into the environment. By contrast, under CERCLA, responsible parties—including owners and operators—are liable regardless of whether they actually caused the release. Thus, anyone seeking to recover cleanup costs under Michigan law from owners and operators must prove causation, while parties seeking to recover cleanup costs under CERCLA generally need not address the issue. 
The causation standard, according to a state official, results in more expeditious cleanups of facilities because it reduces litigation and transaction costs and disruptions or delays. In addition, a Michigan survey of 33 municipalities indicated that the causation standard has facilitated the redevelopment of sites. A state official also noted, however, that some fraction of the contaminated sites that would have been cleaned up by owners and operators under a strict liability standard may need to be addressed at public expense. Minnesota state officials cited the state’s Closed Landfill Program as a better way to clean up and care for landfills and protect innocent parties. Under this program, the state performs cleanup actions, takes over the long-term operation and maintenance of the cleanup remedy, and reimburses eligible parties for past cleanup costs. Although this approach is costly to the state, which assumes the cost of remediating the site, it reduces litigation costs and protects parties that may have made a very small contribution to site contamination but that could be caught up in litigation if all contributors were liable. According to state officials, it is difficult to assign responsibility to the many parties that contribute to the contamination of municipal landfills, and very small contributors often face potentially bankrupting lawsuits. State officials said that it is preferable that the cost of addressing the problems of closed landfills be viewed as a societal cost. The officials said that Minnesota is the only state that has adopted this program, and it has signed an agreement with EPA to end federal involvement in 10 closed landfills on the NPL within the state. In contrast, EPA’s “polluter pays” approach, according to Minnesota officials, does not work well for most landfills, where a large portion of the waste comes from many small businesses and households. 
However, EPA is currently mitigating the impact of Superfund liability on the smallest contributors by offering expedited or low-cost settlements to parties that contribute small amounts of hazardous substances. These settlements protect the parties from further litigation. Environmental policy stakeholders that we interviewed, including EPA, state and national environmental organizations, and representatives of state and local government associations, generally did not dispute that the state practices had the efficiency benefits described by state officials. Some of the environmental organizations and a local government organization, however, identified potential risks of applying these practices to the Superfund program. Since the state practices can reduce cleanup costs, they are generally advantageous for businesses and others responsible for cleaning up sites. (See app. III for the list of stakeholders that we contacted.) The stakeholders that we contacted generally supported the broader use of presumptive remedies in the Superfund program. A representative of Resources for the Future (RFF), an independent environmental research organization, said that the use of presumptive remedies where particular contaminants predominate—as Texas does for pesticides, PCBs, and semivolatile organic compounds—is a sound approach because there are only a limited number of ways to deal with certain contaminants. A representative of citizens groups and environmental organizations in Texas noted that presumptive remedies can make sense, as long as the remedies that have been designed are truly protective. Similarly, the representatives of an environmental organization cautioned that the value of presumptive remedies depends on the level of protection they provide. EPA officials cited another advantage of presumptive remedies: consistency in remedy selection from site to site.
The stakeholders were more cautious about the use of preestablished standards to specify the goal of the remediation process without a site-specific risk assessment and with reduced public involvement. EPA regional officials believed that the use of the automatically applied standards without a risk assessment is more appropriate for sites that have fairly simple contamination problems, but would not be appropriate for the very large and complex sites that come under the Superfund program. Superfund sites can be over 100 acres, with 30 contaminant sources and 100 different contaminants, and according to EPA officials, using a look-up table would be too simplistic an approach to remedy selection at such sites. These tables are based on assumptions about exposure to contaminants that they said needed to be verified at more complex sites through a risk assessment. An Illinois official said, however, that the preestablished standards could be appropriate for portions of complex sites, even if they could not be used throughout the site. Other stakeholders were not familiar with the details of the state cleanup standards. However, a representative of RFF said that while the standards may reduce debate about appropriate cleanup levels, there is a tradeoff involved if the standard is not sufficiently protective of public health. Representatives of the Sierra Club said that use of look-up tables to define the end goal can be overly simplistic, and it was important that parties responsible for the original contamination remain liable if events prove that the cleanup to specified standards was not adequate. Regarding reduced requirements for public involvement based on a presumed lack of public interest, an EPA regional official said that low attendance at meetings arranged for public input may be less a reflection of public indifference than a sign that the public has not been sufficiently informed of issues surrounding a contaminated site. 
This official said that it is necessary to be very proactive in public outreach efforts. For example, he said that it may be necessary to contact churches in order to reach some ethnic communities. A representative of an environmental organization noted that while limiting public involvement may conserve resources in the short term, it may lead to greater costs in the long run if members of the public believe that they have been excluded from the process and decide to litigate. The representative emphasized that the Superfund cleanup process should produce no surprises, and an effective public involvement effort is critical. The representatives of the environmental groups and others that we contacted, such as the Sierra Club, John Snow Institute, RFF, and EPA regions, generally believed that the states’ lack of preference for permanent cleanup remedies and their greater readiness to consider that sites will not be used for residential purposes in the future tend to weaken the long-term effectiveness of site remediation programs. These groups were concerned that nonpermanent remedies, like clay caps designed to isolate contaminants, would not be maintained over time, and that institutional controls, like zoning or deed restrictions needed to prevent the residential or other higher-risk use of sites, would be changed or not enforced. Representatives of the International City/County Management Association (ICMA) said that the land-use and other institutional controls required by nonpermanent remedies require better cooperation and communication between state and local governments than often currently exists. Furthermore, according to an ICMA representative, a recent ICMA focus group indicated that many state and local officials do not fully appreciate the long-term demands—including oversight and enforcement—that institutional controls may place upon local governments. 
According to an EPA official, some contaminants cannot be contained over the long term (50 to 100 years), and Superfund's preference for permanent cleanup remedies is necessary for such long-term protection. In addition, EPA officials said that the costs of long-term operations and maintenance of nonpermanent remedies may partially offset initial cost savings. A report by RFF, conducted under a grant from EPA, discussed the implications of basing remedy selection on land-use assumptions. The report stated that land-use categories (such as residential, industrial, and commercial) are used to estimate the future exposure of people to contaminants, yet the relation between land use and exposure is often not known and may vary widely. Anticipating the likely future use of a site is no easy task, according to the report, given the competing interests that want different land uses. The report noted that EPA does not have the authority to ensure that local land-use controls are maintained and enforced over time at sites where residual contamination precludes unrestricted use. Local land-use restrictions are typically the province of local government and private property law. The report observed that land-use controls are subject to various pressures, such as demands for property development, that may limit their effectiveness. Two major challenges result from a cleanup policy linking land use to remedy selection, according to the report: first, how to involve the public more effectively in cleanup and reuse decisions, and second, how to ensure the effectiveness of property-use restrictions when the legal authority for such controls is the private property laws of each state. Some stakeholders were not supportive of the changes Michigan made in its liability provision. 
A representative of the Michigan Environmental Council said that requiring proof of causation would increase the public expense for cleanup and that, because fewer responsible parties would be available to pay for cleanup, fewer contaminated sites would be remediated. According to an EPA official, CERCLA establishes a defense to liability for innocent landowners who acquire property without knowing that it was contaminated, despite having exercised due care to discover potential contamination. However, an EPA official acknowledged that owners are generally unable to qualify for this defense because it is rare that an in-depth investigation of a contaminated site would not detect the contamination. While not disagreeing with Minnesota's policy of assuming the cost of closed municipal landfills, representatives of the Sierra Club said that it is important that the policy not be extended to privately owned landfills because this would burden taxpayers with costs that are the responsibility of a private party. EPA officials said that Minnesota's approach is potentially very costly from the government's standpoint and that EPA could probably not afford to adopt such a policy without significant additional funding. Several state officials told us that, although they believe the practices of their state programs facilitate faster and less costly cleanup, they also wanted to stress the importance of an ongoing Superfund program. For example, officials in Massachusetts said that the existence of the federal program, with what they characterized as more daunting requirements and procedures than exist in state programs, was an important element in obtaining the cooperation of responsible parties in the state program. If these parties are not cooperative and the site is sufficiently dangerous, responsible parties risk being brought into the Superfund program. We provided a draft of this report to EPA for its review and comment. 
We spoke with EPA officials, including the Director of the State, Tribal, and Site Identification Center, in EPA's Office of Solid Waste and Emergency Response, to obtain the agency's comments. EPA generally agreed with the description of the Superfund practices presented in the report but made technical comments and corrections, which we incorporated as appropriate. In addition, EPA officials said that the agency's recent administrative reforms had reduced the cost and duration of Superfund cleanups. The officials provided us with data on the cost and time savings achieved by certain of these reforms, which we included in our report. EPA also believed that the report should highlight the fact that containment remedies require long-term management and monitoring and may fail without such attention. We pointed out that this issue was addressed in the report's section summarizing stakeholders' views. We also provided selected portions of this report to officials responsible for the state programs we discussed. We incorporated state comments and corrections as necessary. To identify practices that may facilitate cleanups of hazardous waste sites that are faster or less costly in comparison with the federal Superfund program, we selected seven states—Illinois, Massachusetts, Michigan, Minnesota, New Jersey, Pennsylvania, and Texas—with cleanup programs that are among the largest in the nation and that were recommended by various stakeholders—including EPA, industry organizations, and environmental groups—as states that had implemented time- and cost-saving practices. We then conducted interviews with these states' program officials, who identified and described program practices that, in comparison with the practices of the federal Superfund program, they believe facilitated faster or less costly cleanup of hazardous waste sites. We also reviewed pertinent laws, regulations, and other available documentation describing these practices. 
We did not independently verify the officials' statements regarding time and cost savings. To identify issues that should be considered before these practices would be adopted by the Superfund program, we talked with officials from environmental and local governmental organizations and EPA regional and headquarters offices. Where possible, we obtained references to these organizations from states and EPA. We interviewed officials in each of EPA's regional offices whose jurisdiction includes the selected states, environmental groups in the selected states, and national environmental groups. We conducted our work from July through December 1998 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days from its date. At that time, we will send copies of this report to appropriate congressional committees; interested Members of Congress; the Administrator of EPA; state program managers; and other interested parties. We will also make copies available to others upon request. Please call me at (202) 512-6111 if you or your staff have any questions. Major contributors to this report are listed in appendix IV. We contacted this organization in a preliminary phase of our work in order to obtain its views on the best states to survey. Richard Johnson
Pursuant to a congressional request, GAO: (1) identified practices that are both used in selected state programs at sites that may be contaminated enough to qualify for long-term cleanup under the Superfund program and that are believed by state officials to reduce the time and expense of cleanups; and (2) obtained the views of the Environmental Protection Agency (EPA), environmentalists, and other stakeholders about whether the states' practices may be applicable to the Superfund program. GAO noted that: (1) state hazardous waste program officials identified cleanup practices that they believe lead to faster or less costly cleanup of sites and that have been applied at sites that qualify for the Superfund Program, and said that the practices facilitated cleanups in that: (a) some practices promoted faster decisionmaking about how to clean up sites, that is, decisions about which cleanup remedies to use; (b) their programs allow less costly cleanup remedies than the Superfund law requires, which are nevertheless, they believe, protective of health and the environment; and (c) two state programs reduce litigation costs, speed up cleanups, and improve the fairness of the cleanup process by not holding some parties responsible for cleanup who would be liable under the Superfund law; (2) although the officials provided some anecdotal evidence illustrating the benefits of these practices, none could provide a formal assessment of time and cost savings; (3) environmental policy stakeholders that GAO interviewed, including EPA, state and national environmental organizations, and representatives of local governments, generally did not dispute that the state practices identified can facilitate faster or less costly cleanups; (4) because they can reduce costs, the state practices are generally advantageous to private companies and others responsible for cleaning up sites; (5) however, EPA and environmental and local government groups said that applying some of the practices 
to the Superfund program could have disadvantages; (6) environmental groups, as well as representatives of state and local officials, noted that containment remedies leaving contamination at sites would require control over the use of the sites, such as restrictive zoning, to reduce human exposure to the contaminants; (7) a number of stakeholders, including state officials, said that a lessening of the Superfund program's more rigorous cleanup requirements or liability standards could negatively affect the state programs; (8) they noted that states can refer sites at which parties responsible for cleanup refuse to comply with state requirements to EPA for possible action under the Superfund program; and (9) the belief of responsible parties that the Superfund requirements are more onerous than the states' is a powerful incentive for cooperation with state authorities that might be weakened if the Superfund program became more like the state programs.
FEHBP is the largest employer-sponsored health insurance program in the country, providing health insurance coverage for about 8 million federal employees, retirees, and their dependents through contracts with private insurance plans. All currently employed and retired federal workers and their dependents are eligible to enroll in FEHBP plans, and about 85 percent of eligible workers and retirees are enrolled in the program. For 2007, FEHBP offered 284 plans, with 14 fee-for-service (FFS) plans, 209 health maintenance organization (HMO) plans, and 61 consumer-directed health plans (CDHP). About 75 percent of total FEHBP enrollment was concentrated in FFS plans, about 25 percent in HMO plans, and less than 1 percent in CDHPs. Total FEHBP health insurance premiums paid by the government and enrollees were about $31 billion in fiscal year 2005. The government pays a portion of each enrollee’s total health insurance premium. As set by statute, the government pays 72 percent of the average premium across all FEHBP plans but no more than 75 percent of any particular plan’s premium. The premiums are intended to cover enrollees’ health care costs, plans’ administrative expenses, reserve accounts specified by law, and OPM’s administrative costs. Unlike some other large purchasers, FEHBP offers the same plan choices to currently employed enrollees and retirees, including Medicare-eligible retirees who opt to receive coverage through FEHBP plans rather than through the Medicare program. The plans include benefits for medical services and prescription drugs. By statute, OPM can negotiate contracts with health plans without regard to competitive bidding requirements. Plans meeting the minimum requirements specified in the statute and regulations may participate in the program, and plan contracts may be renewed automatically each year. OPM may terminate contracts if the minimum standards are not met. OPM administers a reserve account within the U.S. 
Treasury for each FEHBP plan, pursuant to federal regulations. Reserves are funded by a surcharge of up to 3 percent of a plan's premium. Funds in the reserves above certain minimum balances may be used, under OPM's guidance, to defray future premium increases, enhance plan benefits, reduce government and enrollee premium contributions, or cover unexpected shortfalls from higher-than-anticipated claims. On January 1, 2006, Medicare began offering prescription drug coverage (also known as Part D) to Medicare-eligible beneficiaries. Employers offering prescription drug coverage to Medicare-eligible retirees enrolled in their plans could, among other options, offer their retirees drug coverage that was actuarially equivalent to standard coverage under Part D and receive a tax-exempt government subsidy to encourage them to retain and enhance their prescription drug coverage. The subsidy provides payments equal to 28 percent of each qualified beneficiary's prescription drug costs that fall within a certain threshold and is estimated to average about $670 per beneficiary per year. OPM opted not to apply for the retiree drug subsidy. The average annual growth in FEHBP premiums slowed from 2002 through 2007 and was generally lower than the growth for other purchasers since 2003. Premium growth rates of the 10 largest FEHBP plans by enrollment varied to a lesser extent than did growth rates of smaller plans from 2005 through 2007. The growth in the average FEHBP enrollee premium contribution generally tracked average premium growth and was generally similar to recent growth in enrollee premium contributions for surveyed employers. After a period of decreases in 1995 and 1996, FEHBP premiums began to increase in 1997, reaching a peak increase of 12.9 percent in 2002. The growth in average FEHBP premiums began slowing in 2003 and reached a low of 1.8 percent for 2007. 
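The statutory cost-sharing rule described earlier (the government pays 72 percent of the average premium across all FEHBP plans, but no more than 75 percent of any particular plan's premium) can be sketched as follows. The premium figures used here are hypothetical, chosen only for illustration:

```python
def government_share(plan_premium: float, average_premium: float) -> float:
    """Government contribution under the FEHBP statutory formula:
    72 percent of the program-wide average premium, capped at
    75 percent of the particular plan's own premium.
    """
    return min(0.72 * average_premium, 0.75 * plan_premium)

# Hypothetical monthly premiums for illustration (not actual FEHBP rates):
avg = 400.0  # assumed program-wide average premium

# For a relatively expensive plan, the 72%-of-average amount applies,
# because the 75% cap (0.75 * 500 = 375) is not binding.
print(government_share(500.0, avg))

# For a relatively cheap plan, the cap binds: the government pays
# 75% of that plan's premium (0.75 * 300 = 225) rather than 72% of average.
print(government_share(300.0, avg))
```

The cap means that enrollees in low-premium plans always pay at least 25 percent of their plan's premium, even when 72 percent of the program-wide average would exceed that amount.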
The average annual growth in FEHBP premiums was faster than that of CalPERS and surveyed employers from 1997 through 2002—8.5 percent compared with 6.5 percent and 7.1 percent, respectively. However, beginning in 2003, the average annual growth rate in FEHBP premiums was slower than that of CalPERS and surveyed employers—7.3 percent compared with 14.2 percent and 10.5 percent, respectively. (See fig. 1.) The premium growth rates for the 10 largest FEHBP plans by enrollment—accounting for about three-quarters of total FEHBP enrollment—ranged from 0 percent to 15.5 percent in 2007. The average annual premium growth for these plans fell within a similar range for 2005 through 2007. (See table 1.) Premium growth rates across the smaller FEHBP plans in 2007 varied more widely, from a decrease of 43 percent to an increase of 27.1 percent. The average premium growth in 2006 also varied by such characteristics as plan type, plan option, geography, and share of retirees. Premium growth for FFS plans (6.0 percent) was lower than for HMO plans (8.5 percent). Premium growth for low-option plans (2.6 percent) was lower than that for high-option plans (7.3 percent). Premium growth was higher for regional HMO plans in the southern United States (9.2 percent) than for regional HMO plans elsewhere (from 7.2 percent to 8.7 percent). Premium growth for plans with 20 percent or fewer retirees (4.5 percent) was lower than for plans with greater than 20 percent retirees (7 percent). Growth in average FEHBP enrollee premium contributions generally paralleled premium growth from 1994 through 2007. The average annual growth in enrollee premium contributions during this period was 6.9 percent, while premium growth was 6.1 percent. After decreasing in 1995, average enrollee premium contributions began to increase, rising to a peak of 12.8 percent in 1998. 
Paralleling premium growth trends, the average annual growth in enrollee premium contributions has slowed since 2002, except for an upward spike in 2006. (See fig. 2.) The growth in average FEHBP enrollee premium contributions was generally similar to that of surveyed employer plans. (See fig. 3.) From 1994 through 2006, the average annual growth in FEHBP enrollee premium contributions ranged from a decrease of 1.2 percent to an increase of 12.8 percent, compared with a decrease of 10.1 percent to an increase of 20.9 percent for surveyed employer plans. From 2003 through 2006, the average annual increase in FEHBP enrollee premium contributions—8.8 percent—was comparable with that of surveyed employer plans. The growth in enrollee premium contributions for the 10 largest FEHBP plans by enrollment ranged from negative 1.1 percent to 51.5 percent in 2007. The growth in enrollee premium contributions for smaller FEHBP plans varied more widely, from negative 62.6 percent to 86.8 percent. Projected increases in the cost and utilization of services and in the cost of prescription drugs accounted for most of the average premium growth across FEHBP plans. However, projected withdrawals from reserves offset much of this growth from 2006 through 2007. Officials we interviewed from most of the FEHBP plans said that the retiree drug subsidy would have had a small effect on premium growth had OPM applied for the subsidy and used it to offset premiums. Our interviews with officials from two large plans and our analysis of the potential effect of the subsidy showed that it would have lowered the growth in premiums and enrollee premium contributions for 2006. OPM officials stated that the subsidy was not necessary because its intent was to encourage employers to continue offering prescription drug coverage to Medicare-eligible enrollees, and FEHBP plans were already doing so. 
The potential effect of the subsidy on premium growth would also have been uncertain because the statute did not require employers to use the subsidy to mitigate premium growth. Projected increases in the cost and utilization of health care services and the cost of prescription drugs accounted for most of the average FEHBP premium growth from 2000 through 2007. Absent projected changes associated with other factors, projected increases in the cost and utilization of services alone would have accounted for a 6 percent increase in premiums for 2007, down from a peak of about 10 percent for 2002. Projected increases in the cost of prescription drugs alone would have accounted for about a 3 percent increase in premiums for 2007, down from a peak of about 5 percent for 2002. Enrollee demographics—particularly the aging of the enrollee population—were projected to have less of an effect on premium growth. Projected decreases in the costs associated with other factors, including benefit changes that resulted in less generous coverage and enrollee choice of plans—typically the migration to lower cost plans—generally helped offset average premium increases for 2000 through 2007. Officials we interviewed from most of the plans stated that OPM monitored their plans’ reserve levels and worked closely with them to build up or draw down reserve funds gradually to avoid wide fluctuations in premium growth from year to year. Projected additions to reserves nominally increased premium growth—by less than 1 percent—from 2000 through 2005. However, projected withdrawals from reserves helped offset the effect of increases by about 2 percent for 2006 and 5 percent for 2007. (See fig. 4.) According to OPM, increases in the actual cost and utilization of services in 2006 were lower than projected for that year, and therefore the projected withdrawals from reserves were not made in 2006. 
Because of the resulting higher reserve balances, plans and OPM projected even larger reserve withdrawals for 2007. Detailed data on total claims expenditures and expenditures by service category actually incurred were available for five large FEHBP plans. These data showed that total expenditures per enrollee increased an average of 25 percent from 2003 to 2005. Most of this increase in total expenditures per enrollee was explained by expenditures on prescription drugs and on hospital outpatient services. (See table 2.) Officials we interviewed from several plans stated that the retiree drug subsidy would have had a small effect on premium growth because of two factors. First, drug costs for Medicare beneficiaries enrolled in these plans accounted for a small proportion of total expenses for all enrollees, and the subsidy would have helped offset less than one-third of these expenses. Second, because the same plans offered to currently employed enrollees were offered to retirees, the effect of the subsidy would have been diluted when spread across all enrollees. However, officials we interviewed from two large plans with high shares of elderly enrollees stated that the subsidy would have lowered premium growth for their plans. Officials from one of these plans estimated that 2006 premium growth could have been 3.5 to 4 percentage points lower. Our analysis of the potential effect of the retiree drug subsidy on all plans in FEHBP showed that had OPM applied for the subsidy and used it to offset premium growth, the subsidy would have lowered the 2006 premium growth by 2.6 percentage points from 6.4 percent to about 4 percent. The reduction in premium growth would have been a onetime reduction for 2006. Absent the drug subsidy, FEHBP premiums in the future would likely be more sensitive to drug cost increases than would be premiums of other large plans that received the retiree drug subsidy for Medicare beneficiaries. 
Officials from OPM explained that there was no need to apply for the subsidy because its intent was to encourage employers to continue offering prescription drug coverage to enrolled Medicare beneficiaries, which all FEHBP plans were already doing. As such, the government would be subsidizing itself to provide coverage for prescription drugs to Medicare-eligible federal employees and retirees. The potential effect of the subsidy on premium growth would also have been uncertain because the statute did not require employers to use the subsidy to mitigate premium growth. Officials we interviewed from most of the plans with higher-than-average premium growth stated that increases in the cost and utilization of services as well as a high share of elderly enrollees and early retirees were key drivers of premium growth. Our analysis of these plans' financial and enrollee demographic data showed that these plans experienced faster-than-average growth in the cost and utilization of services and faster-than-average growth in their share of elderly enrollees and retirees in recent years. Officials we interviewed from most of the plans with lower-than-average premium growth cited adjustments made for previously overestimated projections of cost growth. Officials also cited benefit changes that resulted in less generous coverage for prescription drugs. Our analysis of financial data provided by two of these plans showed that the increase in their per-enrollee expenditures for prescription drugs was significantly lower than average in recent years. In addition, our analysis of enrollment data found that these plans experienced greater declines than average in their share of aging enrollees. Officials we interviewed from most of the plans with higher-than-average premium growth cited large increases in the actual cost and utilization of services as one of the key cost drivers of premium growth. 
Our analysis of financial data provided by six of these plans showed that the average increase in total expenditures per enrollee from 2003 through 2005 was about 40 percent, compared with the average of 25 percent for the five large FEHBP plans. Although enrollee demographics were projected to have a small effect on premium growth in the average FEHBP plan for 2006, change in enrollee demographics was cited as a key cost factor for most plans with higher-than-average premium growth. Officials we interviewed from five of these plans stated that an aging population and higher shares of early retirees were factors driving premium growth for their plans. For example, officials from two plans cited a high concentration of elderly enrollees in their respective service areas of southern New Jersey and Pennsylvania, while officials from another plan cited an aging population in its service area of San Antonio, Texas. Our comparison of the demographic characteristics of the eight plans with higher-than-average premium growth with those of all FEHBP plans from 2001 through 2005 supports the officials' statements that unique demographic profiles contributed to higher premium increases. (See table 3.) Officials we interviewed from most of the plans with lower-than-average premium growth for their plans in 2006 cited adjustments for previously overestimated projections of cost growth. Officials from two of these plans stated that projections for a new low-option plan they had recently introduced were pegged high because of concerns about potential migration of high-cost enrollees from their high-option plan. The actual cost increases of enrollees in the low-option plan in 2004 (the basis for 2006 rates) turned out to be lower than projected. Officials from two other plans said that the projected cost growth of 14 percent to 20 percent in 2004 (the basis for 2006 rates) for those plans was much higher than the actual cost growth in 2006 of about 5 percent to 8 percent. 
Officials we interviewed from three plans with lower-than-average growth cited lower-than-anticipated rates of increase in prescription drug costs caused by benefit changes that resulted in less generous coverage to explain low rates of premium growth for their plans. Our analysis of financial data provided by two of these plans showed that per-enrollee expenditures for prescription drugs increased by 3 percent for one plan and 13 percent for the other from 2003 through 2005, compared with 30 percent for the average of the five large FEHBP plans. The six plans with lower-than-average premium growth also had greater declines in their share of elderly enrollees compared with all plans from 2001 through 2005. (See table 4.) We received comments on a draft of this report from OPM (see app. II). OPM said the draft report confirms that growth in average FEHBP premiums has slowed and has been lower than that of other large employer purchasers for the last several years. Regarding the projected withdrawals of reserves for 2007, OPM said that the actual drawdown could be lower if the actual increase in the cost and utilization of services in 2007 is less than projected. We agree this could occur, and as we noted in the draft report and as OPM said in its comments, the projected withdrawals of reserves for 2006 were ultimately not made because of lower than expected increases in the cost and utilization of services in that year. Regarding the manner in which premiums are set, OPM said that rate negotiations between OPM and the plans are guided by projections of future costs that are based on a retrospective analysis of actual costs, and that adjustments to the reserve accounts of most plans are made when actual costs differ from the projections. OPM said that, as a result, these reserve adjustments help stabilize premium growth over time and ensure that premiums ultimately reflect actual cost increases. 
We agree with this characterization of the effect of reserve adjustments. Regarding our discussion of benefit changes that resulted in less generous coverage for prescription drugs, OPM said that some plans modified their prescription drug benefit to create incentives to use generic medications, and that this does not result in a less generous benefit. While we agree that plans can change benefits to encourage generic drug utilization without resulting in less generous coverage, officials from three of the six plans we interviewed with lower-than-average premium growth said that they made benefit changes that resulted in less generous coverage. OPM provided other comments describing aspects of FEHBP and provided technical comments that we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. At that time, we will send copies of this report to the Director of OPM and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7119 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Randy Dirosa, Assistant Director; Iola D’Souza; Menq-Tsong P. Juang; and Timothy Walker made key contributions to this report. To identify growth trends in the average Federal Employees Health Benefits Program (FEHBP) premiums and enrollee premium contributions, we analyzed trend data for 1994 through 2007 from the Office of Personnel Management (OPM). To identify the variation in premium trends across plans by plan characteristics, we analyzed detailed plan-level premium data and enrollment data for 2003 through 2006 from OPM. 
We examined the variation in premiums based on plan type—fee-for-service (FFS), health maintenance organization (HMO), and consumer-directed health plan (CDHP); plan option (high option, low option); geography (West, Midwest, South, Northeast); and share of retirees. To compare FEHBP premium trends with those of other purchasers, we obtained premium trend data for 1994 through 2007 from the California Public Employees' Retirement System (CalPERS)—the second largest public purchaser of employee health benefits after FEHBP—and from surveys of employer-sponsored health benefits conducted by KPMG Peat Marwick from 1993 through 1998 and by Kaiser Family Foundation/Health Research and Educational Trust (Kaiser/HRET) from 1999 through 2006. To identify factors contributing to average FEHBP premium growth trends for all plans, we obtained and analyzed OPM summary reports on the projected effects of various factors on premium growth for all FEHBP plans from 2000 through 2007. We analyzed more detailed data obtained individually from five large FFS plans on actual growth in per-enrollee expenditures by service category, including prescription drugs, hospital outpatient care, hospital inpatient care, and physician and other services, from 2003 through 2005. To examine the reasons for differing premium growth trends among FEHBP plans, we conducted interviews with officials from 14 plans with higher- or lower-than-average premium growth in either 2006 or the 3-year period from 2004 through 2006, and analyzed financial data provided by some of these plans. We limited our study sample to plans participating in FEHBP for at least 3 years and with at least 5,000 enrollees in 2005. Among these plans, we identified those with premium growth for 2006, or average annual growth for the 3-year period from 2004 through 2006, that was more than one standard deviation above or below the mean. Of the 23 plans meeting these criteria, we selected 14 plans. (See table 5.) 
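The sample-selection rule above, which keeps plans whose premium growth falls more than one standard deviation above or below the mean, can be sketched as follows. The plan names and growth rates here are hypothetical, used only to illustrate the rule:

```python
import statistics


def outlier_plans(growth_by_plan: dict) -> dict:
    """Return the plans whose premium growth lies more than one
    (sample) standard deviation above or below the mean growth rate,
    mirroring the selection rule described above.
    """
    rates = list(growth_by_plan.values())
    mean = statistics.mean(rates)
    sd = statistics.stdev(rates)  # sample standard deviation
    return {name: g for name, g in growth_by_plan.items()
            if abs(g - mean) > sd}


# Hypothetical premium growth rates (percent) for five illustrative plans:
sample = {"Plan A": 1.5, "Plan B": 2.0, "Plan C": 2.5,
          "Plan D": 15.5, "Plan E": -8.0}
print(outlier_plans(sample))  # Plans D and E fall outside one std. dev.
```

In practice the rule would be applied separately to single-year (2006) growth and to 3-year average annual growth, and a plan qualifying under either screen would enter the candidate pool.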
We analyzed aggregate data on the actual growth in per-enrollee expenditures by service category from 2003 through 2005 provided by officials from some of these plans and demographic enrollment data from 2001 through 2005 from OPM. We also explored with officials from OPM and the selected plans the potential effect of the retiree drug subsidy on premium growth had OPM applied for the subsidy and used it to offset premiums. To estimate the effect the subsidy would have had on average premium growth, we first calculated the total annual amount of the subsidy that would have been available for all Medicare-eligible beneficiaries in FEHBP using 2006 enrollment data and an estimate by the Centers for Medicare & Medicaid Services of the average annual subsidy per Medicare beneficiary in 2006 (about $670). We then divided this amount by total annual premiums for all FEHBP enrollees in 2005. We did not independently verify the data from OPM, the selected FEHBP plans, CalPERS, or the Kaiser/HRET surveys. We performed certain quality checks, such as determining consistency where similar data were provided by OPM and the plans. We collected and evaluated information from OPM regarding collection, storage, and maintenance of the data. We reviewed all data for reasonableness and consistency and determined that these data were sufficiently reliable for our purposes. We conducted our work from January 2006 through December 2006 in accordance with generally accepted government auditing standards.
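The subsidy estimate described above reduces to a simple ratio: the total annual subsidy dollars that would have been available, divided by total annual premiums. A minimal sketch follows; the enrollment and premium totals are placeholder assumptions for illustration—only the roughly $670 average annual subsidy per Medicare beneficiary comes from the text.

```python
# Illustrative sketch of the retiree drug subsidy estimate.
# Only AVG_SUBSIDY_PER_BENEFICIARY ($670, the CMS 2006 estimate cited in the
# text) is from the source; the other two inputs are hypothetical placeholders.
AVG_SUBSIDY_PER_BENEFICIARY = 670           # dollars per Medicare beneficiary
medicare_eligible_enrollees = 1_900_000     # hypothetical 2006 count
total_annual_premiums = 33_000_000_000      # hypothetical 2005 total, dollars

# Total subsidy available for all Medicare-eligible FEHBP beneficiaries
total_subsidy = medicare_eligible_enrollees * AVG_SUBSIDY_PER_BENEFICIARY

# Potential offset to average premium growth, in percentage points
offset_pct = 100 * total_subsidy / total_annual_premiums
print(f"Estimated premium offset: {offset_pct:.1f} percentage points")
```

With real enrollment and premium data in place of the placeholders, the same two-line calculation yields the percentage-point reduction in average premium growth the subsidy could have provided.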
Average health insurance premiums for plans participating in the Federal Employees Health Benefits Program (FEHBP) have risen each year since 1997. These growing premiums result in higher costs to the federal government and plan enrollees. The Office of Personnel Management (OPM) oversees FEHBP, negotiating benefits and premiums and administering reserve accounts that may be used to cover plans' unanticipated spending increases. GAO was asked to evaluate the nature and extent of premium increases. To do this, GAO examined (1) FEHBP premium trends compared with those of other purchasers, (2) factors contributing to average premium growth across all FEHBP plans, and (3) factors contributing to differing trends among selected FEHBP plans. GAO reviewed data provided by OPM relating to FEHBP premiums and factors contributing to premium growth. For comparison purposes, GAO also examined premium data from the California Public Employees' Retirement System (CalPERS) and surveys of other public and private employers. GAO also interviewed officials from OPM and eight FEHBP plans with premium growth that was higher than average, and six FEHBP plans with premium growth that was lower than average to discuss premium growth trends and the variation in growth across plans. Growth in FEHBP premiums recently slowed, from a peak of 12.9 percent for 2002 to 1.8 percent for 2007. During this period FEHBP premium growth was generally slower than for other purchasers. Premium growth rates for the 10 largest FEHBP plans by enrollment ranged from 0 percent to 15.5 percent in 2007, while growth rates among smaller FEHBP plans varied more widely. The growth in average enrollee premium contributions--the share of total premiums paid by enrollees--was similar to the growth in total FEHBP premiums from 1994 through 2006, and was generally comparable with recent growth in enrollee premium contributions for surveyed employers. 
Projected increases in the cost and utilization of health care services and in the cost of prescription drugs accounted for most of the average premium growth for 2000 through 2007. Other factors, including benefit changes resulting in less generous coverage and enrollee migration to lower-cost plans, were projected to slightly offset premium increases. In 2006 and 2007, projected withdrawals from reserves significantly helped offset the effect of other factors on premium growth. Officials from most of the plans with higher-than-average premium growth cited increases in the cost and utilization of services as well as a high share of elderly enrollees and early retirees. GAO's analysis of financial and enrollment data found that these plans generally experienced faster-than-average growth in the cost and utilization of services and faster-than-average growth in their share of elderly enrollees and retirees in recent years. Officials from most of the plans with lower-than-average premium growth cited adjustments for previously overestimated projections of cost growth. Officials also cited benefit changes that resulted in less generous coverage for prescription drugs. GAO's analysis of financial data provided by these plans found that their increase in per-enrollee expenditures for prescription drugs was significantly lower than average in recent years. In commenting on a draft of this report, OPM said the draft confirms that growth in average FEHBP premiums has slowed and has been lower than that of other large employer purchasers for the last several years.
The company formation process is governed and executed at the state level. Formation documents are generally filed with a secretary of state’s office and are commonly called articles of incorporation (for corporations) or articles of organization (for LLCs). These documents, which set out the basic terms governing the company’s existence, are matters of public record. According to our survey results, in 2004, 869,693 corporations and 1,068,989 LLCs were formed in the United States. See appendix I for information on the numbers of corporations and LLCs formed in each state. Appendix II includes information on states’ company formation processing times and fees. Although specific requirements vary, states require minimal information on formation documents. Generally, the formation documents, or articles, must give the company’s name, an address where official notices can be sent, share information (for corporations), and the names and signatures of the persons incorporating. States may also ask for a statement on the purpose of the company and a principal office address on the articles. Most states also require companies to file periodic reports to remain active. These reports are generally filed either annually or biennially. Although individuals may submit their own company filing documents, third-party agents may also play a role in the process. Third-party agents include both company formation agents, who file the required documents with a state on behalf of individuals or their representatives, and agents for service of process, who receive legal and tax documents on behalf of a company. Agents can be individuals or companies operating in one state or nationally. They may have only a few clients or thousands of clients. As a result, the incorporator or organizer listed on a company’s formation documents may be the agent who is forming the company on behalf of the owners or an individual affiliated with the company being formed. 
Businesses may be incorporated or unincorporated. A corporation is a legal entity that exists independently of its shareholders—that is, its owners or investors—and that limits their liability for business debts and obligations and protects their personal assets. Management may include officers—chief executive officers, secretaries, and treasurers—who help direct a corporation’s day-to-day operations. LLCs are unincorporated businesses whose members are considered the owners, and either members acting as managers or outside managers hired by the company take responsibility for making decisions. Beneficial owners of corporations or LLCs are the individuals who ultimately own and control the business entity. Our survey revealed that most states do not collect information on company ownership (see fig. 1). No state collects ownership information on formation documents for corporations, and only four—Alabama, Arizona, Connecticut, and New Hampshire—request some ownership information on LLCs. Most states require corporations and LLCs to file periodic reports, but these reports generally do not include ownership information. Three states (Alaska, Arizona, and Maine) require in certain cases the name of at least one owner on periodic reports from corporations, and five states require companies to list at least one member on periodic reports from LLCs. However, if an LLC has members that are acting as managers of the company (managing members), ownership information may be available on the formation documents or periodic reports in states that require manager information to be listed. States usually do not require information on company management in the formation documents, but most states require this information on periodic reports (see fig. 2). Less than half of the states require the names and addresses of company management on company formation documents. 
Two states require some information on officers on company formation documents, and 10 require some information on directors. However, individuals named as directors may be nominee directors who act only as instructed by the beneficial owner. For LLCs, 19 states require some information on the managers or managing members on formation documents. Most states require the names and addresses of corporate officers and directors and of managers of LLCs on periodic reports. For corporations, 47 states require some information about the corporate officers, and 38 states require some information on directors on periodic reports. For LLCs, 28 states require some information about managers or managing members on the periodic reports. In addition to states, third-party agents may also have an opportunity to collect ownership or management information when a company is formed. Third-party agents we spoke with generally said that beyond contact information for billing the company and for forwarding legal and tax documents, they collect only the information states require for company formation documents or periodic reports. Several agents told us that they rarely collected information on ownership because the states do not require it. Further, one agent said it was not necessary to doing the job. In general, agents said that they also collected only the management information that states required. However, if they were serving as the incorporator, agents would need to collect the names of managers in order to officially pass on the authority to conduct business to the new company principals. A few agents said that even when they collected information on company ownership and management, they might not keep records of it, in part because company documents filed with the state are part of the public record. One agent said that he did not need to bear the additional cost of storing such information. 
According to our survey, states do not verify the identities of the individuals listed on the formation documents or screen names using federal criminal records or watch lists. Nearly all of the states reported that they review filings for the required information, fees, and availability of the proposed company name. Many states also reported that they review filings to ensure compliance with state laws, and a few states reported that they direct staff to look for suspicious activity or fraud in company filings. However, most states reported they did not have the investigative authority to take action if they identified suspicious information. For example, if something appeared especially unusual, two state officials said that they referred the issue to state or local law enforcement or the Department of Homeland Security. While states do not verify the identities of individuals listed on company formation documents, 10 states reported having the authority to assess penalties for providing false information on their company formation documents. One state official provided an example of a case in which state law enforcement officials charged two individuals with, among other things, perjury for providing false information about an agent on articles of incorporation. In addition, our survey shows that states do not require agents to verify the information collected from their clients. Most states have basic requirements for agents for service of process, but overall states exercise limited oversight of agents. Most states indicated on our survey that agents for service of process must meet certain requirements, such as having a physical address in the state or being a state resident. However, a couple of states have registration requirements for agents operating within their boundaries. Under a law that was enacted after some agents gave false addresses for their offices, Wyoming requires agents serving more than five corporations to register with the state annually. 
California law requires any corporation serving as an agent for service of process to file a certificate with the Secretary of State’s office and to list the California address where process can be served and the name of each employee authorized to accept process. Delaware has a contractual relationship with approximately 40 agents that allows them, for a fee and under set guidelines, access to the state’s database to enter or find company information. Agents we interviewed said that since states do not require them to, they generally do not verify or screen names against watch lists or require picture identification of company officials. One agent said that his firm generally relied on the information that it received and in general did not feel a need to question the information. However, we found a few exceptions. One agent collected a federal tax identification number (TIN), company ownership information, and individual identification and citizenship status from clients from unfamiliar countries. Another agent we interviewed required detailed information on company principals, certified copies of their passports, proof of address, and a reference letter from a bank from certain international clients. A few agents said that they used the Office of Foreign Assets Control (OFAC) list to screen names on formation documents or on other documents required for other services provided by their company. The agents said they took these additional steps for different reasons. One agent wanted to protect the agency, while other agents said that the Delaware Secretary of State encouraged using the OFAC list to screen names. One agent felt the additional requirements were not burdensome. However, some agents found the OFAC list difficult to use and saw using it as a potentially costly endeavor. OFAC officials told us that they had also heard similar concerns from agents. 
Law enforcement officials and others have indicated that shell companies have become popular tools for facilitating criminal activity, particularly laundering money. A December 2005 report issued by several federal agencies, including the Departments of Homeland Security, Justice, and the Treasury, analyzed the role shell companies may play in laundering money in the United States. Shell companies can aid criminals in conducting illegal activities by providing an appearance of legitimacy and may provide access to the U.S. financial system through correspondent bank accounts. For example, the Financial Crimes Enforcement Network (FinCEN) found in a December 2005 enforcement action that the New York branch of ABN AMRO, a banking institution, did not have an adequate anti-money laundering program and had failed to monitor approximately 20,000 funds transfers—with an aggregate value of approximately $3.2 billion—involving the accounts of U.S. shell companies and institutions in Russia or other former republics of the Soviet Union. But determining the extent of the criminal use of U.S. shell companies is difficult. Shell companies are not tracked by law enforcement agencies because simply forming them is not a crime. However, law enforcement officials told us that information they had seen suggested that U.S. shell companies were increasingly being used for illicit activities. For example, FinCEN officials told us they had seen many suspicious activity reports (SAR) filed by financial institutions that potentially implicated U.S. shell companies. One report cited hundreds of SARs filed between April 1996 and January 2006 that involved shell companies and resulted in almost $4 billion in activity. During investigations of suspicious activity, law enforcement officials may obtain some company information from agents or states, either from states' Internet sites or by requesting copies of filings.
According to some law enforcement officials we spoke with, information on the forms, such as the names and addresses of officers and directors, might provide productive leads, even without explicit ownership information. Law enforcement officials also sometimes obtain additional company information, such as contact addresses and methods of payment, from agents, although one state law enforcement official said the agents might tell their clients about the investigation. In some cases, the actual owners may include their personal information on official documents. For example, in an IRS case a man in Texas used numerous identities and corporations formed in Delaware, Nevada, and Texas to sell or license a new software program to investment groups. He received about $12.5 million from investors but never delivered the product to any of the groups. The man used the corporations to hide his identity, provide a legitimate face to his fraudulent activities, and open bank accounts to launder the investors’ money. IRS investigators found from state documents that he had incorporated the companies himself and often included his coconspirators as officers or directors. The man was sentenced to 40 years in prison. In other cases, law enforcement officials may have evidence of a crime but may not be able to connect an individual to the criminal action without ownership information. For example, an Arizona law enforcement official who was helping to investigate an environmental spill that caused $800,000 in damage said that investigators could not prove who was responsible for the damage because the suspect had created a complicated corporate structure involving multiple company formations. This case was not prosecuted because investigators could not identify critical ownership information. Most of the officials we interviewed said they had also worked on cases that reached dead ends because of the lack of ownership information. 
States and agents recognized the positive impacts of collecting ownership information when companies are formed. As previously noted, law enforcement investigations could benefit by knowing who owns and controls a company. In addition, a few state officials said that they could be more responsive to consumer demands for this information if it were on file. One agent suggested that requiring agents to collect more ownership information could discourage dishonest individuals from using agents and could reduce the number of unscrupulous individuals in the industry. However, state officials and agents we surveyed and interviewed indicated that collecting and verifying ownership information could have negative effects. These could include:

Increased time, costs, and workloads for state offices and agents: Many states reported that the time needed to review and approve company formations would increase and said that states would incur costs for modifying forms and data systems. Further, officials said that states did not have the resources and staff did not have the skills to verify the information submitted on formation documents.

Derailed business dealings: A few state and some private sector officials noted that an increase in the time and costs involved in forming a company might reduce the number of companies formed, particularly small businesses. One state official commented that such requirements would create a burden for honest business people but would not deter criminals.

Lost state revenue: Some state officials and others we interviewed felt that if all state information requirements were not uniform, the states with the most stringent requirements could lose business to other states or even countries, reducing state revenues.

Lost business for agents: Individuals might be more likely to form their own companies and serve as their own agents.
Agents also indicated that it might be difficult to collect and verify information on company owners because they often were in contact only with law firms and not company officials during the formation process. In addition, some state officials noted that any change in requirements for obtaining or verifying information, or the fees charged for company formation, would require state legislatures to pass new legislation and grant company formation offices new authority. Further, state and private sector officials pointed out that ownership information collected at formation or on periodic reports might not be complete or up to date because it could change frequently. Finally, as noted, some states do not require periodic reports, and law enforcement officials noted that a shell company being used for illicit purposes might not file required periodic reports in any case. Law enforcement officials told us that many companies under investigation for suspected criminal activities had been dissolved by the states in which they were formed for failing to submit periodic reports. In addition, since a company can be owned by another company, the name provided may not be that of an individual but of another company. We also found that state officials, agents, and other industry experts felt that the need to access information on companies must be weighed against privacy issues. Company owners may want to maintain their privacy, in part because state statutes have traditionally permitted this privacy as a way to shield owners from lawsuits against them in their personal capacity. Some business owners may also seek to protect personal assets through corporations and LLCs. One state law enforcement official also noted that if more information were easily available, criminals and con artists could take advantage of it and target companies for scams.
Although business owners might be more willing to provide ownership information if it would not be disclosed in the public record, some state officials we interviewed said that since all information filed with their office is a matter of public record, keeping some information private would require new legislative authority. The officials added that storing new information would be a challenge because their data systems were not set up to maintain confidential information. However, a few states described procedures in which certain information could be redacted from the public record or from online databases. In our review, state officials, agents, and other experts in the field identified three other potential sources of company ownership information, but each of these sources also has drawbacks. First, company ownership information may be available in internal company documents. According to our review of state statutes, all states require corporations to maintain internal company documents such as shareholder lists. Also, according to industry experts, LLCs usually prepare and maintain operating agreements as well. These documents are generally not public records, but law enforcement officials can subpoena them to obtain ownership information. However, accessing these lists may be problematic, and the documents themselves might not be accurate and might not reveal the true beneficial owners of a company. In some cases, the documents may not even exist. For example, law enforcement officials said that shell companies may not prepare these documents and that U.S. officials may not have access to them if the company is located in another country. In addition, the shareholder list could include nominee shareholders and may not reflect any changes in shareholders. In states that allow bearer shares, companies may not even list the names of the shareholders.
Finally, law enforcement officials may not want to request these documents in order to avoid tipping off a company about an investigation. Second, we were told that financial institutions may have ownership information on some companies. The Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001 (USA PATRIOT Act) established minimum standards for financial institutions to follow when verifying the identity of their customers. For customers that are companies, this information includes the name of the company, its physical address (for instance, its principal place of business), and an identifying number such as the tax identification number. In addition, financial institutions must also develop risk-based procedures for verifying the identity of each customer. However, according to financial services industry representatives, conducting due diligence on a company absorbs time and resources, could be an added burden to an industry that is already subject to numerous regulations, and may result in losing a customer. Industry representatives also noted that ownership information might change after the account was opened and that not all companies open bank or brokerage accounts. Finally, correspondent accounts could create opportunities to hide the identities of the account holders from the banks themselves. Finally, the Internal Revenue Service (IRS) was mentioned as another potential source of company ownership information for law enforcement, but IRS officials pointed to several limitations with their agency's data. First, IRS may not have information on all companies formed. For example, not all companies are required to submit tax forms that include company ownership information. Second, IRS officials reported that the ownership information the agency collects might not be complete or up to date and the owner listed could be another company.
Third, law enforcement officials could have difficulty accessing IRS taxpayer information, since access by federal and state law enforcement agencies outside of IRS investigations is restricted by law. IRS officials commented that collecting additional ownership and management information on IRS documents would provide IRS investigators with more detail, but their ability to collect and verify such information would depend on the availability of resources. In preparing our April 2006 report, we encountered a variety of legitimate concerns about the merits of collecting ownership information on companies formed in the United States. On the one hand, federal law enforcement agencies were concerned about the existing lack of information, because criminals can easily use shell companies to mask the identities of those engaged in illegal activities. From a law enforcement perspective, having more information on company ownership would make using shell companies for illicit activities harder, give investigators more information to use in pursuing the actual owners, and could improve the integrity of the company formation process in the United States. On the other hand, states and agents were concerned about increased costs, potential revenue losses, and owners’ privacy if information requirements were increased. Collecting more information and approving applications would require more time and resources, possibly reducing the number of business startups and could be considered a threat to the current system, which values the protection of privacy and individuals’ personal assets. Any requirement that states, agents, or both collect more ownership information would need to balance these conflicting concerns and be uniformly applied in all U.S. jurisdictions. Otherwise, those wanting to set up shell companies for illicit activities could simply move to the jurisdiction that presented the fewest obstacles, undermining the intent of the requirement. Mr. 
Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other members of the committee may have at this time. For further information regarding this testimony, please contact me at (202) 512-8678 or jonesy@gao.gov. Individuals making contributions to this testimony include Kay Kuhlman, Assistant Director; Emily Chalmers; Jennifer DuBord; Marc Molino; Jill Naamane; and Linda Rego. Historically, the corporation has been the dominant business form, but recently the limited liability company (LLC) has become increasingly popular. According to our survey, 8,908,519 corporations and 3,781,875 LLCs were on file nationwide in 2004. That same year, a total of 869,693 corporations and 1,068,989 LLCs were formed. Figure 3 shows the number of corporations and LLCs formed in each state in 2004. Five states—California, Delaware, Florida, New York, and Texas—were responsible for 415,011 (47.7 percent) of the corporations and 310,904 (29.1 percent) of the LLCs. Florida was the top formation state for both corporations (170,207 formed) and LLCs (100,070) in 2004. New York had the largest number of corporations on file in 2004 (862,647) and Delaware the largest number of LLCs (273,252). Data from the International Association of Commercial Administrators (IACA) show that from 2001 to 2004, the number of LLCs formed increased rapidly—by 92.3 percent—although the number of corporations formed increased only 3.6 percent. Company formation and reporting documents can be submitted in person or by mail, and many states also accept filings by fax. Review and approval times can depend on how documents are submitted. For example, a District of Columbia official told us that a formation document submitted in person could be approved in 15 minutes, but a document that was mailed might not be approved for 10 to 15 days.
Most states reported that documents submitted in person or by mail were approved within 1 to 5 business days, although a few reported that the process took more than 10 days. Officials in Arizona, for example, told us that it typically took the office 60 days to approve formation documents because of the volume of filings the office received. In 36 states, company formation documents, reporting documents, or both can be submitted through electronic filing (fig. 4 shows the states that provide a Web site for filing formation documents or periodic reports). In addition, some officials indicated that they would like or were planning to offer electronic filing in the future. As shown in table 1, in many cases states charge the same or nearly the same fee for forming a corporation or an LLC. In others, such as Illinois, the fee is substantially different for the two business forms. We found that in two states, Nebraska and New Mexico, the fee for forming a corporation may fall into a range. In these cases, the actual fee charged depends on the number of shares the new corporation will have. The median company formation fee is $95, and fees for filing periodic reports range from $5 to $500. Thirty states reported offering expedited service for an additional fee. Of those, most responded that with expedited service, filings were approved either the same day or the day after an application was filed. Two states reported having several expedited service options. Nevada offers 24-hour expedited service for an additional $125 above the normal filing fees, 2-hour service for an extra $500, and 1-hour, or “while you wait,” service for an extra $1,000. Delaware offers same-day service for $100, next-day service for $50, 2-hour service for $500, and 1-hour service for $1,000.
Companies, which are the basis of most commercial activities in market-based economies, may be used for illicit as well as legitimate purposes. Because companies can be used to hide activities such as money laundering, some states have been criticized for requiring too little information about companies when they are formed, especially concerning owners. This testimony draws on GAO's April 2006 report Company Formations: Minimal Ownership Information Is Collected and Available (GAO-06-376), which addressed (1) the information states and other parties collect on companies, (2) law enforcement concerns about the role of companies in illicit activities and the information available on owners, and (3) the implications of collecting more ownership information. GAO surveyed all 50 states and the District of Columbia, reviewed state laws, and interviewed a variety of industry, law enforcement, and other government officials. Most states do not require ownership information at the time a company is formed or on the annual and biennial reports most corporations and limited liability companies (LLC) must file. Four of the 50 states and the District of Columbia require some information on members (owners) of LLCs. Some states require companies to list information on directors, officers, or managers, but these persons are not always owners. Nearly all states screen company filings for statutorily required information such as the company's name and an address where official notices can be sent, but no states verify the identities of company officials. Third-party agents may submit formation documents for a company but usually collect only billing and statutorily required information and rarely verify it. Federal law enforcement officials are concerned that criminals are increasingly using U.S. "shell" companies--companies with generally no operations--to conceal their identities and illicit activities. 
Though the magnitude of the problem is hard to measure, officials said that such companies are increasingly involved in criminal investigations at home and abroad. The information states collect on companies has been helpful in some cases, as names on the documents can generate additional leads. But some officials said that available information was limited and that they had closed cases because the owners of a company under investigation could not be identified. State officials and agents said that collecting company ownership information could be problematic. Some noted that collecting such information could increase the cost and time involved in approving company formations. A few states and agents said that they might lose business to other states, countries, or agents that had less stringent requirements. Finally, officials and agents were concerned about compromising individuals' privacy, as information on company filings that had historically been protected would become part of the public record.
In January 2011, the Secretary of Defense announced that the Army’s end strength would be reduced by 27,000 active duty military personnel beginning in 2015. According to Army officials, initial discussions regarding how these reductions should be made began as part of the 2014-2018 Total Army Analysis, but no decisions on BCT inactivations in the United States were made in 2011. As part of the early 2012 decision to further reduce its active component end-strength to 490,000, the Army determined that it would inactivate at least 8 of its 45 active component BCTs. According to the Army, it began more extensive analysis of its BCT organizational design, which included a series of vignettes addressing the full range of Army missions, simulated combat that examined multiple organizational options, and a strategic analysis focused on the ability of the Army to support force demands in plausible future campaigns. As a result of this analysis, the Army determined that it would inactivate 12 BCTs and reorganize the remaining BCTs by adding a third maneuver battalion to armor and infantry brigades located in the continental United States, among other additional capabilities. Under the current organization design, the Army has 45 active component BCTs with 98 maneuver battalions. Under the new design, the Army will reduce the number of BCTs to 33, but will still maintain 95 maneuver battalions by focusing reductions on headquarters organizations. According to Army officials, this reorganization will allow the Army to maximize its combat power within its reduced end-strength. The Army will conduct these BCT reorganizations concurrently with BCT inactivations and has cited some benefits of doing so, such as cost efficiencies that can be achieved by using inactivating BCTs to provide personnel, equipment, and infrastructure to BCTs that are reorganizing at the same installations. These BCT inactivations and reorganizations will take place within the Army’s stationing process.
Stationing includes realignment or relocation, and those actions that determine the population at a particular installation. As such, it may involve activation or establishment, or inactivation or discontinuance, of force structure components at one or more military installations in support of operational requirements. The Army’s stationing process incorporates both a force structure component and an installation component. Army Regulation 5-10 on Stationing establishes the policy, procedures, and responsibilities for Army stationing actions that occur outside of the Base Realignment and Closure process. Under the regulation, the Army Deputy Chief of Staff for Operations and Plans serves as the Army Staff principal proponent for directing and monitoring stationing activities, but a range of other Army staff directorates and organizations also have responsibilities within the process. In addition to providing instruction on administrative procedures for obtaining approval of stationing actions, Army Regulation 5-10 provides a framework for planning stationing actions, to include studying and analyzing feasible stationing options. This framework allows for some flexibility in regard to the factors that should be considered as part of stationing decisions. For example, the regulation identifies 28 stationing planning factors, such as operational considerations, for planners to consider as they identify, analyze, and evaluate stationing options, but recognizes that some planning factors may have little relevance for certain stationing actions. In addition, environmental analysis is generally a necessary element of the Army’s stationing process and Army Regulation 5-10 directs that stationing proposals are to be evaluated for compliance with the National Environmental Policy Act of 1969. 
The Army’s military value analysis model is a possible input to the Army’s stationing process and has been a consideration in a number of prior stationing decisions, including the 2005 Base Realignment and Closure round. Based on tools used to analyze the military value of installations as part of the Base Realignment and Closure process, the Army developed a military value analysis model to support stationing decisions for six new BCTs that were established as part of the 2007 Grow the Army initiative, and has adapted that model for use in a number of other brigade stationing decisions since that time. The Army’s military value analysis model was developed by the Center for Army Analysis, which reports to the Army Deputy Chief of Staff for Programs, and is a decision analysis tool that is designed to rank-order installations based on attributes that the Army has identified as being operationally important to the type of unit in question for each stationing decision. The Army’s BCT stationing decision regarding the installations at which to inactivate BCTs was informed by a number of quantitative and qualitative analyses, including a military value analysis of installations, an environmental assessment, and a qualitative analysis of different stationing options. The Army also obtained community input through listening sessions held at installations around the country, a practice that is not currently required by Army guidance, but which may provide some benefits for future stationing decisions. In May 2012, the Army established a BCT Reorganization Operational Planning Team for the purpose of conducting, facilitating, and overseeing planning and analysis to determine how to best achieve the targeted BCT reductions.
This team was led by the Force Management directorate within the Army Staff and included stakeholders from across the Army Staff and other Army organizations, such as the Office of the Assistant Chief of Staff for Installation Management and Office of the Assistant Secretary of the Army for Installations, Energy, and Environment, among others. Guidance from the Secretary of the Army and Director of Army Force Management identifying a number of specific stationing factors to be considered when assessing stationing options as part of the BCT inactivation decision process was provided to the BCT Reorganization Operational Planning Team. These key stationing factors for the inactivation decision were initially established in an information paper approved by the Secretary of the Army in November 2011 when the Army was planning for the reduction of at least 8 BCTs and then further defined and modified to account for potential BCT reorganizations in 2013. The stationing factors included some factors to be considered as part of the analysis conducted using the Army’s military value analysis model and other stationing factors to be considered as part of analyses occurring outside of the military value analysis model. To inform its BCT inactivation decision, the Army used its military value analysis model to measure the relative value of installations based on the requirements of BCTs. Stationing factors considered under the model’s analysis are specific to an installation’s ability to support a BCT, such as maneuver land availability or housing. These factors, also known as model attributes, were quantified as part of the military value analysis model in order to rank order the installations under consideration according to their military value. The 2013 version of the model used to support the recent BCT inactivation decision scored the 15 installations with BCTs based on 16 attributes identified by the Army as being operationally important to a BCT. 
Each attribute within the model has a formula or categorical definition that measures a certain characteristic (see appendix III for a full list of the attributes used within the current model and their definitions). The results of the formulas and categorical ratings for each attribute are converted to 0-10 scores for each installation. The attributes are weighted within the model based on their operational importance and ease of change relative to each other. For example, maneuver land is an attribute within the model and is considered to be of high operational importance, but additional maneuver land is not easily attainable, so that attribute is weighted more heavily than an attribute such as quality of life facilities, which can be improved through Army investment. Installations receive a score for each attribute based on collected data and then individual attribute scores are weighted and summed to produce the installations’ overall military value scores. For the recent decision, the 15 installations were then rank-ordered, with higher rankings indicating greater military value for stationing BCTs. Army officials emphasized that the results of the model were used only as a starting point for further analysis and were useful in comparing installations, but that the model cannot account for all of the factors that need to be considered in a complex decision, such as strategic considerations. The Army also identified other stationing factors, which were to be considered outside of the military value analysis model as part of a qualitative assessment of stationing options. These factors address issues beyond a particular installation’s capabilities and infrastructure that are not accounted for within the model, such as strategic considerations or immediate impacts on readiness. Table 1 shows the key factors identified for consideration outside of the military value analysis model as part of the Army decision. 
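The scoring mechanics described above, in which each installation receives 0-10 scores on each attribute, the scores are weighted by operational importance and ease of change, and the weighted scores are summed to produce an overall military value used for rank ordering, amount to a weighted additive value model. The short sketch below illustrates that general technique only; the attribute names, weights, and installation data are hypothetical and are not the Army's actual 16 attributes or values:

```python
# Illustrative weighted additive scoring, as described in the text.
# All attribute names, weights, and scores below are hypothetical.

def military_value_ranking(installations, weights):
    """Weight each installation's 0-10 attribute scores, sum them into an
    overall score, and return installations ranked highest to lowest."""
    totals = {
        name: sum(weights[attr] * score for attr, score in scores.items())
        for name, scores in installations.items()
    }
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

# Hypothetical weights: maneuver land is hard to change, so it is
# weighted more heavily than quality-of-life facilities.
weights = {"maneuver_land": 0.5, "housing": 0.3, "quality_of_life": 0.2}

# Hypothetical 0-10 attribute scores for two notional installations.
installations = {
    "Installation A": {"maneuver_land": 9, "housing": 5, "quality_of_life": 6},
    "Installation B": {"maneuver_land": 4, "housing": 8, "quality_of_life": 9},
}

ranking = military_value_ranking(installations, weights)
```

In this notional example, Installation A ranks first despite weaker quality-of-life scores, because the heavily weighted maneuver land attribute dominates the sum, mirroring the weighting rationale the report describes.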
While some of the additional factors, such as strategic considerations, were addressed primarily through the qualitative assessment of stationing options, other factors required additional analysis, such as environmental analysis. To comply with environmental regulations and to address one of the stationing factors identified for consideration, the Army conducted a programmatic environmental assessment of the 21 installations and their associated maneuver training areas with the potential to gain or lose 1,000 or more military and civilian personnel due to the planned force reductions and force structure changes—the 15 installations where BCTs are currently stationed and 6 other installations that support major training schools or Combat Training Centers—to identify potential environmental and socioeconomic impacts of planned force reductions. The Army found that no significant environmental impacts were expected as the result of its proposed actions and, as a result, does not anticipate preparing a more detailed programmatic environmental impact statement related to its decisions. Army officials said that additional site-specific environmental analysis may be necessary once the force structure changes at affected installations have been finalized, particularly in instances where installations may experience some growth or where the types of units stationed at an installation may change, but this has not yet been determined. In particular, the Army did not assess the environmental impacts at Fort Benning of restructuring under the alternative that could lead to some degree of force growth as part of the programmatic environmental assessment because, as the assessment stated, there would not be a situation where Fort Benning would see a net increase in soldiers overall due to its lack of sufficient unrestricted maneuver land at that time to support the training needs of additional maneuver units. 
The Army has since announced that it will be retaining the BCT at Fort Benning, and reorganizing it by adding an additional maneuver battalion and other capabilities. An Army official said that Fort Benning had taken steps to acquire additional maneuver land, both to mitigate the installation’s training land limitations and in response to a Jeopardy Biological Opinion from the U.S. Fish and Wildlife Service, prior to the BCT inactivation decision process and related programmatic environmental assessment, but the acquisition was put on hold pending Army force structure and budgetary decisions. Army officials said that the Army still has some decisions to make in regard to the BCT stationed at Fort Benning and the extent to which additional environmental analysis or other actions will be required at the installation to mitigate challenges related to the lack of maneuver land. The programmatic environmental assessment found that potentially significant socioeconomic impacts could result at some installations due to the proposed force reductions. In estimating these impacts, the programmatic environmental assessment looked at the socioeconomic impacts of the maximum possible reductions that could occur at the installations, with an estimated loss of up to 8,000 military and civilian personnel at some installations. However, because the Army will be using units and personnel from inactivating BCTs to reorganize the remaining BCTs at installations where possible, the Army projects that the population losses at the installations that are losing BCTs, and thus the projected socioeconomic impacts, will not be as large as the estimates analyzed by the programmatic environmental assessment. Prior to finalizing its analysis related to the environment, the Army provided opportunity for public comment on the draft Finding of No Significant Impact and the programmatic environmental assessment.
A 30-day comment period is required by Army regulation, but the Army then voluntarily extended it for an additional 30 days at the request of some communities in order to encourage maximum stakeholder participation. Incorporating the results of the aforementioned programmatic environmental assessment and the military value analysis, as well as other stationing factors considered outside of the military value analysis as described above, the BCT Reorganization Operational Planning Team developed and assessed 10 potential stationing options. Each option was developed to focus on a particular consideration: some focused on the identified key stationing factors, some on other considerations such as impacts on training, and some related directly to the outcomes of the programmatic environmental analysis and the military value analysis. For example, two of the potential stationing options the Army assessed in its qualitative analysis of options were developed to identify those installations where BCT inactivation would result in minimal environmental and socioeconomic impacts, respectively, as identified through the programmatic environmental assessment. In addition, one of the potential stationing options the Army assessed considered inactivating BCTs solely according to the rank order of installations where they are currently stationed based on the results of the military value analysis (i.e., the BCTs would be identified for inactivation at the ten installations with the lowest military value scores under this option) prior to the consideration of other stationing factors. Other options the planning team considered placed primary emphasis on different stationing factors and then incorporated the results of the military value analysis model as a secondary consideration.
For example, one option selected installations for BCT inactivation that would result in the lowest overall military construction costs and then, once costs no longer distinguished between the remaining installations, inactivated BCTs at installations based on their military value analysis training rankings. Another option was based on retaining BCTs at installations that would best support the strategic realignment of forces to the Pacific and then, once those strategic considerations were addressed, inactivated BCTs at installations based on their military value analysis rankings. Once the BCT Reorganization Operational Planning Team developed the 10 different options, the team analyzed each option and identified advantages and disadvantages based on the Army’s stationing factors and other considerations, such as impacts to training. For instance, the team found that the stationing option that inactivated BCTs at installations based on the results of the military value analysis would incur an estimated $684 million in military construction costs and did not appear to support the Defense Strategic Guidance regarding a realignment of forces toward the Pacific because it would inactivate one BCT each at installations in Hawaii, Alaska, and Washington. In general, the analysis of the advantages and disadvantages of the various stationing options found that reorganizing BCTs (i.e., adding additional units and personnel) at installations with multiple BCTs without first inactivating a BCT resulted in higher military construction costs because each BCT on the installation would experience growth with no loss of population to offset that growth. 
For example, according to the Army’s analysis, Fort Hood currently has five BCTs, and increasing the size of those BCTs without first inactivating a BCT would have resulted in approximately $243 million in military construction costs. In contrast, inactivating a BCT at an installation with multiple BCTs creates excess facilities capacity that allows the remaining units on the installation to reorganize while incurring lower estimated military construction costs. Further, according to the Army, using inactivating BCTs at these locations as the initial source of equipment and personnel for the remaining BCTs where possible is expected to reduce costs related to transportation of equipment and to mitigate some equipment and personnel readiness impacts. Additionally, an official from the office of the Assistant Secretary of the Army for Installations, Energy, and Environment said that inactivating BCTs at single-BCT installations is less efficient because it creates excess capacity without a readily available reutilization or disposal strategy for those facilities. Army officials involved in developing and assessing the stationing options said that minimizing military construction costs became a major emphasis of the analysis because it would be difficult to justify significant increases in military construction costs while reducing the size of the force. The Army’s military construction estimates were developed based primarily on data available from the Real Property Planning and Analysis System, which is an Army database that provides information on excess capacity at installations by aggregated gross square footage and facility type. An Army official said that these estimates are only rough order of magnitude estimates.
Other Army officials involved in developing the estimates said that more accurate military construction cost estimates, along with estimates of other potential base support costs, such as those relating to information technology or facilities sustainment, could have been provided had they been able to gather data directly from the installations, but non-disclosure agreements limited them to using data from the Real Property Planning and Analysis System. Military construction costs were the only costs specifically estimated for each stationing option, although the stationing options did include measures related to socioeconomic factors and basic allowance for housing. Other costs, such as equipment transportation for unit reorganization and for the training of certain units, were considered during the development and assessment of stationing options. According to Army officials, the Army is now developing detailed cost estimates for its selected stationing option as it completes the stationing documentation required under Army Regulation 5-10. Basic allowance for housing is a U.S.-based allowance prescribed by geographic duty location, pay grade, and dependency status; it provides uniformed Service members equitable housing compensation based on housing costs in local civilian housing markets within the United States when government quarters are not provided. The BCT Reorganization Operational Planning Team, which transitioned to a Council of Colonels, developed summaries of each of the stationing options with the identified advantages and disadvantages based on the stationing factors, military value analysis results, military construction cost estimates, and projected socioeconomic impacts. The stationing options were then briefed to a 1- and 2-star general officer steering committee, which voted on and screened out five of the stationing options.
The remaining five stationing options were then briefed to a 3-star general officer steering committee, which screened out two more stationing options. According to Army officials, the three recommended stationing options that emerged from the general officer steering committees were then submitted to Army senior leaders for a final determination. According to Army officials, all of the stationing options that were considered by the general officer steering committees were presented to senior leaders in case they wanted to revisit a stationing option that was previously screened out and senior leaders also had the ability to adjust the recommended stationing options based on their judgment. Figure 1 shows the key elements of the Army’s BCT inactivation decision process. Officials characterized the final decision by the Secretary of the Army as a hybrid of a couple of the stationing options and the Army stated that principal considerations in the inactivations were the Army’s ability to meet the requirements of the defense strategy, including a rebalancing of forces to the Pacific, minimizing military construction costs, and minimizing immediate readiness impacts. The military value of installations also played a role in the Army’s decision. While the various analyses conducted by the Army played an important role in informing decision makers about the implications of stationing options, according to Army officials, decision makers also utilized military judgment in making the final determination about where to inactivate BCTs. The Army’s BCT stationing decision regarding the installations at which to inactivate BCTs included steps to obtain community input, a practice that may provide benefits for future stationing decisions. 
The Army conducted listening sessions at installations that had more than 5,000 civilian and military personnel—the 15 installations considered as part of the BCT inactivation decision and 15 non-BCT installations—to give communities an opportunity to provide input to the Army’s force structure reduction decisions. These sessions had a range of attendees, such as local, state, and federal elected officials and civic and business leaders from across the individual communities. The primary focus of the listening sessions was to capture community input for Army leaders to consider as part of the Army’s overall analysis before any decisions were made, as well as to explain the process that the Army would be using to make its decisions. Army Force Management officials described holding community listening sessions as an atypical part of the stationing process. They added that the number and scope of public comments received as part of the programmatic environmental assessment indicated the depth and breadth of the public’s interest in the decision. Additionally, many installation communities specifically requested a public forum to discuss their concerns. An official from the Office of the Assistant Secretary of the Army for Installations, Energy, and Environment said that local communities might see the stationing decision as a potential loss of force structure similar to what could occur during the Base Realignment and Closure process and that it was important for the communities to have a public forum. This Army official stated that the listening sessions provided the communities with an opportunity to express their concerns about the projected socioeconomic impacts of the potential stationing decisions and inform the Army about local community investments made to support the installation and its military personnel. 
Input from the listening sessions was provided to the general officer steering committees and Army senior leaders as part of their consideration of potential stationing options during the BCT inactivation determination. For instance, Army officials said that officials participating in the general officer steering committees were briefed at a high level on the communities’ primary concerns and given in- depth reports and data the communities provided as the officials assessed the options to help them make informed decisions. In addition, senior leaders were provided with daily and weekly summaries of the listening sessions as they were taking place. These reports included information on the community concerns, media coverage, and key individuals attending each listening session, such as elected officials. Additionally, an Army official involved in developing a stationing option related to minimizing the socioeconomic impacts at installations said that he considered the input from the listening sessions as he developed the stationing option. Several Army officials told us that they believed that the listening sessions were a valuable tool to support the Army’s overall BCT inactivation decision process and could serve as good precedent for future stationing decisions. The stationing decision framework presented in Army Regulation 5-10 includes local community impact as one of the stationing factors that should be considered and requires a community impact analysis for stationing proposals with a strength change of 200 or more personnel unless a substantially similar analysis was already completed in the context of analyses under the National Environmental Policy Act. Analysis conducted under the National Environmental Policy Act and implementing regulations often includes opportunity for public comment. 
The community impact analysis required under Army Regulation 5-10 addresses the impacts of changes in population, personal income, tax base, and employment, and may include an examination of the effects on local businesses, schools, housing, and other public services and economic factors. It is based on analysis generated from an economic forecasting model. However, Army Regulation 5-10 does not provide guidance for or discuss obtaining community input in the stationing context as part of developing community impact analyses, such as when community listening sessions or similar efforts to obtain community input that are beyond the scope of environmental analyses should be considered as part of a particular stationing decision. An Army Force Management official said that he is currently developing proposed guidelines for when community listening sessions should be proposed for consideration by Army senior leaders in making stationing decisions, but was uncertain how such guidelines would be incorporated into the stationing process and related guidance. Principles for effective stakeholder participation have shown that effective stakeholder involvement includes actively soliciting stakeholder input from those potentially affected by a decision, involving stakeholders early and throughout the decision-making process, and fostering responsive, interactive communication between stakeholders and decision makers. Incorporating this type of communication with external stakeholders into its stationing process could help to ensure that the Army takes into account the views of external stakeholders and lead to potentially greater buy-in from local communities for Army stationing decisions. The Army has used its military value analysis model to inform several recent stationing decisions and Army officials expect that the model will be an enduring tool in stationing decisions. 
While the Army has taken steps to validate the model, it has not yet formalized the use of the military value analysis model within its stationing process by establishing guidance related to the use of the model, including guidance related to when the model should be used for stationing decisions or the processes through which key aspects of the model are reviewed, updated, and approved for each use of the model, and data collected and validated. Internal control standards state that appropriate policies and procedures are needed for an agency’s activities, and that relevant objectives and associated risks for each activity should be identified along with the control activities needed to address those risks. Key practices for successful transformations state that stakeholders in public sector transformations are concerned not only with the decisions made but also with the process used to make those decisions. The military value analysis model has been used to inform several stationing decisions since 2005, such as the stationing of additional BCTs related to the Grow the Army initiative in 2007 and the stationing of an aviation brigade in 2009. However, Army officials said that the Army has not formally established in guidance the circumstances under which the model would be used or how the model should be considered as a factor within the stationing process. Army Force Management officials said that the Army generally has used the model in stationing decisions with a large impact, potentially greater risk, and requirement for more rigorous analytical underpinning, such as in stationing decisions involving brigade combat teams. One official added that the Army will likely continue to utilize the model in future stationing decisions of a similar nature. Conversely, Army officials said that for stationing decisions related to smaller units, using the military value analysis model may be too labor intensive and thus may not be an appropriate use of resources.
In 2010, as part of a prior review GAO conducted of the military services’ stationing processes, Army Force Management officials told GAO that the Army would incorporate military value analysis into Army Regulation 5-10. However, as of our current review, it has not yet done so. The Army has documented its use of the military value analysis model in reports and briefings, but has not incorporated in its stationing regulation or other guidance any discussions of when the use of the model would be warranted and how the model should be used in stationing decisions. According to Army Force Management officials, the Army plans to include a discussion of the military value analysis model in a pamphlet it has been developing to supplement Army Regulation 5-10. Officials said that the pamphlet may include when a military value analysis versus other types of analyses should be conducted within the stationing process, but they have not yet determined what other information related to the model will be included. Further, the pamphlet has been in draft form for more than two years, and the timeframe for its approval and release has not yet been determined. Without formalizing the model within the Army’s stationing process, such as documenting in guidance the circumstances under which the model would be used to support stationing decisions and how the results of the model are considered as part of the broader stationing process, the model’s role within the stationing process may not be transparent and it may not be clearly known how the results of the model are used to inform decisions. 
The Army has taken steps to ensure the validity of the military value analysis model and its results, but has not established consistent formal processes to guide how (a) the attributes of the model should be reviewed and selected for use in the model, (b) attribute definitions should be reviewed to determine if they are still relevant for a particular decision and updated, and (c) data should be collected and validated. The Army also lacks guidance related to the level of input or approval that is necessary for changes to key elements of the model, and how non-contiguous training areas should be treated within the model. Internal control standards state that control activities, such as consistent processes or policies, can help to ensure that actions to mitigate risks are carried out. In addition, control activities are essential for achieving effective and efficient program results, and include the clear assignment of stakeholder responsibilities. The Army has taken steps to validate the military value analysis model and its results, such as involving key stakeholders and reviewing the relevancy of key elements of the model for specific stationing decisions. According to Center for Army Analysis officials, the military value analysis models used in the 2005 Base Realignment and Closure round and the 2007 Grow the Army initiative stationing decisions were thoroughly vetted within the Army and reviewed and approved by senior leaders. In addition, Center for Army Analysis officials said that the model used for the 2007 Grow the Army decisions was validated by the Naval Postgraduate School, and that each version of the model is briefed to an analytical review board within the Center for Army Analysis. In general, each use of the military value analysis model begins with an examination of the most recently used model.
For example, for the 2007 Grow the Army initiative stationing decisions, the Army began by reviewing the attributes used in the model that supported the 2005 Base Realignment and Closure round and selected and developed attributes that were specific to the requirements of stationing a BCT. The military value analysis models used in several BCT stationing decisions in recent years have been adapted from the model used for the 2007 Grow the Army initiative stationing decisions. In addition, each time the model has been used, the Center for Army Analysis has conducted a sensitivity analysis to determine the extent to which the weighting of the attributes changes the resulting military value rankings of the installations. According to Center for Army Analysis officials, this allows them to test the impact that any one attribute has on the results of the model and thus be able to identify how any potential flaws in the attribute definitions or data could affect the model and look for opportunities to mitigate them. In the most recent version of the model, the sensitivity analysis did not affect the 6 top-ranked installations or the bottom-ranked installation, which, according to Center for Army Analysis officials, indicates that the rankings of those installations within the model were not affected by any one attribute. Center for Army Analysis officials explained that rankings did change for certain other installations during the sensitivity analysis because there was little deviation in the model's scores for those installations. For example, 7 installations scored within 0.11 points of one another on a 10-point scale. Overall, for the 2013 version of the model used to support the BCT inactivation decision, only 1.27 points separated the top-ranked installation from the bottom-ranked installation, which, according to Army officials, indicates that all of the installations considered within the model have fairly comparable military value for supporting a BCT.
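The weighted-attribute scoring and one-at-a-time sensitivity analysis described above can be sketched in a few lines of code. This is an illustrative outline only: the attribute names, weights, and installation scores below are hypothetical, and the generic weighted-sum formula is an assumption, not the Army's actual model.

```python
# Illustrative sketch of a weighted-attribute scoring model with a simple
# one-at-a-time sensitivity analysis. All names, weights, and scores are
# hypothetical -- they are NOT the Army's actual attributes or data.

# Hypothetical attribute weights (summing to 1.0) and per-installation
# scores on a 10-point scale.
weights = {"maneuver_land": 0.4, "airspace": 0.35, "training_facilities": 0.25}

scores = {
    "Installation A": {"maneuver_land": 8.0, "airspace": 6.5, "training_facilities": 7.0},
    "Installation B": {"maneuver_land": 7.5, "airspace": 7.0, "training_facilities": 6.0},
    "Installation C": {"maneuver_land": 6.0, "airspace": 8.0, "training_facilities": 8.5},
}

def military_value(weights, attrs):
    """Weighted sum of attribute scores for one installation."""
    return sum(weights[a] * attrs[a] for a in weights)

def ranking(weights, scores):
    """Installations ordered from highest to lowest weighted score."""
    return sorted(scores, key=lambda inst: military_value(weights, scores[inst]), reverse=True)

baseline = ranking(weights, scores)

def perturb(weights, attr, delta):
    """Shift one attribute's weight and renormalize so weights still sum to 1."""
    w = dict(weights)
    w[attr] = max(0.0, w[attr] + delta)
    total = sum(w.values())
    return {a: v / total for a, v in w.items()}

# Perturb each weight up and down and check whether the ranking changes --
# the kind of test that reveals how dependent results are on any one attribute.
for attr in weights:
    for delta in (-0.1, 0.1):
        changed = ranking(perturb(weights, attr, delta), scores) != baseline
        print(f"{attr:20s} delta={delta:+.1f} ranking changed: {changed}")
```

As in the Army's analysis, closely bunched scores are the situations where perturbing a weight is most likely to reorder installations, while installations separated by larger margins keep their ranks.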
Several factors affect the need to review the model each time it is used. According to Center for Army Analysis officials, a new stationing decision may require different attributes to be included in the model because it may involve different types of units and installations or seek to address a different stationing scenario. The weighting of the attributes within the model may require review because the relative importance of specific attributes may change depending on those attributes’ importance to the new stationing decision. The definitions for existing attributes can also change over time depending on a number of factors, such as updates to Army policy, availability of data, technological advances, or the question the model seeks to address. While the model utilizes quantitative comparison, some aspects of the model are subjective. For example, decisions about which attributes to include within the model and the weight of those attributes are determined by stakeholders who utilize military judgment in deciding what attributes are important for the specific decision and their relative importance within the model. Additionally, the attribute definitions, including associated formulas and categorical ratings, are developed based upon subject matter expertise. Because of the subjectivity of some aspects of the model, its continued validity is largely dependent on the involvement of key stakeholders, such as subject matter experts and Army leaders. Additionally, due to the potential for changing uses of the model, Center for Army Analysis officials said that the model is reviewed by subject matter experts at the beginning of each use to determine if the attributes are still relevant for the stationing decision and if any aspects of the model should be changed, such as how the attributes are defined. 
The Army has not established consistent formal processes for (a) reviewing the attributes to determine which attributes should be used in the model, (b) reviewing and updating attribute definitions to determine if they are still relevant for a particular decision, and (c) collecting and validating the data for use in the model. We found that the Army took steps to review the relevancy of the attributes in developing the version of the model used to support the BCT inactivation decision, but we identified a couple of instances where further review of and updates to the attribute definitions could have been beneficial. Also, although Army Force Management issued direction related to data collection and validation for the version of the model used to support the BCT inactivations, we found some inconsistencies in how data were updated, which indicate that a consistent process formalized through established guidance could better ensure that expectations for stakeholders involved in data collection are clear and that the data are current. When force reductions were first announced in 2011, Army Force Management officials met with subject matter experts to identify attributes to be used in a new version of the military value analysis model to support stationing decisions related to these force reductions. The working group used the 14 attributes used in the 2010 version of the model as a baseline and began an initial effort to review the 40 attributes used in the 2005 Base Realignment and Closure round for additions. According to Army Force Management officials, some of the additional attributes that were considered for the model were rejected because the characteristics were accounted for in other analyses or existing attributes, the data were not readily available, or the attributes did not clearly distinguish between the installations. Additionally, the 14 attributes used in the 2010 version of the model already represented attributes that were important to a BCT.
While the Army considered alternatives, Army Force Management officials said that time constraints and a desire to maintain consistency with the attributes used in prior BCT stationing models dampened enthusiasm for including new attributes or removing attributes from the model. Ultimately, for the 2011 model, Army Force Management decided to include the 14 attributes used in the 2010 model and added one attribute previously used as a screening measure within the model. The Army collected data for these 15 attributes and the preliminary results of the 2011 model were briefed to senior leaders. However, the Army did not make any decisions on force reductions or BCT inactivations in 2011. According to an Army Force Management official, with the public release of the programmatic environmental assessment in January 2013, a decision on BCT reorganization and inactivation appeared imminent and the Army focused on updating and validating the data used in the 2011 model. Army officials viewed the 2013 model as a continuation of the 2011 model with the assumption that the attributes in the 2011 model were still valid. Army officials said that they added one attribute, geographic distribution, to the 2013 model based on guidance from the Secretary of the Army to include it in the Army's decision-making process. While some Army officials raised concerns about the applicability of the buildable acres attribute in a force reduction scenario, an Army Force Management official told us that the attribute was kept in order to preserve potential for growth regeneration in the Army. The official additionally said that there was potential for growth at certain installations resulting from the BCT inactivation and subsequent reorganization.
Further, Army officials said that the Army does not have a formal process for reviewing and updating the attribute definitions in coordination with subject matter experts to determine if they are still the best way to measure a particular attribute, and we found a couple of instances where further review of and updates to the definitions could have been beneficial. For the 2013 model, an Army Force Management official said that the Army did not deliberately engage subject matter experts in discussions regarding the attribute definitions. The Army focused on updating and validating the data collected in 2011 for use in the 2013 model, which Army officials viewed as a continuation of the 2011 model. Further, these officials told us that they generally rely on subject matter experts to suggest proposed changes or updates to the attribute definitions when necessary and noted that some subject matter experts are more assertive in this regard than others. Officials at the Center for Army Analysis and Army Force Management said that suggestions from subject matter experts are addressed if they are compelling. For instance, the attribute definition for the connectivity attribute was updated for the 2010 model that supported the stationing of a heavy BCT based on a suggestion made by the subject matter expert. Subject matter experts for many of the attributes we spoke with said that they were comfortable with the attribute definitions related to their areas of expertise. However, we did find a couple of instances where subject matter experts identified the need to update or review attribute definitions. For example, the subject matter expert for the connectivity attribute said that technological advances in cellular coverage had rendered one of the three sub-factors within the attribute's definition moot, as all installations would receive the same score for that sub-factor.
This subject matter expert said that he informed Army Force Management that the attribute definition needed to be examined for future uses of the model as there was no time to effectively address the issue for the current model. Additionally, the subject matter expert for the family housing attribute noted that it may be a good idea to revisit the definition for the attribute for future uses of the model for a couple of reasons, including the attribute’s data source. As the attribute is currently defined, it utilizes data from a housing study that is conducted in various years for individual installations and, as a result, it is possible that the data may not reflect the current housing situation at some installations. For example, housing data used in the model came from housing studies that were published between 2009 and 2012. Further, we found that the formula for the family housing attribute was calibrated to prior force growth scenarios at installations in that it included the specific addition of a heavy BCT as part of the calculation of the availability of housing and was not updated for the current version of the model. Center for Army Analysis officials told us that they were unclear as to why the attribute definition would include the addition of a BCT for this particular scenario, but said that they did not believe this would affect the relative scores of the installations because the same factor was added for all installations. It is unclear whether reviews and updates to the attribute definitions in these instances would have affected the relative scores of the installations or whether, after review, the Center for Army Analysis and subject matter experts would have determined that changes were indeed necessary. However, these examples indicate that it may be beneficial to have an established process in place to review attribute definitions to determine whether adjustments are needed. 
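The officials' point that a factor added uniformly for all installations does not affect relative scores holds when the formula is linear in that factor: a uniform shift moves every installation's value equally and preserves the ordering. A minimal sketch with hypothetical figures (these are not actual Army housing data, and the surplus-based formula is an assumption for illustration) shows why:

```python
# Illustrative sketch: under a linear formula, adding the same fixed housing
# demand (e.g., one BCT's requirement) to every installation's calculation
# shifts all values equally and leaves the relative ranking unchanged.
# All figures are hypothetical, not actual Army data.

bct_housing_requirement = 1200  # hypothetical housing units demanded by one BCT

# Hypothetical housing surplus (available units minus baseline demand).
surplus = {"Installation A": 3000, "Installation B": 2400, "Installation C": 2800}

def rank(surplus_by_inst):
    """Installations ordered from largest to smallest housing surplus."""
    return sorted(surplus_by_inst, key=surplus_by_inst.get, reverse=True)

without_bct = rank(surplus)
with_bct = rank({inst: s - bct_housing_requirement for inst, s in surplus.items()})

print(without_bct == with_bct)  # the ordering is identical under a uniform shift
```

Note that this invariance depends on the linearity assumption; if the formula were a ratio or otherwise nonlinear, a uniform addition could change the installations' relative scores, which is one reason a review of the definition could still be worthwhile.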
Moreover, the Army has not established a consistent formal process for collecting and validating the data used in the model each time the model is used. In lieu of such a process, to update the data for the model used to support the BCT inactivation decision, Army Force Management instructed the subject matter experts for each of the attributes to update and validate the data that was used in the 2011 version of the model and directed them to, among other things, coordinate at the installation level to ensure accuracy with the installation-level data. An Army Force Management official said that the 2011 data sheets for the attributes were also sent to the installations through the senior maneuver commander at the installation, but the official said that the process for distributing the data sheets may have varied by installation. In communications to both the organizations providing data and the installation commanders, Army Force Management officials emphasized the importance of ensuring that the data used in the model was consistent with that at the installation level. Despite this direction, we found some inconsistencies in the process that was used to update and validate the data for the 2013 model. We found that the subject matter experts responsible for updating and validating the data underlying each attribute in the model that we spoke with were generally confident with the accuracy of the data that they provided, but subject matter experts differed in the extent to which they coordinated with installations to update and validate the data. While subject matter experts for a few of the attributes said that they coordinated directly with the installations to validate data, some subject matter experts said that they did not coordinate with the installations because they had other sources for obtaining data, such as Army databases, program records, or the use of mapping tools that they believed to be sufficient. 
Given how some of these attributes are formulated and the data sources used, these data sources may have been the best sources for providing the most consistent and reliable data across installations. Subject matter experts who utilized Army databases for one of the attributes said that different factors may affect the quality of some of the data in the systems at a given time, but noted that these systems have annual validation processes in place that are meant to keep the data accurate and up-to-date. The subject matter expert for another attribute said that he obtained data from studies and a database that are kept up to date with information, gathered from the installations, that he believes to be fair and objective. He additionally said that he did not coordinate directly with the installations to collect data in prior stationing decisions but did so for this stationing decision in response to instruction from Army Force Management. In doing so, he said that he had to exercise judgment in determining which inputs to accept from installations because installations may not always be objective given their interest in receiving the best rating possible. Additionally, the subject matter expert for three of the attributes said that he expected the installations to contact him in response to the data sheets from 2011 that Army Force Management had sent to the installations if the data needed to be updated. While some installations did respond and the subject matter expert said he updated data for those installations and validated it using other data sources to ensure consistency, he said that he did not review or update the data for these attributes using available data systems for, or coordinate directly with, other installations that did not respond. 
This official noted that the data for one of these attributes is fairly static and, in general, most changes in the data set for the model made by the installations are related to actions that his office would be aware of and thus are not surprising. An Army Force Management official said that some installations could have been proactive and provided data updates in certain instances, but that it was incumbent upon the subject matter experts to update and validate the data in coordination with the installations. Ultimately, this official said that the data’s accuracy and the decision about whether coordination with the installations was needed on the individual attributes were decided by the subject matter experts. Data collection may be more challenging in some instances because the Army does not routinely collect and maintain the data used for certain attributes, such as buildable acres or indirect fire. For example, the subject matter expert for one attribute said that he was not able to update or validate the data for the attribute because the level of analysis that would have had to be conducted in coordination with the installations could not be completed within the timeframes of the request. Thus, he informed Army Force Management that the existing data was the best data available within the timeframes identified to update the data. However, the subject matter expert told us that the data may not be reflective of the current status at the installations because of Army facilities planning policy changes and military construction that may have occurred at the installations since the data was last updated. It is unclear whether data obtained based on additional coordination with the installations in these instances would have been significantly different than the data that was obtained under the current approach. 
While Army Force Management did issue direction related to the data collection effort for the 2013 model in the absence of an existing process, an Army Force Management official said that the model would benefit from a consistent process for updating data so that guidance and expectations are clear for all of the stakeholders involved in the process. A process would also help to ensure that data is reviewed and updated for each use of the model. For example, one subject matter expert who updated data for an attribute for the 2013 model said that the data for this attribute had not been updated since 2004 when it was used to support the Base Realignment and Closure decisions, even though the attribute has been used in more recent stationing decisions. This subject matter expert said that the data does not change much from year to year, although his data collection effort did result in changes to the prior data. An Army Force Management official who oversaw the use of the model in the BCT inactivation decision said that it would be valuable for the Army to have formal processes that allow for time to review and update the attributes within the model when the model is used, including a more deliberative analysis of the attributes in terms of how they are defined and measured and a process for updating data. The official also noted that the lack of a process for periodic review and updates to the model puts subject matter experts in the difficult position of raising issues when the pressure to update data is at its greatest. A couple of the subject matter experts we spoke with noted that potential upcoming changes in their areas of responsibility would likely result in the need to make changes to the attributes in the future. 
Without deliberate processes that allow for time to review attributes and attribute definitions in coordination with subject matter experts and consider necessary updates to the model, potential issues could remain unaddressed throughout each use of the model, and necessary changes to the model might not be made, leading to a reduced relevance of the model to the current environment and in future uses. Further, without a consistent formal process for collecting and validating data each time the model is used that ensures consistency with data at the installation level and allows for time to update the data, the Army risks not having the most current and accurate data for use in the military value analysis model. One of the model’s assumptions is that the attributes and weighting within the model reflect current senior Army leader priorities. However, the Army has not established clear guidance for when review of the key elements of the model or changes to the model require higher-level input or approval. Center for Army Analysis officials said that, in general, removing or adding model attributes should be approved by higher level officials, such as a general officer steering committee, because they provide a broader perspective on the Army’s priorities. For example, in discussions with a subject matter expert about whether an attribute should be removed, an official from the Center for Army Analysis suggested that a general officer steering committee should be convened in order to drop the attribute from the model. Also, for the recent BCT inactivation decision, the Army held a three-star general officer steering committee specifically to review and update the weighting of the attributes within the model, which the Secretary of the Army then approved. 
However, Center for Army Analysis officials said that, while prior versions of the model were approved by various chains of command, there is no specific threshold for holding a general officer steering committee and holding one may not always be feasible if the model is being used under constrained timeframes. Further, Army officials indicated that whether a general officer steering committee is needed is based on the risks and potential impacts related to the decision. For example, Center for Army Analysis officials said that they recommended using a general officer steering committee for the 2013 use of the model because of the sensitivity of the BCT inactivation decision and because the model was being used in a reduction scenario, in contrast to a growth scenario as in previous models. By contrast, the same officials said that a general officer steering committee may not be needed for smaller scale decisions. During our review, an Army Force Management official expressed concern about making any significant changes to the model, such as removing any of the attributes within the model, without a compelling reason. The official explained that because the Army had only recently described the attributes used within the model to Congress in its March 2011 Report to Congress, Army Stationing Decisions, they believed that external stakeholders might perceive any changes to the model so close to a significant stationing decision as being arbitrary or as if the Army was attempting to manipulate the results of the model to influence a desired outcome. Key practices for successful transformations state that the demand for transparency and accountability needs to be accepted in any public sector transformation and stakeholders are concerned not only with the decisions made but also the process used to make those decisions. 
Caution related to making changes to the key elements of the model for such a sensitive decision is understandable, but without establishing transparent and consistent policies and guidance around the model, Army concerns about how changes to the model may appear to external stakeholders are likely to continue. Further, without consistent formal processes through which key aspects of the model are reviewed and updated, and guidance that establishes the circumstances under which changes to the model require input or approval from Army leaders, the Army risks a potential decline in the rigor and consistency of the model over time. There are five attributes related to training within the military value analysis model—airspace, maneuver land, range sustainability, training facilities, and indirect fire. Whether non-contiguous training areas are treated as stand-alone installations or combined with their associated installations depends on the stationing decision. For example, in the model used to support the 2007 Grow the Army initiative stationing decisions, the Army identified Yakima Training Center, located in Washington state, as a stand-alone installation because it was considering stationing a BCT at Yakima. In contrast, in the 2013 version of the model supporting the BCT inactivation decisions, Army training officials told us that Yakima should be considered as part of Joint Base Lewis McChord because Yakima's primary purpose is to support the training of units assigned to the installation. Despite the potential for different treatment of these non-contiguous areas in different stationing decisions, the Army has not established a clear and consistent policy in this regard. Army training officials said that subject matter experts that provide data for the installations do not have a holistic view of the model and their individual views on whether to include non-contiguous areas may differ depending on their area of expertise.
Additionally, communication between Army officials indicated that subject matter experts raised questions about whether certain non-contiguous training areas should be combined with the installation for some of the attributes, such as indirect fire, within the 2013 model. Without a consistent policy, the Army has wavered in how to deal with this issue. For example, there has been a lack of clarity regarding the extent to which Joint Base Lewis McChord and Yakima should be aggregated and for what attributes within the military value analysis model. In a 2010 version of the model used to determine where to station a heavy BCT and a fires brigade, as well as the 2011 interim model, the two locations were aggregated for the airspace attribute. However, a 2012 interim version of the model that was prepared for senior Army leaders did not aggregate Yakima with Joint Base Lewis McChord for this attribute. As a result, Joint Base Lewis McChord received a lower military value score relative to other installations in this 2012 interim version of the model than it had in prior versions of the model. After reviewing the results of the interim model, an Army Force Management official said that the goal was to maintain consistency with how non-contiguous training areas were treated in the past and to treat all installations with non-contiguous training land the same. However, a lack of clarity still remained for the airspace attribute leading up to the BCT inactivation decision and the Center for Army Analysis ran two different versions of the model, one that aggregated airspace data for the two locations and one that excluded airspace data for Yakima. The two versions of the model produced different military value scores and rankings for Joint Base Lewis McChord. Ultimately, the version of the model used to support the BCT inactivation stationing decision did not aggregate the two locations for the airspace attribute. 
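The effect described above, in which aggregating or excluding a non-contiguous training area's data changes an installation's attribute score, can be illustrated with a minimal sketch. The scoring formula and figures below are hypothetical; they are not the actual airspace data or scoring rules used for Joint Base Lewis McChord and Yakima.

```python
# Illustrative sketch: how aggregating a non-contiguous training area's data
# with its parent installation can change that installation's attribute score.
# The formula and all figures are hypothetical, not actual Army data.

def airspace_score(usable_airspace_sq_mi, max_airspace_sq_mi=1000.0):
    """Map raw usable airspace onto a 10-point scale (hypothetical formula)."""
    return min(10.0, 10.0 * usable_airspace_sq_mi / max_airspace_sq_mi)

parent_airspace = 250.0          # parent installation alone (hypothetical)
training_area_airspace = 400.0   # non-contiguous training area (hypothetical)

standalone = airspace_score(parent_airspace)
aggregated = airspace_score(parent_airspace + training_area_airspace)

print(f"excluding training area:   {standalone:.1f}")
print(f"aggregating training area: {aggregated:.1f}")
# The two treatments yield different attribute scores, which in turn can
# shift the installation's overall military value ranking.
```

Because the choice of treatment alone moves the score, a policy set before data collection, rather than attribute-by-attribute judgment calls, is what keeps installations comparable.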
Army training officials said that the Army has long struggled with how to treat non-contiguous training areas in models to best simulate reality. These officials said that clear guidance prior to data collection is needed to ensure that non-contiguous training areas are treated consistently within the military value analysis model. Without establishing a clear policy and communicating it to subject matter experts regarding how non-contiguous training areas should be treated in the model for specific attributes, the Army risks inconsistent consideration of these non-contiguous training areas across installations and attributes, which could influence the results of the model. The Army recognizes that its decision to meet part of its planned active component force reductions through the inactivation of 10 BCTs currently stationed in the United States, coupled with the reorganization of the remaining BCTs in the continental United States, will have strategic, operational, and cost implications. The decision will also alter existing demands on the infrastructure, services, businesses, and other aspects of communities surrounding the affected installations. Thus, the Army carried out a variety of analyses in order to inform its decision, emphasizing key considerations such as supporting the strategic focus on the Pacific, minimizing additional military construction costs, and minimizing immediate readiness impacts. In addition, concerns about the implications for local communities led to the Army obtaining input through open meetings with communities around installations being considered for stationing changes, which could provide useful lessons for obtaining stakeholder support in future stationing decisions.
The Army has indicated it may use such meetings prior to future force structure changes, but without assessing and establishing in guidance when it is appropriate to obtain community input and how such efforts should be conducted, the Army may miss opportunities to obtain input from communities, and the installations themselves and their surrounding communities may lack insight into the Army’s decisions on force structure and stationing. Similarly, the Army’s use of its military value analysis model is consistent with its use of the model for making previous major stationing decisions. However, other actions the Army could take would improve the model’s analytical rigor, credibility, and transparency, and mitigate risk. For instance, without formalizing the military value analysis model in its stationing process guidance or as part of other guidance, including when it should be used and how it should be considered within the stationing process, the transparency of the model’s role in stationing decisions may be limited. Further, without established processes through which key aspects of the model are reviewed and updated, and data collected and validated, as well as guidance related to the level of approval required for changes to the key elements of the model and how non-contiguous training areas should be considered within the model, the Army and external stakeholders may lack certainty as to the model’s analytic rigor and stakeholder buy-in could be limited. Taking action now could help the Army balance the need to ensure a methodologically sound and rigorous process while considering both resources and risk to ensure that stakeholders, including affected communities and installations, can provide input into and understand the basis for its stationing decisions. 
We recommend that the Secretary of the Army take the following five actions to improve the stationing process:

To obtain input from communities and installations affected by significant stationing decisions, we recommend that the Secretary of the Army direct the Deputy Chief of Staff for Operations and Plans to develop and implement guidance related to when community listening sessions or other similar efforts to obtain community input should be conducted and incorporated as part of the Army’s process for making future stationing decisions.

To better ensure the Army military value analysis model’s analytical rigor and credibility, minimize risk, and further enhance the transparency of the process used to make stationing decisions, we recommend that the Secretary of the Army direct the Deputy Chief of Staff for Operations and Plans, in coordination with the Center for Army Analysis, to take the following four actions to formalize the model as part of its stationing process:

Develop and implement guidance that establishes the circumstances under which:
- the model should be used in stationing decisions, and update stationing regulations or related documents accordingly;
- key elements of the model or changes to the model require input or approval from Army leaders, such as through the use of a general officer steering committee; and
- non-contiguous training areas should be considered within the model that are specific to the stationing decision under consideration, and communicate those policies to subject matter experts.

Establish and implement through guidance consistent formal processes through which attributes and attribute definitions will be deliberately reviewed and updated for use in the model, in coordination with subject matter experts, and data will be collected and validated for these attributes.

We provided a draft of this report to the Department of Defense for review and comment. The Department of the Army provided written comments. 
The Army concurred with all five of our recommendations and cited plans to issue guidance through Army Pamphlet 5-10, which is currently being developed to supplement Army Regulation 5-10, to address our recommendations. The Army’s comments are reprinted in their entirety in appendix IV. In addition, the Army provided technical comments, which we have incorporated into the report as appropriate. The Army concurred with our recommendation to develop and implement guidance related to when community listening sessions or other similar efforts to obtain community input should be conducted as part of the Army’s process for making future stationing decisions. The Army stated that it values community input into important decisions that impact soldiers, civilians, families and local communities, and is planning to issue guidance directing that stationing actions that meet a specific threshold will include a staff recommendation for the Secretary of the Army on the use of community meetings as a means to gather public input. We believe that this is a positive step that will position the Army to take advantage of opportunities to obtain community input for relevant future stationing decisions. The Army also concurred with our recommendations related to the military value analysis model. Specifically, the Army concurred with our recommendation to develop and implement guidance that establishes the circumstances under which the model should be used in stationing decisions. It noted that the military value analysis model is an important decision support tool that it has used to inform all significant stationing actions since the Base Realignment and Closure round in 2005. The Army stated that it will issue guidance directing that the model will be used to inform stationing decisions involving the activation, inactivation, or relocation of a brigade size unit or other units that meet a certain threshold. 
The Army additionally concurred with our recommendation to develop and implement guidance that establishes the circumstances under which key elements of the model or changes to the model require input or approval from Army leaders, stating that it would issue guidance directing that significant changes to the military value analysis model will be reviewed by a general officer steering committee, chaired by the Director of Force Management, prior to approval. We believe these actions will enhance the transparency of the model’s role within the stationing process, particularly to the extent that the guidance defines what constitutes significant changes to the model and the process used to make those changes, and will better ensure the model’s rigor and mitigate risk related to key decisions. Further, the Army concurred with our recommendation to develop and implement guidance that establishes the circumstances under which non-contiguous training areas should be considered within the model that are specific to the stationing decision under consideration and to communicate those policies to subject matter experts. The Army stated that the quality and quantity of training resources are important considerations in stationing decisions, although not all training areas are equally accessible, and that when non-contiguous training areas are included in the military value analysis model, the assigned attribute score should reflect all relevant aspects of the training area. In this regard, the Army stated that it will issue guidance directing that the military value analysis model attribute scores for installations with non-contiguous training areas include a statement explaining the manner in which the non-contiguous nature of the training area was given due consideration in the applicable attribute scores. This planned action will provide greater transparency with regard to how non-contiguous training areas are considered for specific attributes. 
Additionally, the Army concurred with our recommendation to establish and implement through guidance consistent formal processes through which attributes and attribute definitions will be deliberately reviewed and updated for use in the model, in coordination with subject matter experts, and data will be collected and validated for these attributes. The Army noted that technology, tactics, and business practices are constantly changing and improving and that, therefore, the military value analysis model attributes should be regularly reviewed and, when appropriate, updated. Along these lines, the Army stated that it will issue guidance directing a regular review and update of the military value analysis model attribute definitions and data, with reviews and updates occurring at least every two years. We believe that the Army’s plan to establish regular reviews will better ensure that the model attribute definitions and data used in the military value analysis model remain relevant and up-to-date for a changing environment while balancing the Army’s concerns about time constraints surrounding certain stationing decisions. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will distribute this report to the Secretaries of Defense and the Army; the Director, Office of Management and Budget; and appropriate congressional committees. The report also will be available at no charge on GAO’s Web site at http://www.gao.gov. Should you or your staffs have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. 
To describe the analyses that the Army conducted to make determinations regarding which brigade combat teams (BCTs) would be inactivated or reorganized and at which U.S. installations, we identified and examined regulations, briefings and other relevant documents outlining the Army’s decision process and interviewed knowledgeable Army officials about the Army’s decision process and the key factors that were considered. Specifically, we reviewed the Army stationing regulation, Army Regulation 5-10 on Stationing—the document that establishes policies, procedures, and responsibilities for Army stationing actions. We also examined documents related to the Army’s environmental analysis, such as environmental regulations and the Army’s programmatic environmental assessment of 21 installations, and discussed this analysis with Army officials. Further, we examined documentation related to the Army’s military value analysis model, such as briefings and reports on the model, and interviewed Army officials to determine how the model was used to inform the recent stationing decision. In addition, we reviewed documents and briefings related to the development and assessment of the stationing options the Army considered as part of its decision process and interviewed Army officials to discuss how the stationing options were analyzed in light of the identified stationing factors and the process used to screen the stationing options prior to the final Army decision. We also met with Army officials to discuss the methodology used to develop the military construction cost estimates that were considered as part of the stationing options and how other costs were considered as part of the Army’s analysis, but did not review these cost estimates or analyses as they did not materially affect our findings, recommendations, or conclusions. 
Further, we examined briefings, orders, and other documentation related to the listening sessions held at Army installations, such as summaries from the meetings and information provided by installations, to obtain information on community input used to inform these decisions and spoke with Army officials regarding the extent to which they considered such information as part of the stationing decision. To evaluate the extent to which the Army has established guidance and processes related to the use of the military value analysis model as a part of its stationing decisions, including the recent BCT decision, we examined the Army’s stationing guidance, stationing report to Congress, and reports and briefings that documented previous uses of the model. We examined and compared prior versions of the model and the current model used for the recent BCT decision to determine if and how the process for conducting the model, including the key elements of the model, such as the attributes used within the model, and the review and approval process for key elements of the model, has changed over time. We reviewed documentation related to key elements of the current model, such as the attributes that were included, weighting of the attributes, and scoring of the attributes for each installation. We interviewed knowledgeable officials at the Center for Army Analysis and Army Force Management to discuss the development of the model, including how attributes were identified, the factors that determine if an attribute would be included in the model, and how weights for the attributes were determined. Additionally, we interviewed knowledgeable officials about how the key aspects of the model are reviewed, updated and if relevant, approved, for each use of the model. We obtained and reviewed the spreadsheet-based military value analysis model to examine the technical components of the tool and how the tool is used to calculate the scores. 
This included some general checks for basic internal consistency and coherence of key elements in the tool as well as a general check of the consistency of the tool with key documents, such as briefings and reports related to the model. We also examined documents related to collecting and validating the data used in the model, and interviewed subject matter experts from various Army organizations who provided the data about their role in developing the attributes’ definitions and the methods and processes used to collect and validate the data, but we did not validate the data itself. During the course of our review, we interviewed officials from the following Army organizations:

- Deputy Chief of Staff for Manpower and Personnel G-1
- Deputy Chief of Staff for Operations G-3/5/7 (Force Management, Training Support)
- Deputy Chief of Staff for Logistics G-4 (Strategic Mobility Division and Surface Deployment and Distribution Command Transportation Engineering Agency)
- Chief Information Officer G-6 (Installation Infrastructure Division)
- Deputy Chief of Staff for Financial Management G-8 (Program Analysis and Evaluation)
- Office of the Assistant Secretary of the Army for Installations, Energy, and Environment
- Office of the Assistant Chief of Staff for Installation Management
- U.S. Army Installation Management Command
- U.S. Army Environmental Command
- Center for Army Analysis
- Office of the Surgeon General/U.S. Army Medical Command
- U.S. Army Aeronautical Services Agency

We conducted this performance audit from April 2013 through December 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
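The scoring step performed by the spreadsheet-based military value analysis model discussed above is, at its core, a weighted additive value calculation: each installation receives a score for each attribute, and the weighted scores are combined into a single military value. The sketch below illustrates only that arithmetic; the attribute names, weights, and scores are hypothetical examples, not the Army's actual 16 attributes or data.

```python
# Illustrative weighted additive value model, in the spirit of the
# spreadsheet-based military value analysis model. All attribute
# names, weights, and scores are hypothetical, not Army data.

def military_value(scores, weights):
    """Combine per-attribute scores (0-100) into one weighted value.

    Weights are normalized to sum to 1 before being applied, so the
    result stays on the same 0-100 scale as the attribute scores.
    """
    total_weight = sum(weights[a] for a in scores)
    return sum(scores[a] * weights[a] / total_weight for a in scores)

# Hypothetical weights (in practice, set with subject matter experts).
weights = {"maneuver_land": 0.30, "ranges": 0.25,
           "deployment_proximity": 0.25, "facilities": 0.20}

# Hypothetical per-installation attribute scores.
installations = {
    "Installation A": {"maneuver_land": 80, "ranges": 70,
                       "deployment_proximity": 90, "facilities": 60},
    "Installation B": {"maneuver_land": 65, "ranges": 85,
                       "deployment_proximity": 70, "facilities": 75},
}

# Rank installations by their computed military value, highest first.
ranked = sorted(installations,
                key=lambda name: military_value(installations[name], weights),
                reverse=True)
```

In the actual model, both the set of attributes and their weights may change depending on the type of stationing decision, which is why the report emphasizes consistent processes for reviewing and approving those elements; the sketch shows only how such inputs combine into a score.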
On June 25, 2013, the Army announced that it will be inactivating one brigade combat team (BCT) from each of 10 different U.S. installations. An additional 5 installations were considered as part of the Army’s decision process, but these installations will not have a BCT inactivated at this time. Table 2 shows the BCT inactivations by installation as well as changes to the number of BCTs and projected population as a result of these inactivations and other force structure changes. Table 3 identifies the 16 attributes used in the Army’s military value analysis model that supported the Army’s brigade combat team (BCT) inactivation decision and the definitions of each attribute. In addition to the contact named above, Mark J. Wielgoszynski, Assistant Director; Bonita P. Anderson; David Dornisch; Kasea Hamar; Michael Shaughnessy; Erik Wilkins-McKee; and Weifei Zheng made key contributions to this report.

Defense Headquarters: DOD Needs to Reassess Options for Permanent Location of U.S. Africa Command. GAO-13-646. Washington, D.C.: September 9, 2013.

Military Bases: DOD Has Processes to Comply with Statutory Requirements for Closing or Realigning Installations. GAO-13-645. Washington, D.C.: June 27, 2013.

Defense Infrastructure: Communities Need Additional Guidance and Information to Improve Their Ability to Adjust to DOD Installation Closure or Growth. GAO-13-436. Washington, D.C.: May 14, 2013.

Defense Infrastructure: Improved Guidance Needed for Estimating Alternatively Financed Project Liabilities. GAO-13-337. Washington, D.C.: April 18, 2013.

Military Base Realignments and Closures: Updated Costs and Savings Estimates from BRAC 2005. GAO-12-709R. Washington, D.C.: June 29, 2012.

Defense Infrastructure: Opportunities Exist to Improve the Navy’s Basing Decision Process and DOD Oversight. GAO-10-482. Washington, D.C.: May 10, 2010.

Military Base Realignments and Closures: Observations Related to the 2005 Round. GAO-07-1203R. Washington, D.C.: September 6, 2007.

Military Bases: Observations on DOD’s 2005 Base Realignment and Closure Selection Process and Recommendations. GAO-05-905. Washington, D.C.: July 18, 2005.

Military Bases: Analysis of DOD’s 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005.
As part of its plan to reduce its active duty force by 80,000 personnel by 2017, the Army will be inactivating 10 BCTs currently stationed in the United States and reorganizing the remaining BCTs. The Army conducted analyses of different stationing options, which included the use of its military value analysis model to compare installations based on their ability to support BCTs. GAO was asked to review the decision making process the Army used for its BCT stationing decision, including its military value analysis model. This report (1) describes the analyses the Army conducted to make its BCT decision and (2) evaluates the extent to which the Army has established guidance and processes related to the use of the military value analysis model as a part of its stationing decisions. GAO reviewed the Army's stationing guidance, current and previous versions of the military value analysis model, documents on the BCT decision, and spoke with cognizant officials. To make decisions regarding the installations at which to inactivate 10 Brigade Combat Teams (BCTs) and reorganize others, the Army conducted quantitative and qualitative analyses and obtained community input. Specifically, in 2012 the Army established a BCT Reorganization Operational Planning Team to assess factors such as strategic considerations, military construction costs, and environmental and socioeconomic impacts, among others, and develop stationing options for decision makers. The Army also considered other factors, or attributes--such as training ranges, geographic distribution, and proximity to embarkation points--in its military value analysis model. In addition, the Army conducted community input sessions at installations with 5,000 or more military and civilian personnel, including the 15 under consideration for inactivation of a BCT. Several Army officials said that the sessions were valuable and could serve as a tool for future stationing decisions. 
However, the Army's stationing regulation does not include guidance on obtaining community input beyond what may be required in the context of environmental analysis. An Army official said that he is developing proposed guidelines for when such input should be considered, but was uncertain how they will be incorporated into formal guidance. Effective stakeholder involvement includes actively soliciting ongoing stakeholder input and fostering communication between stakeholders and decision makers. Incorporating this type of communication with external stakeholders into its stationing guidance for future decisions could lead to potentially greater buy-in from local communities for Army stationing decisions. The Army expects to continue using its military value analysis model for major stationing decisions and has taken steps to validate the model, but has not established guidance and consistent formal processes related to its use, including when the model should be used or how it should be reviewed, updated, and approved. Standards for internal control state that control activities, such as established and consistent processes or policies, can help to ensure actions to mitigate risks are carried out. Army officials said that the model has generally been used for large-impact stationing decisions and may not be appropriate for minor decisions. However, the Army's stationing regulation does not discuss the model or provide guidance on the circumstances when the model should be used. Also, the Army has not established consistent processes for reviewing and updating attributes and attribute definitions within the model or for collecting and validating data, nor has it established guidance related to the level of input or approval required for changes to the model or how geographically distant training areas should be treated in the model. 
For instance, subject matter experts noted that the definitions of a couple of attributes should be updated or reviewed, but GAO found that there is no consistent process in place for addressing such issues. Army officials told GAO that the attributes and weighting of the attributes within the model may also change depending on the type of stationing decision, but there is no guidance on when revisions should be approved by Army leaders. Without consistent formal processes for updating and reviewing the model and data used, and guidance related to the level of approval required for changes to the model, the Army risks potential decline in the rigor and consistency of the model over time. GAO recommends the Army develop and implement guidance related to when community input should be obtained for stationing decisions, and related to the use of its military value analysis model, such as when it should be used, the level of approval required for changes to the model, and how certain training areas should be considered, as well as processes for updating and reviewing the model. The Army concurred with GAO’s recommendations and explained how they will be implemented.
To carry out its responsibility for the custody and care of federal offenders, BOP currently houses inmates across six geographic regions in 120 long-term federal institutions. The Central Office and regional offices provide administrative oversight and support to institutions, among other things. The management officials located at each institution, including wardens and associate wardens, provide overall direction and implement policies. Male long-term institutions include four security-level designations––minimum, low, medium, and high––and female long-term institutions include three security designations––minimum, low, and high. The security-level designation of a facility depends on the level of security and staff supervision that the facility is able to provide, such as the presence of security towers; perimeter barriers; the type of inmate housing, including dormitory, cubicle, or cell-type housing; and inmate-to-staff ratio. Additionally, BOP designates some of its institutions as administrative institutions, which specifically serve inmates awaiting trial, or those with intensive medical or mental health conditions, regardless of the level of supervision these inmates require. As of June 2014, BOP owned and operated seven stand-alone minimum-security institutions, 30 low-security institutions, 47 medium-security institutions, 16 high-security institutions, 1 administrative maximum (ADX) institution that houses inmates requiring the highest levels of security, and 19 administrative institutions. Many of these facilities are colocated on BOP-operated complexes that also contain minimum-security camps, which are nonsecure facilities that generally house nonviolent, low-risk offenders and are not included in this count. For example, USP Yazoo City is located on the Yazoo City Complex, which also includes a medium-security FCI, a low-security FCI, and a minimum-security camp. 
BOP calculates the number of inmates a given institution is built to safely and securely house and defines this as its rated capacity. BOP establishes a rated capacity for each of the institutions that it owns and operates. In determining rated capacity, BOP considers occupancy and space requirements. According to BOP, rated capacity is the basis for measuring crowding and is essential to both managing the inmate population and BOP’s annual congressional budget justifications for resources. After an inmate receives his or her sentence, BOP initially designates that person to a particular institution based on (1) the level of security and supervision the inmate requires; (2) the level of security and staff supervision the institution is able to provide; (3) the inmate’s program needs, such as residential drug treatment or intensive medical care; (4) where the inmate resides at the time of sentencing; (5) the level of crowding in an institution; and (6) any additional security measures to ensure the protection of victims, witnesses, and the public. BOP communicates its schedule estimates related to activating new institutions to Congress through the annual budget process. BOP receives appropriated funds through two accounts—Buildings and Facilities (B&F) and Salaries and Expenses (S&E)—which BOP has divided into subaccounts, called decision units. The B&F account funds the construction of new institutions and the maintenance of existing institutions. Specifically, the B&F account has two subaccounts: (1) new construction and (2) modernization and repair. The B&F account includes no-year appropriations, which are available until expended. BOP’s B&F account’s modernization and repair subaccount funds are used to rehabilitate, modernize, and renovate buildings and associated systems, as well as repair or replace utilities or other critical infrastructure at BOP institutions. 
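Because rated capacity is, as described above, the basis for measuring crowding, crowding can be expressed as the percentage by which an institution's population exceeds its rated capacity. A minimal sketch of that calculation follows; the population and capacity figures are hypothetical, not actual BOP data.

```python
# Illustrative crowding metric based on rated capacity, the number of
# inmates an institution is built to safely and securely house.
# Figures below are hypothetical, not BOP data.

def crowding_percent(population, rated_capacity):
    """Percentage over (positive) or under (negative) rated capacity."""
    return round(100 * (population - rated_capacity) / rated_capacity)

# A hypothetical institution rated for 1,000 inmates but housing 1,300
# would be 30 percent over its rated capacity.
over = crowding_percent(1300, 1000)
```

A new institution reduces system-wide crowding by adding rated capacity to the denominator of this kind of measure, which is why activation delays matter for BOP's crowding goals.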
BOP’s B&F budget justification includes accompanying budget exhibits, which, among other things, provide estimated timelines for when new institutions will provide rated capacity. Broadly, the S&E account covers costs for staffing; inmate medical care, food, and programming; and utilities at existing institutions. Specifically, the S&E account has four subaccounts: (1) inmate care and programs, (2) institution security and administration, (3) contract confinement, and (4) management and administration. Generally, the S&E account includes 1-year appropriations, which are available for obligation only in the fiscal year for which they were appropriated. BOP receives congressionally directed funding for activation—the overall process by which BOP staffs and equips institutions and then populates them with inmates—through its S&E account. BOP officials stated that, generally, BOP does not start the activation process until it has received congressionally directed activation funding. Upon receipt of congressionally directed activation funds, BOP begins completing what, for the purposes of this report, we consider “preactivation” steps, which include completing renovations; hiring staff, such as wardens and executive staff to manage the institution; and ordering supplies and equipment. Preactivation also includes meeting with community members, recruiting and training new staff, and furnishing the new institution. When these steps are completed, the institution begins receiving inmates, and when this occurs, for the purposes of this report, we consider that institution to be partially activated. Once the institution houses inmates at its rated capacity, or the number of inmates BOP determines the institution can safely and securely house, we consider that institution to be fully activated. 
All of the institutions in our review are currently in the preactivation or partial activation phases of the activation process because they do not yet house the number of inmates they were designed to hold. See figure 1 for a description of the life cycle of those institutions. BOP’s Design and Construction Branch is responsible for, among other things, overseeing the construction of new institutions, and when construction of an institution is almost complete, it transfers responsibility to the regional office or the local institution, thereby formally transitioning from construction to the beginning of the activation process. Once this responsibility has been transferred, regional or local officials work with the construction contractor to ensure that all items covered by the construction contractor’s warranty, such as cooling and heating systems, are working properly prior to the warranty’s expiration. They also work to conduct some alterations, installations, and repairs, such as placing additional razor wire and upgrading security features, that must be completed before the institution can securely house inmates. BOP made similar repairs to four of the six institutions in our review. When the institution is ready to accept inmates, BOP issues an “Activation Memo” that specifies the criteria that inmates must meet in order to be housed in the new institution. Such criteria are based on the security level of the institution and the medical and mental health care services the institution was designed to provide. The criteria also generally include inmate characteristics that will allow for smooth transitions as the institution prepares for activation. For example, the Activation Memo may state that inmates should have histories of good conduct, no prior gang affiliation, and be generally healthy. 
Institutions that have inmates that meet those criteria can request that those inmates be transferred to the activating institution by submitting a formal request to the Designation and Sentence Computation Center, which officially approves the transfer. The Designation and Sentence Computation Center is also responsible for classifying inmate security levels and designating those inmates to specific institutions. From fiscal years 2005 through 2007, the President’s annual budget request included a moratorium on new institution construction in an effort to have BOP take greater advantage of public and private sector bed space operated under contract with BOP, to meet the need for greater capacity. As a result of that moratorium, BOP officials reported that they were reluctant to proceed with the construction of several institutions, as we previously reported. BOP has six federal institutions across the country currently in different phases of the activation process, as we discuss later in this report. See figure 2 for a description of each of those institutions. BOP is behind schedule in fully activating, or reaching rated capacity for, all six institutions in the activation process. This is due, in part, to challenges posed by the locations of the activating institutions. However, although the institutions’ locations posed challenges related to staffing, BOP is not effectively monitoring staffing challenges at individual institutions to ensure that they are staffed and, in turn, fully activated, within estimated time frames. Further, BOP does not have a policy in place to guide the activation process, or an associated schedule that meets best practices, which limits BOP’s ability to accurately assess activation progress and ensure that the new institutions effectively reduce crowding as intended. 
All six institutions in the activation process have had schedule slippages due to challenges caused by their locations and delays in receiving congressionally directed activation funding. According to BOP officials, delays in receiving congressionally directed activation funding are outside BOP’s control and can exacerbate existing challenges with staffing or populating an institution with inmates. This type of delay generally occurs because of aspects of the appropriations process, including continuing resolutions, that have resulted in BOP receiving its final funding level and associated congressional direction late in the fiscal year. In addition, in some fiscal years, BOP does not receive congressionally directed activation funding for specific institutions. Generally, either the annual appropriations act or the conference report accompanying BOP’s annual appropriations act directs BOP to use S&E appropriations for activation activities at particular institutions. BOP generally follows directives contained in the conference report language even if not incorporated into the appropriations act and therefore, in practice, does not activate institutions without congressional direction. Figure 3 illustrates the fiscal year in which BOP initially expected each institution to be fully activated, the subsequent revisions to that estimate, and the reasons for delay. Appendix I provides additional details on how delays in congressionally directed activation funding have resulted in schedule slippages for each of the institutions in the activation process. On our site visits to these institutions, we found that the locations of these institutions posed challenges related to staffing institutions and populating them with inmates within schedule estimates. 
For example, officials from FCI Aliceville stated that the institution’s location made it challenging to hire staff up to authorized staffing levels during activation because of the low locality pay in Alabama compared with pay in other states. According to officials, it has been difficult to find local applicants who could pass BOP’s preemployment background check because prospective hires often had disqualifying levels of debt, even though they met the qualifications based on skill. Similarly, officials from the Southeastern Regional Office stated that staffing two of the institutions within its region—FCI Aliceville and USP Yazoo City—has been challenging because of their rural locations. Further, officials from USP Yazoo City told us that it was particularly challenging to hire medical staff because of the institution’s location and low pay in that area. Moreover, we found on our site visits that the locations of some of these institutions also posed challenges related to populating them with inmates. In particular, these institutions generally accept inmates who are healthy, because the institutions are not located close to hospitals that can provide care for inmates with chronic or serious conditions requiring frequent visits, such as those with liver disease. For example, officials from FCI Berlin told us that they had difficulty populating the institution with inmates because it could provide care only for generally healthy inmates, given its distance from major hospitals. In fact, FCI Berlin is more than 2 hours away from the closest large hospital that can provide care for inmates with serious health conditions.

As a result, according to officials from FCI Berlin, they originally planned to transfer to FCI Berlin only those inmates who are in overall stable health—those with a Care Level 1 designation—as doing so would minimize the need to regularly transport inmates to faraway hospitals for necessary medical care. However, there were not enough Care Level 1 inmates who also met the other criteria FCI Berlin specified, such as inmate security level, so the institution expanded its health care designation to also accept stable Care Level 2 inmates, who need more medical care than Care Level 1 inmates. BOP officials acknowledged that distance from major hospitals is a primary factor in determining the care level for institutions, such as FCI Berlin. BOP’s annual congressional budget justification is used to convey BOP’s funding and housing needs, and these justifications would allow DOJ and BOP officials the opportunity to convey to Congress any potential challenges BOP may be facing or anticipating with respect to selecting certain sites for new institutions. According to BOP officials, when BOP is congressionally directed to investigate a particular location, BOP generally considers this as a direction to focus its efforts on constructing an institution in that specific location. For example, the conference report, H.R. Conf. Rep. No. 107-278, at 83 (2001), accompanying the Departments of Commerce, Justice, and State, the Judiciary, and Related Agencies Appropriations Act, 2002, Pub. L. No. 107-77, 115 Stat. 748 (Nov. 28, 2001), stated that “[t]he conference agreement provides that of the $650,047,000 provided for increases as outlined below, $5,000,000 shall be for partial site and planning of the USP Northeast/Mid-Atlantic facility, to be located in Berlin, New Hampshire.” 
However, officials from both BOP's Central Office and activating institutions acknowledged that the locations of these newly constructed institutions make activation more difficult. Standards for Internal Control in the Federal Government states that management should ensure that there are adequate means of communicating with, and obtaining information from, external stakeholders, such as Congress, that may have a significant impact on the agency achieving its goals. Because delayed activations limit BOP's ability to reduce crowding as BOP intended, DOJ and BOP would be better positioned for future activations, and could more effectively manage activation costs and timelines, by using BOP's annual congressional budget justification to communicate to Congress the factors that might delay future activations, such as the challenges with hiring staff and placing inmates that are associated with the locations of new institutions. In turn, congressional decision makers could be better positioned to take such factors into account when directing BOP where to build new institutions. BOP's Central Office reviews aggregated data of staffing system-wide, but it does not monitor or analyze staffing data for individual institutions, such as those located on a complex, or track how long it takes individual activating institutions to hire staff. When we analyzed OPM's EHRI data on BOP staffing, we found that none of the institutions in the activation process has a full complement of staff. Although FCI Aliceville, FCI Berlin, and FCI Mendota are partially activated, they are only 63 percent, 67 percent, and 73 percent staffed, respectively. Because these institutions are not fully staffed, they cannot be fully activated; a full complement of staff is needed to effectively manage additional inmates. 
Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1) calls for agencies to identify, capture, and distribute operational data to determine whether an agency is meeting its goals and effectively using resources. Because BOP's Central Office does not review staffing data at the institution level, BOP's human resources officials from the Central Office could not tell us whether the specific institutions in our review, particularly those located on complexes, faced obstacles recruiting and retaining staff. Likewise, BOP officials were not positioned to discuss the impact of potential staffing challenges on activation. Such staffing challenges could affect how quickly BOP can reduce crowding across the system—one of BOP's key strategic goals. We found that even where institutions hired staff at higher rates in each consecutive year, authorized positions remained unfilled. In addition, for example, not all of the employees FCI Mendota hired from fiscal years 2010 through 2013 remained on board to work at the institution. Additionally, our analysis of OPM data indicates that each institution undergoing activation has had staff sever employment with BOP or transfer to other institutions, compounding existing staffing issues. For instance, each year more staff have separated from employment at FCI Mendota than in the prior year, for a total of 40 employees—about 21 percent of its total hires—from fiscal years 2010 through 2013. Additionally, according to data provided by BOP from its personnel database for fiscal year 2014—through July—an additional 11 employees separated from FCI Mendota. See appendix II for our analysis of BOP's human resources data. (OPM defines the minimum rate as the minimum wage an employee may be paid based on the employee's scheduled rate of pay.) Without reviewing institution-level staffing data, the Central Office is not positioned to assess activating institutions' progress toward reaching authorized staffing levels or to develop effective, tailored strategies to mitigate those challenges. 
BOP institutions in the activation process rely on the expertise of staff and two templates that the Central Office developed to guide the activation process: (1) the activation handbook, which identifies roles and responsibilities for BOP officials during the activation process, and (2) the staffing timeline, which provides a general sequence of hiring events that BOP staff are to follow prior to the institution receiving inmates. Activating institutions complete these templates, and may modify the documents at their discretion; thus, they differ in style, scope, and substance depending on the institution completing them. However, the two templates do not constitute a documented, bureau-wide policy related to activation, nor do they include an activation schedule that incorporates best practices, such as accounting for factors that might delay activation, including delays in receiving congressionally directed activation funding or challenges with staffing institutions. Activating institutions in our review did not have a detailed policy or schedule to guide the activation process. Instead, they relied heavily on relationships with individuals who had experience with activations. For example, officials at half of the institutions told us that they relied on the experience of officials who had previously been involved with activations to help guide the process. Further, on our site visit to USP Yazoo City, officials told us that the primary challenge related to financial management during the activation process was ensuring that the managers of each of the departments in the institution, such as business administration or human resources, knew what was required for activation. The business administrator sought out assistance from the regional office and visited another institution that was activated prior to USP Yazoo City to review records and itemized lists of what officials had determined they needed for that activation. 
Further, BOP relied on the experience of a now-retired official serving as the Activation Coordinator for the institutions in our review. BOP relied on that official to initiate the activation process by ordering necessary supplies and equipment. Officials we interviewed from the regional offices noted that without the Activation Coordinator's guidance, they would not have known what supplies and equipment were needed for activation. Standards for Internal Control in the Federal Government states that policies and procedures are needed to enforce management's directives, and that significant events need to be documented in policies and procedures. According to BOP documents and officials, BOP does not plan to construct, and subsequently activate, new institutions for the foreseeable future. In addition, officials contend that BOP has vast institutional knowledge to guide the activation process, rendering a formal written policy unnecessary. Nevertheless, BOP will likely face difficulty during future activations since about 32 percent of BOP staff will be retirement eligible within 5 fiscal years—a proportion that is similar to the retirement rate for the federal workforce as a whole. Such staffing attrition underscores the need to have documented policies in place to ensure that future staff can conduct and complete activations effectively and within cost and schedule commitments. According to our analysis of the details included in each institution's version of the activation handbook template and the staffing timeline template, we determined that BOP's schedules do not meet all 10 best practices required for a schedule to be reliable. Specifically, we assessed each of the six institutions' versions of the activation handbook and staffing timeline templates against four characteristics of a reliable schedule associated with the 10 best practices. We found that, collectively, the six BOP institutions minimally met two and did not meet two of these four characteristics. 
From fiscal year 2010 through March 2014, BOP obligated about $25 million from its S&E account to maintain partially activated institutions while it waited for congressionally directed activation funding through the S&E account. The amount of congressionally directed activation funding included in BOP’s annual spend plans reflects the amount the bureau planned to spend activating each of the individual institutions in our review from fiscal year 2010 through March 2014. However, we found that BOP obligated more than its planned amount for two of the three partially activated institutions (see table 2). Specifically, BOP obligated about $7.5 million and $17.7 million more than it planned to activate FCI Berlin and FCI Mendota, respectively, for a collective total of about $25.2 million more than it estimated in its spend plan. Additionally, we found that BOP’s obligations for activating those three institutions are similar to what BOP has spent in the past when activating new institutions. Specifically, when adjusting for inflation, BOP spent, on average, about $41,100 per bed for those institutions, and spent, on average, about $40,600 per bed on the 21 institutions activated from 1994 through 2000. To finance the additional $25 million of activation-related activities, BOP officials told us they used S&E funds that are generally used to fund the operation of all BOP institutions. Specifically, BOP officials told us that BOP spends between $1 million and $4 million each year to maintain a newly constructed or acquired institution while waiting for the congressionally directed activation funding needed to complete the activation process. This additional spending supports, among other things, salaries of a core group of staff and the cost of utilities to maintain institutions and keep them secure prior to receiving congressionally directed activation funding. 
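As a quick cross-check of the overrun arithmetic above, the per-institution differences reported for FCI Berlin and FCI Mendota can be summed directly. This is a minimal sketch; the inputs are the report's rounded per-institution figures, so the total is approximate.

```python
# Cross-check: per-institution activation overruns (obligations minus
# spend-plan amounts), in millions of dollars, as reported above.
overruns_millions = {
    "FCI Berlin": 7.5,    # obligated ~$7.5 million more than planned
    "FCI Mendota": 17.7,  # obligated ~$17.7 million more than planned
}
total = sum(overruns_millions.values())
print(f"Collective overrun: about ${total:.1f} million")
```

The sum reproduces the roughly $25.2 million figure cited in the spend-plan comparison.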
According to BOP officials, when BOP is waiting for congressionally directed activation funding for a given institution, BOP hires a facility manager to ensure that the building is operating appropriately and covers costs associated with utilities, such as electricity and water. For example, to maintain Administrative USP Thomson while waiting for congressionally directed activation funding, BOP obligated $1.3 million in fiscal year 2013 and, as of March 2014, had obligated about $440,700 more. Those costs included relocation expenses, uniforms, and training for the maintenance staff; utilities; and mechanical services for the institution, among other things. In addition, BOP officials told us that BOP spent about $150,000 over 2 fiscal years to fund a state of Illinois employee to maintain the institution while waiting for BOP staff to be hired. Further, BOP spent about $10 million in utility and personnel costs to maintain FCI Berlin for an additional 2 fiscal years while BOP waited for congressionally directed activation funding to begin activation. According to BOP officials, that additional funding covered the salary expenses for 16 employees that the agency hired to maintain the institution. BOP officials told us they had planned on paying for these salaries with the requested activation funding, but because the activation funding was delayed, BOP used other available funds from its S&E account to keep the staff in place. In addition to maintaining empty institutions while waiting for congressionally directed activation funding, BOP has also obligated funds from its B&F account to provide a range of modifications to new institutions after construction is complete. According to BOP data, BOP spent a total of about $1.2 million on alterations, installations, and repairs at four of the six institutions in our review, with costs per institution ranging from about $130,000 to $458,000. 
For example, FCI Berlin spent $150,000 repairing roofs that could not handle the heavy amounts of snowfall and constructing overhangs at exterior doorways to provide protection from the weather. These additional costs are generally related to maintaining empty institutions because of a lag in receiving activation funds. As discussed earlier in this report, communicating the potential challenges that certain locations may pose for activation could help BOP minimize the amount of renovation or staff relocation expenses it ultimately spends while awaiting congressionally directed activation funding. BOP plans to activate and fully populate all six institutions with inmates by fiscal year 2016, which would add 7,852 beds to BOP's overall capacity. If BOP achieves this goal, these institutions will reduce the overall crowding rate from almost 42 percent to 34 percent. According to our estimates using BOP's projections of the future inmate population, we found that crowding reductions will vary for inmates depending on their gender and security level. To determine how these institutions will affect system-wide crowding, we compared crowding rates with and without the addition of new beds that BOP anticipates the six institutions will provide by fiscal year 2016. On the basis of that comparison, we estimate that BOP will have spent almost $200 million per percentage point decrease in the overall crowding rate—or about $1.6 billion since fiscal year 2003—on constructing, acquiring, and activating the institutions in our review. For specific details on these institutions' impact on crowding, see table 3. Crowding by gender. The new institutions housing male inmates will add a total of 6,316 beds to total capacity once they are fully activated. The addition of these beds will reduce crowding among male institutions by about 7 percentage points, lowering the rate from about 41 percent to about 34 percent, assuming that the institutions will reach rated capacity by fiscal year 2016 as BOP intends. 
Similarly, the only new institution to add beds for female institutions, FCI Aliceville, will add a total of 1,536 beds to total capacity for institutions housing female inmates. The addition of these beds will reduce crowding among female institutions by about 21 percentage points, falling from about 45 percent to about 24 percent. Crowding by security level. The new institutions housing medium-security inmates will add a total of 3,456 beds, while the new institutions housing high-security inmates will add a total of 2,860 beds, once they are fully activated. Additionally, the new institution housing low-security female inmates will add 1,536 beds. The addition of medium-security beds at FCI Berlin, FCI Hazelton, and FCI Mendota will reduce crowding at that security level by about 13 percentage points, lowering the crowding rate from about 55 percent to about 42 percent, assuming that the new institutions will reach rated capacity by fiscal year 2016 as BOP intends. Similarly, the addition of high-security beds at Administrative USP Thomson and USP Yazoo City will reduce crowding at that security level by about 26 percentage points, lowering the crowding rate from about 57 percent to 31 percent. Finally, once fully activated, FCI Aliceville will reduce crowding among low-security female institutions by about 56 percentage points, or a decrease from about 84 percent to about 28 percent. Because FCI Aliceville will add 1,536 beds to an overall capacity of 5,048 beds among low-security institutions housing female inmates, these new beds will have a large effect on crowding at that security level. DOJ purchased the Thomson Correctional Center in an effort to reduce high-security crowding, and if the institution reaches rated capacity by fiscal year 2016, we estimate that the institution will reduce high-security crowding by about 16 percentage points. 
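The crowding arithmetic used throughout this section can be sketched with a simple helper: the crowding rate is the percentage by which the inmate population exceeds rated capacity, so adding beds lowers the rate even with an unchanged population. The baseline capacity and population below are illustrative assumptions chosen to be roughly consistent with the report's system-wide figures, not BOP data; only the 7,852 added beds and the approximately $1.6 billion total come from the report.

```python
def crowding_rate(population: float, rated_capacity: float) -> float:
    """Percentage by which the population exceeds rated capacity."""
    return (population / rated_capacity - 1) * 100

# Illustrative system-wide baseline (assumed, not BOP data), picked to
# approximate the report's ~42 percent crowding rate before new beds.
population = 186_000
capacity = 131_000
added_beds = 7_852  # beds from the six new institutions (per the report)

before = crowding_rate(population, capacity)
after = crowding_rate(population, capacity + added_beds)
print(f"crowding: about {before:.0f}% -> about {after:.0f}%")

# Rough cost per percentage point of crowding reduction, using the
# report's ~$1.6 billion construction/acquisition/activation total.
per_point = 1.6e9 / (before - after)
print(f"about ${per_point / 1e6:.0f} million per percentage point")
```

Under these assumed baselines the rate falls from about 42 percent to about 34 percent, and the cost works out to just under $200 million per percentage point, consistent with the report's estimate.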
However, purchasing Thomson resulted in unplanned costs at the time of the purchase and will increase costs in the future. DOJ stated in its budget requests that acquiring the Thomson Correctional Center in Illinois would address high-security crowding and support BOP's mission, as well as DOJ's strategic goal to ensure the fair and efficient administration of justice. The state of Illinois constructed the institution in 2001 to address inmate crowding in state-operated institutions, but never populated the high-security portion of the institution with inmates because of the institution's high operating costs, among other things. In December 2009, the President issued a memorandum to the Attorney General and Secretary of Defense directing DOJ and the Department of Defense to purchase and use the Thomson Correctional Center. The memorandum stated that the acquisition of the institution would facilitate the closure of detention facilities at Guantanamo Bay Naval Base and reduce BOP's shortage of high-security, maximum-custody bed space. However, legislation limited or prohibited the use of federal funds to transfer Guantanamo Bay detainees into the United States, and DOJ officials stated that the department was committed to fully adhering to those prohibitions. DOJ pursued the purchase of the institution to provide beds for high-security inmates who, according to DOJ officials, were not previously detained at Guantanamo Bay. DOJ requested funding from Congress to purchase and activate the Thomson Correctional Center in its annual budget requests for fiscal years 2011 and 2012. Congress did not provide funding for either request. While DOJ continued to wait for specific funding for the acquisition of the Thomson Correctional Center, DOJ and the state of Illinois assessed the purchase of the institution. During this time, DOJ obtained two appraisals indicating an average value of $165 million, and the state of Illinois conducted three appraisals. 
On the basis of the average value from the two appraisals conducted on behalf of the federal government, DOJ and the state of Illinois agreed on a price of $165 million (the compensation for the taking of the property by the federal government as described below). Congress did not provide specific funding to finance the purchase of the Thomson Correctional Center. Therefore, in July 2012, DOJ notified the Senate and House Committees on Appropriations of its intention to allocate $165 million in existing funding to purchase the institution. DOJ and BOP allocated funds from three separate funding sources. Specifically, BOP reprogrammed $5 million in its B&F appropriation from the modernization and repair subaccount into the B&F new construction subaccount, and transferred $9 million from BOP's S&E account to BOP's B&F account. In addition, DOJ transferred $151 million from DOJ's Assets Forfeiture Fund Super Surplus to BOP's B&F account. According to DOJ and BOP budget officials, the absence of specific funding for purchasing the Thomson Correctional Center, and restrictions on transferring and reprogramming applicable to BOP's annual appropriations, affected DOJ's decisions to use these multiple funding streams. In September 2012, the Director of BOP filed in the United States District Court for the Northern District of Illinois a "declaration of taking" to acquire the Thomson Correctional Center and deposited the $165 million with the court as compensation. In October 2012, DOJ acquired the Thomson Correctional Center from the state of Illinois and subsequently renamed the institution Administrative USP Thomson. Once it reaches rated capacity, according to our analysis of inmate population data, Administrative USP Thomson will help address crowding at the high-security level by about 16 percentage points for male inmates, which is similar to the decrease in crowding rates that DOJ asserted in the business cases submitted to Congress as part of the annual budget process. 
The expenses associated with the initial purchase shifted funds away from BOP repairs and program services, and operating and maintaining the institution will add to BOP's costs in the future. BOP officials acknowledged that the purchase of Administrative USP Thomson posed costs at the time of the purchase and will pose costs in the future, but said that the benefits that Administrative USP Thomson provides—particularly high-security bed space—far outweigh the costs associated with the institution. The activation of any new institution, including Administrative USP Thomson, will increase BOP's operational costs. Further, according to DOJ officials, the purchase of Administrative USP Thomson provided bed space at a lower cost than constructing a new institution. The $14 million DOJ used from BOP's S&E and B&F accounts toward the purchase of Administrative USP Thomson came from accounts that BOP uses for the operation and maintenance of the federal prison system. Specifically, BOP uses funds from the B&F account's modernization and repair subaccount to address outstanding, unfunded modernization and repair items system-wide, otherwise known as BOP's maintenance and repair backlog. BOP maintains a list of these backlogged items that are in excess of $300,000, such as replacing roofs and boilers, that it considers important in order to rehabilitate, modernize, and otherwise repair physical structures and systems needed to maintain safety and security at its institutions and avoid costlier repairs in the future. BOP's list from 2012 totaled approximately $346 million and included 150 items, the most expensive of which was about $16 million. Since Thomson is a 13-year-old facility and will incur repair costs as it ages, it may add new items to BOP's existing list of unfunded maintenance and repair priorities. 
BOP’s S&E account funds staffing, inmate medical care, food, utilities, and services at various BOP institutions, such as inmate educational or vocational programs, among other things. In addition, during the nearly 2 years BOP has owned Administrative USP Thomson, it has spent approximately $1.8 million from the S&E account to maintain the empty facility while waiting for activation funding. BOP estimates that once Administrative USP Thomson is fully activated, it will cost $160 million each year to operate it as a high-security institution that primarily houses special management unit (SMU) inmates. Of that amount, $45 million is expected to cover food, medical, clothing, laundry, utilities, programming, and other related operating expenses, and $115 million is expected to cover staff salary and benefit costs. This $160 million estimated total annual operating cost is higher than that for all but two other BOP institutions, Federal Correctional Complex (FCC) Butner and FCC Coleman, based on fiscal year 2013 operational cost data. In BOP’s fiscal year 2014 congressional budget request, BOP estimated the need for 1,158 correctional positions at Administrative USP Thomson, 749 of which will be positions for correctional officers. SMUs require more staff than institutions with lower security levels because more staff are needed to provide constant inmate supervision. BOP officials told us that Administrative USP Thomson will require a large number of staff to operate because BOP plans to move some of the most dangerous SMU inmates housed elsewhere into Administrative USP Thomson. Administrative USP Thomson has a rated capacity of 2,100 beds—1,900 high-security SMU beds and 200 minimum-security beds at the onsite camp—and, according to BOP officials, the potential to use some of its high-security rated capacity to house up to 400 ADX inmates. 
Additionally, BOP officials told us that they estimate the institution eventually will be overcrowded by about 30 percent, given current and projected inmate population levels system-wide. Accordingly, they estimate that Administrative USP Thomson will ultimately house between 2,600 and 3,000 inmates. While this level of crowding would be lower than the current rate of 52 percent at BOP's other high-security institutions, the daily cost per inmate at Administrative USP Thomson will still exceed the daily cost per inmate for SMU bed space at USP Lewisburg, which is the only other institution whose USP entirely houses SMU inmates. Specifically, we estimate that BOP will spend between $146.12 and $168.60 each day per inmate housed at Administrative USP Thomson and its camp. In comparison, we estimate that BOP spends about $100.46 daily per inmate for all inmates at USP Lewisburg (i.e., the SMU and camp inmates). This cost rises to $123.33 daily for only those inmates in its SMU. (For more on SMUs, see GAO, Bureau of Prisons: Improvements Needed in Bureau of Prisons' Monitoring and Evaluation of Impact of Segregated Housing, GAO-13-429 (Washington, D.C.: May 1, 2013).) Although BOP has begun planning for Administrative USP Thomson's activation, its uncertainty regarding the number and security level of the inmates it plans to house there underscores the challenge of activating institutions without a comprehensive policy and without a schedule anchored in best practices, as discussed earlier in the report. If BOP had a policy in place to guide its activation of new institutions and a schedule that could account for different scenarios, it would be better positioned to determine more precisely the number and type of inmate it plans to house at Administrative USP Thomson and help ensure that the institution is activated within schedule estimates. BOP would also be better positioned to make adjustments to account for changes in resources, as well as variations in cost, while keeping within established time frames. 
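The daily per-inmate estimates for Administrative USP Thomson follow directly from the report's figures: the $160 million estimated annual operating cost spread over BOP's projected population range of 2,600 to 3,000 inmates and a 365-day year. A minimal sketch:

```python
# Per-inmate daily cost at Administrative USP Thomson, derived from the
# $160 million estimated annual operating cost and BOP's projected
# population range of 2,600 to 3,000 inmates.
annual_operating_cost = 160_000_000
for inmates in (3_000, 2_600):
    daily = annual_operating_cost / (inmates * 365)
    print(f"{inmates:,} inmates -> ${daily:.2f} per inmate per day")
```

This reproduces the $146.12 (at 3,000 inmates) to $168.60 (at 2,600 inmates) range cited above.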
BOP’s six new institutions will reduce crowding system-wide, but doing so has cost more and taken longer than BOP initially estimated because of internal and external challenges, many of which, according to BOP officials, are outside of BOP’s control. Doing more to guide the aspects of the activation process over which BOP does have control could prevent similar schedule delays in future activations. In particular, by using the BOP annual budget justifications to clearly communicate to Congress the factors that might delay activation, like institution locations, DOJ could more effectively mitigate activation challenges and better meet the bureau’s needs. Further, ensuring that the Central Office is analyzing staffing data at individual institutions in the activation process and developing effective strategies to mitigate staffing challenges would help expedite the activation process. In addition, by developing and implementing a comprehensive activation policy that incorporates the knowledge of staff with experience activating institutions, as well as the four characteristics of scheduling best practices, BOP would be better positioned to ensure that future activations are implemented in accordance with realistic cost and schedule commitments. While the success of new institution activations relies heavily on congressional direction regarding activation funding, BOP ultimately is responsible for the taxpayer dollars it spends on construction, acquisition, and activation of new institutions. Taking action to address challenges that BOP can control will help mitigate obstacles in ongoing and future activation of new institutions. 
To ensure that the challenges that BOP faces activating new institutions are clearly conveyed to decision makers, we recommend that, in future activations, the Attorney General use DOJ's annual congressional budget justification for BOP to communicate to Congress factors that might delay activations, such as challenges hiring staff and placing inmates associated with the locations of new institutions. To better address obstacles that occur during the activation process and to help ensure that institutions are activated within estimated time frames, including those institutions that do not currently have inmates, such as Administrative USP Thomson and USP Yazoo City, we recommend that the Director of the Bureau of Prisons take the following three actions: (1) direct the Central Office to analyze staffing data at individual institutions in the activation process to assess their progress toward reaching authorized staffing levels and use that assessment to develop effective, tailored strategies to mitigate those challenges; (2) develop and implement a comprehensive activation policy that incorporates the knowledge of staff with experience activating institutions; and (3) develop and implement an activation schedule that incorporates the four characteristics of scheduling best practices. We provided a draft of this report to DOJ for review and comment. DOJ provided written comments, which are reprinted in appendix IV, and technical comments, which we incorporated as appropriate. DOJ agreed with all four of the recommendations and outlined steps to address them. If fully implemented, these actions will address the intent of our recommendations. With respect to the first recommendation, DOJ agreed to use the annual congressional budget justification to communicate with Congress any factors that might affect schedules in future activations. 
Regarding the second recommendation, DOJ agreed that BOP’s Central Office will analyze staffing data at individual institutions in the activation process to assess progress toward reaching authorized staffing levels and use that assessment to develop effective strategies to mitigate those challenges. DOJ stated that BOP’s regional offices will monitor staffing levels and report to the Central Office quarterly. BOP’s Central Office will provide oversight of staffing at activating institutions and will work to develop effective strategies, such as using recruitment incentives, when hiring challenges occur at those institutions. In response to the third recommendation, DOJ agreed to develop and implement a comprehensive activation policy that incorporates the knowledge of staff with experience activating institutions. DOJ stated that BOP will use knowledgeable staff to develop a new institution activation handbook. However, DOJ did not state what information would be included in this new institution activation handbook or how it would differ from the current activation handbook template. Ultimately, BOP should have a consistent policy that staff at institutions can use during the activation process to ensure that all future activations follow the same process. Finally, for the fourth recommendation, DOJ agreed to develop and implement an activation schedule that incorporates the four characteristics of scheduling best practices. DOJ stated that a template of this schedule will be included in the new institution activation handbook and will take into account the best practices outlined in GAO’s Schedule Assessment Guide. Further, DOJ commented that BOP cannot begin the activation process until Congress provides the necessary funding, which occurs over multiple years. DOJ stated that it cannot be held to activation estimates included in its annual budget requests when it does not receive such funding. 
As we note in the report, each of the institutions in the activation process has experienced schedule slippages due to delays in receiving congressionally directed activation funding, which is outside of BOP's control, as well as challenges associated with institutions' locations. We also note that institutions have experienced delays even after BOP has received multiple years of congressionally directed activation funding, which indicates that there is more that BOP could do to ensure that institutions are activated within schedule estimates. If BOP had a schedule that reflected best practices, it could revise its estimates, when warranted, for when activation would be completed. The ability to make such schedule revisions would allow BOP to adjust for the risks associated with funding delays and more accurately reflect the status of activation. Finally, DOJ commented that Administrative USP Thomson, like every newly activated prison, will increase BOP's future operational costs. DOJ also stated that BOP's primary cost driver is the number of inmates, not the number of prisons. We agree that activating any new institution will increase BOP's operational costs, and that this is not specific to Administrative USP Thomson. We also agree, and have previously reported, that the number of inmates entering the federal prison system is the primary driver of operational costs, rather than the number of institutions that BOP operates. However, we believe it is important to note that any new institution that BOP acquires or constructs, including Administrative USP Thomson, will incur operation and maintenance costs in the future. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Attorney General of the United States, appropriate congressional committees, and other interested parties.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Each of the Federal Bureau of Prisons' (BOP) institutions in the activation process has had schedule slippages due to delays in receiving congressionally directed activation funding. Such delays can exacerbate existing staffing challenges related to recruitment and retention, as discussed in appendix II. None of the six institutions is at rated capacity—the number of inmates that a given institution can safely and securely house. Federal Correctional Institution (FCI) Mendota. BOP estimated in fiscal year 2008 that FCI Mendota would be fully activated by fiscal year 2010, but did not receive congressionally directed activation funding until fiscal year 2010. As a result, BOP subsequently revised this estimate to fully activate FCI Mendota in fiscal year 2011. After BOP received a second year of congressionally directed activation funding for FCI Mendota in fiscal year 2012, BOP determined the FCI would accept its first inmate in February 2012 and accepted the first inmate that same month. As of August 2014, FCI Mendota has not reached rated capacity—4 fiscal years after BOP's initial estimate—with 85 percent of its 1,152 beds occupied. FCI Berlin. BOP estimated in fiscal year 2008 that FCI Berlin would be fully activated by fiscal year 2010. However, BOP subsequently revised this estimate to fully activate FCI Berlin in fiscal year 2011 because it had not received congressionally directed activation funding.
After BOP received congressionally directed activation funding for FCI Berlin in fiscal year 2012, BOP determined that the FCI would accept its first inmate in January 2013 and accepted the first inmate 1 month later, in February. BOP received congressionally directed activation funding for FCI Berlin in fiscal year 2013. As of August 2014, FCI Berlin has not reached rated capacity—4 fiscal years after BOP’s initial estimate—with 83 percent of its total 1,152 beds occupied. A specific activation date is determined by the activating institution and regional office staff primarily based on funding, staffing, and training needs. FCI Aliceville. BOP estimated in fiscal year 2008 that FCI Aliceville would be fully activated by fiscal year 2011. However, BOP subsequently revised this estimate to fully activate FCI Aliceville in fiscal year 2012 because it did not receive congressionally directed activation funding. After BOP received congressionally directed activation funding for FCI Aliceville in fiscal year 2012, BOP determined that the FCI would accept the first inmate in June 2013 and accepted the first inmate 1 month later, in July. BOP received congressionally directed activation funding for FCI Aliceville in fiscal year 2013. As of August 2014, FCI Aliceville has not reached rated capacity—3 fiscal years after BOP’s initial estimate—with 77 percent of its total 1,536 beds occupied. FCI Hazelton. BOP estimated in fiscal year 2008 that FCI Hazelton would be fully activated by fiscal year 2012. However, BOP subsequently pushed back this estimate twice—once to fully activate FCI Hazelton in fiscal year 2013 and again to fully activate the institution in fiscal year 2014—because it had not received congressionally directed activation funding. After BOP received congressionally directed activation funding for FCI Hazelton in fiscal year 2013, it determined that the FCI would accept its first inmate in March 2014. 
BOP received congressionally directed activation funding for FCI Hazelton in fiscal year 2014. However, as of August 2014, the institution has not yet reached rated capacity—2 years after BOP’s initial estimate—with about 27 percent of its 1,152 beds occupied because of a leaking roof that had to be fixed before inmates were housed there. U.S. Penitentiary (USP) Yazoo City. BOP estimated in fiscal year 2009 that USP Yazoo City would be fully activated by fiscal year 2013. However, BOP subsequently revised this estimate to fully activate USP Yazoo City in fiscal year 2014 because it had not received congressionally directed activation funding. BOP received congressionally directed activation funding in fiscal years 2013 and 2014. According to BOP, it has not completed activation of the institution because it was waiting for a congressional response to its spend plan relating to fiscal year 2014 funding. BOP received responses from the Senate and House Committees on Appropriations in April and May 2014, respectively, supporting BOP’s plans to activate USP Yazoo City. As of August 2014, USP Yazoo City has not admitted any inmates. Administrative USP Thomson. BOP initially estimated that Administrative USP Thomson would be fully activated by fiscal year 2011. However, BOP subsequently revised this estimate to fully activate Administrative USP Thomson in fiscal year 2014 because it had not received congressionally directed activation funding in fiscal year 2012 or 2013. BOP received conflicting congressional direction regarding activation funding for Administrative USP Thomson in the committee reports corresponding to the DOJ/BOP fiscal year 2014 annual appropriations act. BOP subsequently included funding to activate that institution in its fiscal year 2014 spend plan, which was submitted to the Committees on Appropriations. BOP received responses from the Senate and House Committees on Appropriations in April and May 2014, respectively. 
However, those responses also included conflicting direction regarding activation funding for Administrative USP Thomson. As of August 2014, Administrative USP Thomson has not admitted any inmates. We analyzed data from the Office of Personnel Management's (OPM) human resources database, Enterprise Human Resources Integration (EHRI) Statistical Data Mart, on staffing levels at the three institutions in our review that are partially activated and found that each faced obstacles hiring staff up to authorized levels (see table 4). Specifically, although FCI Aliceville, FCI Berlin, and FCI Mendota are all partially activated, none has a full complement of staff, as demonstrated by the number of employees on board each fiscal year. For example, FCI Berlin is authorized to fill 378 positions and has been staffing the institution since fiscal year 2010 (even though BOP first received activation funds for that institution in fiscal year 2012), yet it was only 67 percent staffed at the end of fiscal year 2013. Furthermore, our analysis of OPM data demonstrates that two of the three partially activated institutions hired more staff in each consecutive fiscal year through fiscal year 2013, even as authorized positions remained unfilled (see table 5). As a result, these staffing challenges have implications for activating institutions because, as BOP officials told us, being able to receive additional inmates and thereby reach rated capacity relies, in part, on having enough staff to provide adequate security for those inmates. In addition to facing challenges recruiting qualified applicants, officials from two of the six institutions we visited reported that they faced challenges in retaining both new hires and experienced BOP staff who transferred from other BOP institutions.
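The capacity and staffing figures in this appendix reduce to simple ratios: occupied beds over rated capacity, and on-board staff over authorized positions. The sketch below recomputes those ratios from the numbers reported above; the absolute occupied-bed and on-board staff counts are back-calculated from the reported percentages and are therefore approximate, not figures taken directly from BOP data.

```python
# Minimal sketch: occupancy and staffing ratios from the figures in this
# appendix (as of August 2014). Counts marked "derived" are back-calculated
# from the reported percentages, not reported directly.

RATED_CAPACITY = {
    "FCI Mendota": 1152,
    "FCI Berlin": 1152,
    "FCI Aliceville": 1536,
    "FCI Hazelton": 1152,
}

# Derived occupied-bed counts (reported occupancy: 85%, 83%, 77%, about 27%).
OCCUPIED = {
    "FCI Mendota": 979,
    "FCI Berlin": 956,
    "FCI Aliceville": 1183,
    "FCI Hazelton": 311,
}

def occupancy_rate(institution):
    """Share of rated capacity occupied at the given institution."""
    return OCCUPIED[institution] / RATED_CAPACITY[institution]

for name in RATED_CAPACITY:
    print(f"{name}: {occupancy_rate(name):.0%} of {RATED_CAPACITY[name]} beds")

# Staffing uses the same ratio: on-board staff over authorized positions.
# FCI Berlin: 378 authorized positions, 67 percent staffed at the end of
# fiscal year 2013, i.e. roughly 0.67 * 378 = 253 staff on board (derived).
berlin_on_board = round(0.67 * 378)
print(f"FCI Berlin staff on board (derived): {berlin_on_board}")
```

The same two ratios underlie both table 4 and the rated-capacity status of each institution discussed above.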
Officials said that BOP staff transfer to activating institutions to gain experience before receiving promotions elsewhere, so some staff do not stay in those new institutions over time. Further, officials from all of the institutions we visited noted that certain positions, such as those for medical staff, are particularly challenging to fill because, among other things, it is difficult to provide competitive salaries for those positions compared with what those staff could make in the private sector. Officials from BOP’s Central Office also noted that recruiting medical staff, such as doctors and nurses, was a challenge system-wide, not just within the institutions undergoing activation. In fact, one of BOP’s objectives in its strategic plan is to attract and retain competent health care professionals using a range of strategies, including recruitment and retention bonuses. Similarly, our analysis of OPM data demonstrates that partially activated institutions have had staff sever employment with BOP (see table 6). For instance, each year more staff have separated from employment at FCI Mendota than in the prior year, for a total of 40 employees—about 21 percent of its total hires—from fiscal years 2010 through 2013. Additionally, according to data provided by BOP from its personnel database for fiscal year 2014—through July—an additional 11 employees separated from FCI Mendota. We determined that BOP’s schedules for activating the six institutions in our review are not reliable based on our assessment of whether those schedules met best practices as outlined in our Schedule Assessment Guide. In May 2012, we issued GAO’s Schedule Assessment Guide to provide guidance to auditors in evaluating government programs. According to that guide, the success of a program depends, in part, on having an integrated and reliable master schedule that defines when and how long work will occur and how each activity is related to the others. 
A schedule is necessary for government programs for many reasons. The program schedule provides not only a road map for systematic project execution, but also the means by which to gauge progress, identify and resolve potential problems, and promote accountability at all levels of the program. A schedule provides a time sequence for the duration of a program's activities and helps those involved understand both the dates for major milestones and the activities that drive the schedule. A program schedule is also a way to develop a budget that incorporates the time it will take to complete phases of the project. Moreover, the schedule is an essential basis for managing trade-offs among cost, schedule, and scope. Among other things, scheduling allows program management to decide between possible sequences of activities, determine the flexibility of the schedule according to available resources, predict the consequences of managerial action or inaction on events, and allocate contingency plans to mitigate risks. Further, an integrated and reliable schedule can show when major events are expected to occur as well as the completion dates for all activities leading up to them, which can help determine if the program's parameters are realistic and achievable. Our research has identified 10 best practices associated with effective schedule estimating, which can be collapsed into four general characteristics: comprehensive, controlled, well constructed, and credible. After reviewing documentation BOP submitted for its activation schedule estimates and conducting interviews with BOP officials involved in activations, we determined that the documents used as schedules by the six activating institutions are not reliable. Collectively, the six BOP institutions minimally met two of the four characteristics of a reliable schedule and did not meet the remaining two characteristics, as summarized in table 7.
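The overall determination rests on a simple roll-up: score each of the 10 best practices at each institution, average each practice across the six institutions, then average the practices under each of the four parent characteristics. A minimal sketch of that aggregation follows; the numeric scale, the example scores, and the exact grouping of practices under characteristics are assumptions for illustration, not GAO's actual assessment data.

```python
# Sketch of the best-practice score roll-up. Scores use an assumed
# 0 (not met) to 4 (fully met) scale; the example scores and the grouping
# of practices under characteristics are illustrative assumptions.
from statistics import mean

# best practice -> parent characteristic (illustrative grouping)
CHARACTERISTIC_OF = {
    "capture all activities": "comprehensive",
    "assign resources to activities": "comprehensive",
    "establish activity durations": "comprehensive",
    "sequence all activities": "well constructed",
    "identify the critical path": "well constructed",
    "ensure reasonable total float": "well constructed",
    "confirm horizontal and vertical traceability": "credible",
    "conduct a schedule risk analysis": "credible",
    "update the schedule with actual progress": "controlled",
    "maintain a baseline schedule": "controlled",
}

# best practice -> one assumed score per institution (six institutions)
scores = {practice: [1, 1, 0, 1, 0, 1] for practice in CHARACTERISTIC_OF}

# Step 1: average each best practice across the six institutions.
practice_avg = {p: mean(s) for p, s in scores.items()}

# Step 2: average the practice scores under each parent characteristic.
by_characteristic = {}
for practice, avg in practice_avg.items():
    by_characteristic.setdefault(CHARACTERISTIC_OF[practice], []).append(avg)
characteristic_avg = {c: mean(v) for c, v in by_characteristic.items()}

for characteristic, avg in characteristic_avg.items():
    print(f"{characteristic}: {avg:.2f} (0 = not met, 4 = fully met)")
```

The two-step averaging means a characteristic's rating can mask uneven performance across its component practices, which is why the tables that follow report each best practice separately.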
To arrive at this determination, we examined the extent to which each of BOP's six activating institutions adhered to each of the 10 best practices. We then assigned a corresponding score. We took the average score for all six institutions, by best practice, and collapsed the 10 best practices into the four characteristics to get an average that reflected an overall assessment by schedule characteristic. As summarized above, BOP's six activating institutions minimally met two of the four characteristics of a reliable schedule and did not meet the remaining two characteristics. Each of the sections below provides greater detail about where BOP's practices were deficient. Comprehensive. We reviewed the activation handbooks and staffing timeline documents that each of the six institutions completed and found that they minimally met the requirements for a comprehensive schedule, as illustrated by table 8. To guide activation, BOP's Central Office provides each activating institution with a standard activation handbook template and general staffing timeline. With respect to the activation handbook template provided to activating institutions, this document contains some elements of a work breakdown structure, which is an important feature in a comprehensive schedule. In particular, the activation handbook template lists necessary tasks, organized by responsible departments or contractors. However, the activation handbook template describes these tasks, or activities, only in general terms, and none of the six institutions tailored the tasks to meet their specific requirements, which limits the bureau's ability to oversee all activation activities. Similarly, the activation handbook template does not fully include all work associated with each deliverable and does not identify the specific personnel within the department responsible for each activity—and none of the six institutions modified the template to provide this information.
For example, the activation handbook template specifies that the Correctional Services department is responsible for coordinating with on-site project staff to evaluate all entrances, doors, locks, intercoms, cameras, and so forth. However, it does not provide additional detail about what is required as part of these evaluations, how these requirements might differ by institution, which specific staff should be executing or overseeing these activities, or what the costs associated with this activity might be. Two of the six institutions adapted the activation handbook template to provide detail on some resource costs associated with activation. In the case of FCI Berlin, that institution’s activation handbook included details on the associated costs by department, such as Correctional Services, Food Services, or Medical Services, and the materials those departments would need to provide services to inmates, such as vaccines. However, FCI Berlin’s activation handbook did not provide sufficient information about why the specific resources would be needed to keep the activation on schedule and the impact of not having the resources. Finally, none of the six institutions included any reference to project duration in its respective modified activation handbook. For example, the activation handbook template provided by BOP to activating institutions states that the Case Management Coordinator is responsible for establishing transfer arrangements for inmates and coordinating assignments for the transfers’ work areas, but it does not provide an indication of how long the activity may take to complete. Without information about the estimated length of time required to complete each activity, management cannot accurately identify the staffing resources required to complete it, assess the progress of the activation process, or establish realistic dates for institution activation. 
With respect to the general staffing timeline template BOP’s Central Office provides, this document roughly identifies when particular staff resources are needed based on the anticipated activation date. For example, the general staffing timeline template specifies that 7 months prior to activation, the institution should hire the Warden, Executive Assistant, and Secretary within the Executive Staff department. However, the general staffing timeline template is based on the anticipated activation date rather than based on the activities each of these individuals would be doing—and none of the six institutions made modifications to elaborate on these activities. According to best practices for comprehensive schedules, resources, such as staff, should be assigned to particular activities in order to facilitate completion of these activities. Because of these deficiencies, the information contained in each of the six institutions’ activation handbooks and staffing timelines does not assist management in forecasting whether activities will be completed as scheduled or as budgeted. Further, these documents do not allow insight into the allocation of resources, increasing the likelihood that the activation process will not be completed as anticipated and limiting BOP’s ability to ensure accountability for the total scope of work. Controlled. The activation handbooks and staffing timeline templates that each of the six institutions modified minimally met the requirements for a controlled schedule, as illustrated in table 9. Two of the activating institutions provided versions of BOP’s activation handbook and staffing timeline templates that contained some indication that information on key dates and activities was updated at some point in time by the activating institutions. 
However, those activation handbook and staffing timeline templates modified by each institution did not indicate whether they had been updated at regular intervals, or that they reflected the actual status of the respective activations. For example, the activation handbook and staffing timeline templates modified by officials at FCI Aliceville that were used to guide that activation included handwritten dates for when specific steps in the activation process were expected to be completed, but these did not appear to be updated systematically or include an indication of when or if those activities had been completed. Further, none of the activating institutions’ activation handbooks or staffing timelines included the status of key milestone dates, such as whether specific activities had been completed, or when the activation should be completed. In addition, none of the activating institutions used activation handbooks or staffing timelines that described the critical risks that the institution faced in meeting its goals for activation or contingencies if those risks were realized. As BOP officials have noted, meeting scheduled dates for activation is often dependent on receiving specific activation funding as planned, and, according to best practices for a controlled schedule, such risks should be documented to ensure reliability. Without regularly updating the schedule based on the current status of the activation at each respective institution, BOP is limited in its ability to monitor activation progress or make decisions on how to mitigate risk or allocate resources for activating institutions. Well constructed. The activation handbooks and staffing timeline documents that each of the six institutions modified did not meet the requirements for a well-constructed schedule, as illustrated in table 10. 
The activation handbook and staffing timeline templates modified by activating institutions did not always provide specific information regarding start and finish dates, durations, or sequencing of activation-related activities. For example, BOP's general staffing timeline template identifies the sequence in which staff should be hired, which is an important feature of a well-constructed schedule, but there is no sequencing of the activities listed in the activation handbook template. For instance, the general staffing timeline specifies that within the Facilities department, the Communication Technician should be hired 7 months prior to activation, while the Facility Manager and Electrician and others should be hired 6 months prior to activation, but it does not provide contingencies if predecessor positions cannot be filled "on time" or describe the effects, if any, of filling positions out of sequence. Aside from staffing-related issues, the activation handbook template does not contain information on the order in which any of the activation tasks should occur. As a result, BOP does not have insight about the interdependencies between activities or the way in which early delays in some activities could affect activities later on as well as the overall activation completion date. Additionally, without identifying linkages between activities, BOP does not know the critical path of the activation process—that is, which activities can or cannot be delayed if the overall schedule is to be met. This prevents the agency from providing Congress with reliable timeline estimates or anticipated activation dates. Credible. The activation handbook and staffing timeline templates that each of the six institutions modified did not meet the requirements for a credible schedule, as illustrated in table 11.
With respect to the best practice of vertically and horizontally aligning activities, two institutions adapted BOP's activation handbook template to insert targeted due dates for selected activities; however, neither identified subset activities or linked overall activities in any specific order. Without this vertical alignment, BOP is not positioned to ensure that subactivities are on track for an overall activity's completion. Similarly, without a clear sequencing of all activities, which is horizontal integration, BOP also is limited in the extent to which it can monitor overall progress toward activation. For risk assessment, neither the activation handbook template nor the staffing timeline contains a schedule risk analysis. Such an analysis typically focuses on key risks and how they affect the schedule's activities. Without a schedule risk analysis, BOP cannot determine the likelihood of meeting the project's completion date or identify the activities or risks, such as funding delays, that are most likely to delay activation. The activation handbooks and general staffing timeline templates that BOP has developed and institutions have modified during activation are positive steps toward a baseline schedule that could be used to guide future institution activations, because they provide some level of detail regarding the activities required for activation. However, the activation handbook and general staffing timeline contain only limited information consistent with best practices and therefore cannot be used by management to reliably measure, monitor, or report on performance. In addition to the contact listed above, Joy Booth, Assistant Director; Pedro Almoguera; Billy Commons; Adam Couvillion; Emily Gunn; and Jan Montgomery made key contributions to this report. Also contributing to this report were Lorraine Ettaro, Susan Hsu, Elizabeth Kowalewski, Karen Richey, Rebecca Shea, and William Varettoni.
The federal inmate population has increased over the last two decades, and as of July 2014, BOP was responsible for the custody and care of more than 216,000 inmates. To handle the projected growth of between 2,500 and 3,000 or more inmates per year from 2015 through 2020, BOP has spent about $1.3 billion constructing five new institutions and acquiring one in Thomson, Illinois. BOP is activating these institutions by staffing and equipping them and populating them with inmates. GAO was requested to review BOP's activation process of newly constructed and acquired institutions. GAO reviewed, among other things, (1) the extent to which BOP is activating institutions within estimated timeframes and has an activation policy or schedules that meet best practices, and (2) why DOJ purchased Thomson and how the purchase affected system wide costs. GAO reviewed BOP budget documents from fiscal years 2008 to 2015 and assessed schedules against GAO's Schedule Assessment Guide. GAO conducted site visits to the six institutions, interviewed BOP officials, and reviewed staffing data from fiscal years 2010 through 2013. The Department of Justice's (DOJ) Federal Bureau of Prisons (BOP) is behind schedule activating all six new institutions—the process by which it prepares them for inmates—and does not have a policy to guide activation or an activation schedule that reflects best practices. BOP is behind schedule, in part, because of challenges, such as staffing, posed by the locations of the activating institutions. According to BOP officials, delays in receiving congressionally directed activation funding can exacerbate these challenges (see fig.). None of the six institutions is fully activated, or at rated capacity, as they do not house the number of inmates they are designed to safely and securely house. BOP does not effectively communicate to Congress how the locations of new institutions may affect activation schedules. 
BOP officials said that when directed by Congress to investigate a location, they consider this direction to focus on construction at that site. DOJ and BOP could more effectively manage activation timelines and costs by using the BOP annual budget justification to communicate to Congress the factors associated with certain locations that can delay activations, such as challenges hiring staff and placing inmates in institutions. Also, BOP officials said they review staffing data system-wide, but they have not prioritized an analysis of such data at the institution level. Analyzing staffing data on institutions in the activation process could help BOP assess its staffing progress and tailor effective mitigation strategies. Finally, BOP lacks a comprehensive activation policy to guide activations, as well as an activation schedule that reflects best practices, and it has largely relied on staff's past experience to complete ongoing activations. Developing and implementing a comprehensive policy and a schedule that reflects best practices could better position BOP to meet its estimated timeframes and activation costs. DOJ purchased Thomson to help reduce crowding among inmates requiring high levels of security. Once fully populated, Thomson will reduce BOP-wide crowding at the high-security level by 16 percent. Thomson will cost about $160 million annually to operate once fully activated, adding to BOP's system-wide costs. BOP officials said Thomson will provide benefits, such as high-security bed space, which outweigh the costs associated with the institution. GAO recommends that DOJ use its annual budget justification to communicate to Congress factors that might delay prison activation, and that BOP analyze institution-level staffing data and develop and implement a comprehensive activation policy and a schedule that reflects best practices. DOJ concurred with all of GAO's recommendations.
The Air Force and Army cite the increasingly complex training requirements needed to prepare for the ever more lethal battlefield environment as a factor that has led to greater reliance on flight simulators. A flight simulator is a system that tries to realistically replicate, or simulate, the experience of flying an aircraft. Flight simulators range from video games to full-sized cockpit replicas mounted on hydraulic (or electromechanical) actuators and controlled by state-of-the-art computer technology. According to Air Force and Army officials, aircraft simulators are a cost-effective way of helping to develop and refine operational flight skills. Simulators can facilitate training that might be impractical or unsafe if done with actual systems and allow for concentrated pilot practice in selected normal and emergency actions. Simulators also can train operators and maintainers to diagnose and address possible equipment faults, and enhance proficiency despite shortages of equipment, space, ranges, or time. In the late 1990s, the Air Force and Army were faced with increasingly obsolete simulators and the need to quickly acquire up-to-date pilot and aircrew training. In 1997, the then-Commander of the Air Force's Air Combat Command proposed an innovative approach of buying training as a service, under which the contractors would own, operate, and maintain the simulator hardware and software. The simulator service contracts are one component of a much broader effort, now known as the DMO program. The DMO goal is to provide state-of-the-art simulator training on demand at the location of the trainee, with the ultimate vision of networking different sites together to create more realistic flying scenarios. Plans call for each fighter unit eventually to be equipped with high-fidelity simulators.
As of the fiscal year 2002 budget, the DMO program was formalized in the Air Force budget with the assignment of a program element line item that combined previous program elements for the various simulator systems. For fiscal year 2006, over $200 million was budgeted for the program. In the early 2000s, Army use of rotary-wing aviation simulation training was limited because the simulators being used were grossly obsolete and based on late 1970s’ technology. To revamp its helicopter training, the Army in late 2001 began the Flight School XXI program. Following the Air Force’s lead, the Army decided to acquire up-to-date simulator training using a service contract. Congress and the Office of Management and Budget (OMB) have recently addressed the growing level of procurement of services. For example, Congress included provisions in Section 801 of the National Defense Authorization Act for Fiscal Year 2002 designed to improve management and oversight of procurement of services. To ensure that DOD acquires services by means that are in its best interest and managed in compliance with applicable statutes, regulations, directives, and other requirements, the Act required DOD to establish a service acquisition management structure, comparable to the management structure that applies to the procurement of products. In September 2003, we reported that DOD and the military services had a management structure in place for reviewing individual services acquisitions valued at $500 million or more, but that approach did not provide a departmentwide assessment of how spending for services could be more effective. Also, OMB Circular A-11’s Appendix B, “Budgetary Treatment of Lease-Purchases and Leases of Capital Assets,” was amended in 2005 to require agencies to submit to OMB for review any service contracts that require the contractor to acquire or construct assets valued over $50 million. 
While these provisions do not apply to the previously awarded simulator training contracts, future replacement contracts will be covered. All of the Air Force and Army simulator service contracts are funded with O&M funds. O&M funds are typically used for such things as military force operations, training and education, and depot maintenance. The contracts are requirements contracts, meaning that the government, within available funds, shall order from the contractor all the training services specified for each of the aircraft platforms that are required during the effective performance periods. Additionally, each contract contains language limiting the government’s liability in the event the contract is terminated. For example, the Air Force F-15C contract states that the government reserves the right to terminate the contract for its sole convenience and that such termination prior to the issuance of a funded task order shall result in no payment to the contractor of any amount for any work performed or costs incurred. Table 1 provides additional descriptive information for each contract. The military services are relying on industry to capitalize the required up-front investment needed to acquire simulator hardware and software, with the understanding that the contractors will amortize this investment by selling training services by the hour. Each contract establishes operating hours and the hourly payment rates for the life of the contracts, with rates structured to provide the contractor with higher income in the initial years of service. In calendar year 2004, for example, if the F-16 contractor provided Shaw AFB with simulator availability that met 95 percent of the required system elements, the hourly rate would be $5,225, whereas in calendar year 2006, it would drop to $709 per hour. We have previously identified the need to examine the appropriate role for contractors to be among the challenges in meeting the nation’s defense needs in the 21st century.
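The effect of this front-loaded rate structure can be illustrated with a short sketch. The two hourly rates are the F-16 figures cited above; the annual availability hours are a hypothetical value chosen only to show the scale of the early-year premium, not a figure from the contract.

```python
# Sketch of how a front-loaded hourly rate structure concentrates
# contractor income in the initial years of service. The 2004 and 2006
# hourly rates are the F-16 contract figures cited in this report; the
# annual hours are hypothetical, for illustration only.

RATE_2004 = 5225       # dollars per hour, calendar year 2004
RATE_2006 = 709        # dollars per hour, calendar year 2006
ANNUAL_HOURS = 2000    # hypothetical hours of simulator availability per year

revenue_2004 = RATE_2004 * ANNUAL_HOURS
revenue_2006 = RATE_2006 * ANNUAL_HOURS

print(f"2004 revenue: ${revenue_2004:,}")                      # $10,450,000
print(f"2006 revenue: ${revenue_2006:,}")                      # $1,418,000
print(f"Early-year premium: ${revenue_2004 - revenue_2006:,}") # $9,032,000
```

At these rates, a year of service in 2004 would yield roughly seven times the revenue of the same hours in 2006, which is how the structure lets the contractor recoup its up-front investment early.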
We recently reported that the government’s increasing reliance on contractors for missions previously performed by government employees highlights the need for sound planning and contract execution. The structure of the simulator service contracts was heavily influenced by mid-1990s’ acquisition reform initiatives such as the Federal Acquisition Streamlining Act of 1994 and the Clinger-Cohen Act of 1996. These Acts encouraged agencies to use commercial acquisition procedures as a way to streamline the acquisition process. Differences under commercial versus non-commercial procedures pertain, for example, to the contracting officer’s determination of price reasonableness, the government’s right to inspect and test, and government rights to acquire technical data. Appendix III outlines these and other key differences. The Air Force contracts for simulator training are structured as commercial acquisitions, but the Army’s is not. Army officials told us they could not justify calling the requirement “commercial” because the simulators would be configured to reflect combat helicopters, which do not exist in the commercial market. In August 2005, a DOD Inspector General review of the procurement procedures for the F-16 contract concluded that the simulator service did not meet the definition or intent of a commercial service and recommended that the Air Force not use commercial procedures for the re-competed F-16 contract. The Air Force is using non-commercial procedures for the new contract. To allow for contractor recoupment of up-front investment, the strategy to acquire simulator services envisioned longer duration contracts. This coincided with practices in commercial industry, where long-term relationships between buyer and seller were becoming common. The Air Force and Army adopted this approach by including award-term incentives in the contracts.
This incentive can best be described as a variant of an award-fee incentive, where the contractor is rewarded for excellent performance with an extension of the contract period instead of additional fee. Under the award-term concept, an assessment of the contractor’s performance is presented to the term determining official, who unilaterally determines whether to award an extension or a reduction to the contract ordering period. The potential total years of contract performance under the simulator contracts range from 13 to 19.5 years. Appendix IV contains the details of each contract. Award-term incentives are relatively new in government contracting and are not addressed in the Federal Acquisition Regulation (FAR). Several key players are involved with acquisition and use of simulator training.

For the Air Force:

Air Combat Command: The requiring entity—the user of simulator training services—is located at Langley AFB, Virginia. The command trains, equips, and maintains combat-ready forces for rapid deployment and employment.

Aeronautical Systems Center: This organization is the acquisition agency for the simulator contracts. Located at Wright-Patterson AFB, Ohio, it manages development, acquisition, modification, and in some cases, sustainment for a wide variety of aircraft and related equipment programs. The center develops attack, bomber, cargo, fighter, trainer, and reconnaissance aircraft for the Air Force.

Air Force fighter units: These are the users of the simulator training, which currently is taking place in 10 fighter units.

For the Army:

Fort Rucker, Alabama: Fort Rucker is the requiring entity for the Army’s helicopter flight simulator services. It is the home of all Army aviation flight training and the location of the initial training for new aviators, known as Flight School XXI. The types of helicopters used in Flight School XXI training are the TH-67 basic training helicopter, Chinook, Blackhawk, Apache, and the Attack Reconnaissance Helicopter. Unlike the Air Force’s multiple sites, the helicopter simulators provided under the service contract are located at only this one training site, not at each operational unit.

Army Program Executive Office for Simulation, Training and Instrumentation: This office’s mission is to provide training, testing, and simulation solutions for soldier readiness. The office is co-located in Orlando, Florida, with the Naval Air Systems Command, which awarded the contract on behalf of the Army.

Both the Air Force and Army were faced with obsolete simulators due to decisions not to devote sufficient procurement funds to upgrade existing simulator hardware and software. The decision to buy simulator training as a service allowed use of O&M funds, which would alleviate the need to compete for procurement funds. Further, it was envisioned that service contracts would allow for automatic simulator upgrades to match the changing aircraft configurations, because industry would be responsible for acquiring, operating, and maintaining the simulators and keeping them concurrent. However, the decision to embark on a services approach was not supported by a thorough analysis of the costs and benefits, despite a DOD directive providing that the acquisition of simulators is to be based on an evaluation of the benefits and trade-offs of potential alternative training solutions. The difficulty associated with competition for limited procurement dollars was a key factor in the decision to turn to service contracts for war-fighting training. Frequently, simulators have lost out in this competition and ended up under-funded. In 1997, the Air Force identified simulators for four aircraft—the F-15C, F-16, F-15E, and AWACS—as “obsolete or grossly non-concurrent” due to age, technological obsolescence, and lack of concurrency with operational aircraft. By early 2002, the Army was also faced with non-concurrent helicopter simulators, and field unit commanders were reporting decreased unit readiness.
For example, while the goal of the training at Fort Rucker is to produce aviators trained at a proficiency level of two (with level one being the highest), Army officials reported that most of the aviators were leaving school with only a proficiency level of three. These degraded situations existed despite a DOD directive that provides for the military services to ensure that all development, procurement, operation, and support costs for the acquisition of training simulators were programmed and funded. Recognizing the need to keep simulators current with aircraft configurations, particularly as the use of simulators to substitute for live flying hours was rising, the Air Force issued specific guidance on training devices. For example, Air Force Instruction 36-2248, Operation and Maintenance of Aircrew Training Devices, provides that funding be established for simulator modifications concurrently with modifications to the weapon system. Also, Air Force Instruction 36-2251, Management of Air Force Training Systems, provides that the training system receive the same precedence rating as the prime mission system it supports and the same visibility, funding, and documentation. Nevertheless, Air Force funding decisions had not kept flight training simulators for the four aircraft systems concurrent with aircraft configurations. Also in the late 1990s, the Air Combat Command had unexpended O&M flying hour funds available due to flight crew deployments and obstacles in scheduling training. Use of these funds for service contracts would alleviate the need to compete for procurement funds in an increasingly tight arena. 
The competition for procurement dollars was also a factor for the Army, which noted that the funds necessary to maintain and upgrade its helicopter training simulators had “not competed effectively against other Army operational and logistics requirements.” Air Force acquisition officials conducted market research to determine how civilian airlines acquired flight training. They found “turnkey” training services contracts in place in the commercial airline industry. These officials envisioned that services contracts would provide quicker state-of-the-art pilot and aircrew training and keep up with the rapid pace of technology development by shifting the responsibilities for simulator ownership, operation, and maintenance from the government to the contractor. Further, with the contractor responsible for any development, production, and testing necessary to ready the simulators for use, the Air Force saw that it would be relieved of these multiple acquisition efforts, an important factor given the recently downsized acquisition offices. In addition, a stated benefit of the service contracting approach for simulator training as initially implemented was the streamlining or reduction in government oversight. Since commercial acquisition procedures were used to buy these services, fewer government system reviews were required. When it decided to take a new approach to solve its helicopter simulator concurrency problems, the Army conducted its own market research, solicited business solutions from industry, and conferred with Air Force DMO officials. Neither the Air Force nor the Army thoroughly analyzed the costs and benefits of alternative approaches before pursuing this new approach.
DOD’s August 1986 Directive 1430.13, Training Simulators and Devices, provides that the acquisition of simulators be based on an analysis of the training need, the potential use of existing devices to satisfy that need, and an evaluation of the benefits and trade-offs of potential alternative training solutions. A 1999 report to the Air Force on the DMO program also noted the importance of identifying key business factors before embarking on a major acquisition. As a result of the failure to conduct a thorough review of the various alternatives to solving the problem of non-concurrent simulators, decision makers lacked information on the potential cost and benefit estimates that would be encountered should facts, circumstances, and assumptions change. The historical documents we reviewed demonstrate that within the Air Force there was uncertainty about the cost-effectiveness of the service contract approach to simulator training. Although the potential for reduced costs through outsourcing certain responsibilities and eliminating government logistics support was cited in some decision documents, other documents indicated that the service contract approach would not cost significantly more or less than the traditional ownership strategy. Air Force officials told us that a comprehensive study of various options for providing simulator training had been commissioned. However, they have been unable to locate it. In preparing to re-compete the F-16 simulator contract, the DMO program office completed a formal business case analysis in November 2005, in response to a July 2005 congressional request. Air Force officials acknowledged that, if not for the request, the formal business case analysis would not have been completed. The Army completed two business case analyses prior to contracting for simulator services under the Flight School XXI program, but the analyses lacked sufficient detail to provide a thorough examination of the pros and cons of the new approach.
The scope of the analyses was limited to determining (1) what length of service contract would be appropriate to justify the large up-front investment required of the contractor and (2) whether projected funding was sufficient to meet program costs in the event the Army was required to follow the traditional acquisition approach. The Army provided us with decision briefings that set forth various options for simulator training, but the documents ruled out all but the service contract approach without providing supporting analyses of the costs and benefits associated with each alternative. Further, the traditional method, where the government bought the simulators, was eliminated as an option due to its perceived inability to meet the Flight School XXI 15-month start-up time frame. This schedule eventually slipped more than 10 months with, according to Army officials, no detrimental effect on student training schedules. The briefings do not address the possibility that the 15-month time frame was flexible. Air Force and Army officials told us the new simulators are big improvements over what they had previously. However, the Air Force has faced funding uncertainties using O&M funds for the contracts, and subsequent schedule slippages have resulted in fewer simulator sites activated than planned. In particular, the F-16 simulator training contractor, citing the reduced activations, notified the Air Force as early as May 2001 that it was unable to provide simulator services as originally agreed and wished to restructure the contract. Later, the company cited Air Force funding problems and schedule slips as the basis for claims against the Air Force and notified the Air Force that its financial situation under the contract was no longer viable.
The Air Force will let the current F-16 simulator training contract expire in June 2007 and is in the process of re-competing the contract, which will likely result in a training gap for pilots and additional costs to the Air Force. At the locations we visited, officials told us they were pleased with the quality of the simulator training, particularly when compared with the level of training they had in the past. Pilots are routinely surveyed about the training they receive, and officials told us that, generally, the results have been very positive. For example, the Director of Operations for the F-16 mission training center at Shaw AFB told us that the simulation hardware and software are outstanding and that the training received by young pilots is great. Initial training began under the Army’s Flight School XXI contract in November 2005. While not all planned simulators have been activated, according to Flight School XXI officials the school is now meeting its training goal and producing aviators with a proficiency level of two, an improvement over the old regime. As of July 2006, the Air Force had 16 training simulator sites operational, as shown in table 2. The use of O&M funds under the service contract approach was intended to overcome the situation the military services had faced in the past, when internal decisions on funding priorities had resulted in inadequate procurement funds being made available for simulators. However, almost from the start of the DMO program, funding has been less than projected. As a result, schedule slippages have occurred for many sites compared with original Air Force requirements set forth in acquisition plans. Army officials told us that, to date, O&M funding for the Flight School XXI program has not been reduced. Army officials committed at the outset to fully fund the contract in accordance with the originally projected funding profile and, to date, the funding level has remained stable.
As early as the 2002 budget planning process, Air Force budget requests did not fully fund planned activations, with a total difference between estimated requirements and funding of $524 million over the future year defense plan, as shown in table 3. An October 2000 Air Force “roadmap” report stated that this funding scenario would “severely impact the executability of the current contracted efforts, as well as the entire vision.” Further, other Air Force decisions, in reaction to fiscal constraints and programs viewed as higher priority, have led to additional funding differences. The Air Combat Command sought to mitigate the impact of these funding differences by shifting flying hour funds into the DMO program in 2003. Table 4 depicts some key events pertaining to the program’s funding impacts and the command’s attempts to secure additional O&M funds. Largely as a result of these funding uncertainties, many Air Force mission training centers have been activated significantly behind the planned schedule contained in acquisition management plans. These schedule slippages for the AWACS, F-15C, and F-16 are shown in tables 5, 6, and 7, respectively. Air Force officials told us that since most of the original dates were “notional,” meaning that they were not firm requirements, but rather were intended to provide contractors with information about potential mission training center sites, the timely achievement of the schedules was not required. However, contractor representatives told us that their proposals relied upon the planned site activation schedules contained in the contracts, and delays could directly affect their profitability. The Army has twice rebaselined the activation schedules for the Flight School XXI simulators—the TH-67 and the advanced aircraft virtual simulators (AAVS)—as shown in figure 1. In the original contract, the TH-67 basic training helicopter simulators were scheduled to begin operation in December 2004, 15 months after contract award.
The Flight School XXI project manager could not provide documentation to support this time frame and, in fact, told us that the flight school could not have been ready for students at that time. The Army subsequently rebaselined the schedule to allow for an 8-month delay. Similarly, the Army revised the AAVS activation schedule—originally set at 18 months after contract award—to allow a 7-month delay. According to the project manager, these delays resulted from a protest of the contract award by a competitor and the contractor’s renegotiation with its subcontractors. The schedule was rebaselined a second time, as shown above, because the contractor was not able to meet the adjusted schedule. The Army agreed to the further slippages in exchange for the contractor’s providing two extra terrain databases as consideration. Despite these schedule changes, the necessary simulators and facilities were ready for the first flight school class in November 2005, in accordance with the final revisions to the contract schedule. The risk the government faces if a contractor fails to perform as expected under the service contracts is heightened because the government does not own anything—the hardware, software, and data rights are owned by the contractor. In the traditional approach, the government would own the hardware and any software or data it had acquired rights to. While there would be no guarantees as to the condition of these items if the contractor had failed to perform, the government would at least be able to provide them to the replacement contractor, who could potentially make use of them under a new contract. The situation the Air Force has faced with the F-16 simulator contract is illustrative of the potential for not only a degradation in training, but also increased costs to the government when contract performance does not occur as planned. 
From the outset, the Air Force believed that the F-16 simulator contractor’s cost estimate was low, as it was about $70 million less than the government’s estimate. According to Air Force and contractor officials, the reason for the low cost estimate was that the contractor amortized its development costs over all the sites that were planned to be activated rather than the minimum number that were contractually required. When schedule delays occurred and the expected sites were not activated, the contractor reported that it lacked the financial viability to continue work under the contract. In April 2003, the contractor stopped work toward making the simulators concurrent with the aircraft, stating that it considered the tasks beyond the contract scope. Subsequently, it told the Air Force it was not in its best interest to activate additional training sites. The Air Force will allow the F-16 simulator training contract to expire in June 2007 because, according to DOD, the contractor failed to earn enough award-term points to extend the period of performance. The Air Force plans to re-compete the contract. Two aspects of the original contract, awarding it as a commercial acquisition and including an award-term provision, will not be included in the new contract. Because of the time needed to re-compete the contract and for the winning contractor to provide initial training capabilities, the Air Force faces a potential training gap of over 2 years, during which even the current degraded level of F-16 simulator training services will not be available to pilots. In an effort to ensure some level of continued training during that period, the Air Force plans to award a contract for interim service capability at three air bases. This interim capability will be available for block 50 aircraft only. For the block 40 aircraft, the Air Combat Command plans to spend approximately $20 million to refurbish old F-16 unit training devices.
These devices are limited in training potential compared to the current level of simulation. The Air Force and the Army are not effectively tracking the return on their expenditure of taxpayer dollars to acquire simulator training services. The extent to which the simulators are being used is either not measured or is measured inconsistently. The government is paying for activities conducted during the simulator development period but lacks insight into what it is actually paying for. Finally, award-term evaluations that were established to encourage excellent contractor performance do not always measure key acquisition outcomes such as simulator availability and concurrency, and can result in additional contract years being awarded for only “satisfactory” performance. The utilization rate is the percentage of available hours the simulators are actually used. The Army is not tracking the extent to which aviators are using the contracted service for Flight School XXI simulators, even though utilization rates are tracked for the simulators the Army owns. Program officials told us that, because the Army is contracting for simulator training to be available, there was no need to track the extent to which the government is using this availability. Without data on utilization rates, the Army has no basis for determining the extent to which it is using the services it is buying. We found that Air Force installations are collecting information on monthly utilization rates, as provided for in a May 1998 Air Force instruction. However, usage at the locations we examined was often far less than the hours the government purchased. For the three AWACS mission training centers at Tinker AFB, for example, we found that, during the 2-year period ending December 2005, monthly utilization rates were frequently reported at less than 50 percent, as shown in figure 2.
The Air Force Audit Agency has reported that installations had acquired excess simulator capacity and unnecessarily consumed O&M funds that could have been applied to other mission requirements. At Shaw AFB, for example, the agency found that the Air Force had paid to use the simulator 10 hours a day, but only used it about 6 hours per day over a 4-month period. The underutilization was attributed to missions being either not scheduled or cancelled. Deployment requirements and range training were identified as contributing factors. At Spangdahlem AB, the audit agency reported that the Air Force had contracted for excess hours of simulator availability to provide the maximum flexibility for pilot schedules. As a result, the Air Force paid for enough simulator availability to hold 3,952 training events in fiscal year 2005, even though it needed only 1,982 training events to meet training requirements. Our analysis also found that monthly utilization rate calculations are inconsistent among DMO system sites, even though an Air Force instruction provides guidance on how to calculate and report utilization data. We asked six installation quality assurance representatives how they calculated utilization rates. Four of the six representatives were unaware of the instruction, telling us that they had not received any guidance for calculating simulator use. Several different calculation methods are being used, as described in table 8. Air Combat Command officials told us the reported utilization rates are used to determine whether or when to activate another training center at a site. They also said they are using utilization rate information to determine how many additional “live” flying hours can be moved to the simulators, in particular to alleviate the burden of high fuel costs for aircraft. Because of the very different methods being used to calculate the rates, however, decisions are being made based on non-comparable information. 
In addition, we found that the Air Force’s instruction for calculating monthly simulator utilization rates could result in overstating the rates, thus overstating the return on the expenditures made. The instruction directs that utilization be reported when any or all devices at a given location are used. Thus, the Air Force can pay to have four simulators available at a site, use only one of the four during a training period, and still report that simulator utilization was 100 percent as opposed to 25 percent of the paid availability. Under the services approach, contractors commit to major investment at the front end, with the return on their investment to come from hourly fees received for providing simulator service. As an additional way to help the contractor recoup its costs earlier, the government added “preparatory” tasks during the development period prior to the start of service. These tasks are defined in the contracts as discrete events, such as site surveys and training capability assessments, that are ordered and paid for prior to the start of service. Payments for these tasks provide the contractor cash flow between contract award and the planned service start dates and give the government a contractual avenue for contract oversight prior to receiving services. We found that the Air Force and Army have little insight into what they are paying for under the preparatory tasks. Although the invoices reflect only the discrete tasks, such as training capabilities assessments, the wide range of invoice amounts—from $91,000 to more than $6.5 million for similar tasks—and our discussions with contractor officials suggest that the government is actually making milestone payments to the contractors for a portion of their up-front costs to acquire and develop the simulators. 
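The overstatement risk described above—reporting 100 percent utilization when only one of four paid simulators is used—can be sketched as follows. The hours are hypothetical values chosen only to reproduce the four-simulator example in the text.

```python
# Sketch contrasting two ways of computing a monthly utilization rate.
# Under the reading of the Air Force instruction described above, use of
# ANY device at a site counts the period as utilized; a per-device
# calculation divides total hours used by total hours of paid
# availability. All hour values below are hypothetical.

PAID_HOURS_PER_DEVICE = 200    # hypothetical paid availability per simulator
hours_used = [200, 0, 0, 0]    # one of four simulators fully used, three idle

# Method reportable under the instruction: any use counts as utilization.
any_device_rate = 100.0 if any(h > 0 for h in hours_used) else 0.0

# Per-device method: hours used across all devices / hours paid for.
per_device_rate = 100.0 * sum(hours_used) / (PAID_HOURS_PER_DEVICE * len(hours_used))

print(any_device_rate)   # 100.0
print(per_device_rate)   # 25.0
```

The gap between the two figures is exactly the overstatement the report describes: the same month of use reports as 100 percent under one method and 25 percent under the other.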
The original service contract concept for the F-15C, the first simulator contract awarded, had no provision for the contractor to recoup any costs during the development period, which usually lasts more than a year. Figure 3 shows the development period before the start of simulator services and the original hourly rate structure under the F-15C contract. This original approach, according to Air Force and contractor officials, contributed to schedule and certification delays with the F-15C. Air Force officials told us that they had no contractual avenue to obtain insight into the contractor’s performance during the development period and thus were not aware that the contractor had encountered delays in obtaining information from other programs and in determining the complexity of some simulation elements. As a result, full service was not implemented on schedule and certification of simulation service was delayed until after the start of initial service. Further, according to the contractor, it suffered an unrecoverable loss of income during the high-rate, initial service period. Subsequently, based on feedback received from industry, the Air Force changed its approach and incorporated preparatory services into the F-15C contract and all subsequent DMO system contracts to obtain more visibility into contractor activities during the development period. The Army also paid for preparatory services during the development period of the Flight School XXI contract. Our analysis of the Air Force’s payments for preparatory services found significantly disparate costs for site surveys and training assessments, as reflected in tables 9 and 10, respectively. We asked Air Force and Army officials what was specifically included in these preparatory services and how they determined what they received in return for payments made. They told us that the contractors determine what is included and needed for each service at each site. 
Three of the four contractors we spoke with agreed that funding for preparatory tasks helped defray their development costs. They said that, in effect, they bill for these tasks as milestone payments rather than for the discrete tasks themselves. Thus, they are able to begin defraying hardware and software development costs before the start of services. Officials from the fourth contractor stated that site survey tasks are standard but that there is some leeway in what is to be done for training capability assessments and training capability requirements assessments. With the upcoming re-competition of the F-16 simulator training contract, the Air Force may pay again for the preparatory service tasks in the new contract’s development period, having already spent nearly $42 million on these tasks in the initial contract. Air Force officials told us they cannot assume that potential offerors would make use of the preparatory work the original contractor has performed. In an effort to measure performance and encourage the contractors to perform in an efficient and effective manner, both the Air Force and Army employ award-term incentives. However, while the award-term evaluation areas include pilot and crew satisfaction, they do not always measure the key acquisition outcomes of system availability and concurrency with aircraft upgrades. While the Air Force does include system availability as an evaluation area, it is assigned only 25 to 30 percent of the total score. Concurrency is not included as a separate evaluation area. The Army’s evaluation areas, on the other hand, include concurrency but not system availability. While the Army requires the tabulation and submission of such data as operational availability and training service completion rate, these data are not included in the award-term evaluations. 
In addition, several of the evaluation areas include assessments of such things as responsiveness to government requests for cost and pricing data for proposed work not in the initial contract. We recently recommended that DOD move toward more outcome-based award-fee criteria that would promote accountability for acquisition outcomes, rather than include criteria such as responsiveness to government customers or the quality of proposals submitted.

Table 11 compares the award-term evaluation areas and the weight given to each area. The Air Force and Army both assign the largest weight to "pilot/crew satisfaction." However, this measure has limitations, particularly when it is heavily relied on to inform award-term decisions. Air Force officials told us that it is in the pilots' best interests to assign a high rating to this factor; otherwise, they could be viewed as not having received adequate training and could be asked to retake it. Additionally, pilots are frequently hurried in completing their surveys and dash off check marks without much consideration. Also, the distinction between the levels of satisfaction can be blurred. For the Army, for example, if training and support are adversely impacted for an "extended period," user satisfaction is to be rated as unsatisfactory. However, if the adverse impact occurs "infrequently or temporarily," it is considered marginal. Because the terms are not defined, the Army cannot be certain that pilots are providing consistent ratings.

We also found that, under the Air Force's award-term plan, contractors can earn an additional award-term year for only satisfactory performance because awarded points are rolled over to the next evaluation period. A contractor with only satisfactory performance in each of five rating areas can receive up to 51 points each year; thus, within 2 years, it can accumulate the 100 points needed for a 1-year contract extension.
The F-16 simulator training contractor, for example, which recently notified the government that it could not continue to perform under the contract, received overall award-term evaluations of "very good" for the first two rating periods (May 2002 through July 2003) and "satisfactory" in the third and fourth periods (July 2003 through January 2005) and earned one contract year extension. The Army has taken a different approach; under its award-term plan it is very unlikely that the contractor can be awarded contract extensions for "satisfactory" performance because rollover is allowed only when more than 100 points are earned. Thus, a contractor with only satisfactory performance cannot accumulate enough points for an additional contract year. The F-15E simulator training contract, awarded in August 2003 (service under it is not yet available), does not include an award-term incentive because, according to the contracting officer, "it doesn't work." Contractor officials told us that the subjective nature of the criteria and the manner in which they are applied negate the award term as a performance incentive.

Both the Air Force and Army indicated that they are moving away from using award-term incentives on future contracts. The Air Force will not include such an incentive in its re-competition for the F-16 simulator training contract because, according to the DMO director, it has not been found to be a significant motivator to the contractor; experience has shown that withholding payment for poor service is a much more effective tool to induce improved performance. In addition, since a recent statutory provision limits future total contract periods of performance to 10 years, an award-term provision can no longer be used to implement long-term arrangements such as those in place for the existing simulator training contracts.
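The practical difference between the two rollover policies can be seen in a short sketch. This is an illustrative simplification, not a model of the actual award-term plans: it uses the 51-point satisfactory score and 100-point extension threshold described above, and it approximates the Army's excess-only rollover by simply forfeiting below-threshold points at the end of each period.

```python
def extension_earned(yearly_scores, threshold=100, rollover=True):
    """Return True if accumulated award-term points ever reach the threshold.

    rollover=True:  unspent points carry into the next rating period
                    (the Air Force approach described above).
    rollover=False: points below the threshold are forfeited each period
                    (a simplification of the Army approach, which rolls
                    over only points earned above 100).
    """
    bank = 0
    for score in yearly_scores:
        bank += score
        if bank >= threshold:
            return True
        if not rollover:
            bank = 0  # sub-threshold points do not accumulate
    return False

satisfactory = [51] * 5  # five years of merely satisfactory ratings
print(extension_earned(satisfactory, rollover=True))   # True: 51 + 51 reaches 100 in year 2
print(extension_earned(satisfactory, rollover=False))  # False: no single year reaches 100
```

The sketch shows why rollover matters: with it, two merely satisfactory years suffice for a contract extension; without it, a merely satisfactory contractor never earns one.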
We recently reported that DOD has little evidence to support its belief that award fees improve contractor performance and acquisition outcomes and, in fact, frequently pays out most of the available award fee to contractors regardless of their performance outcomes. We also found that DOD contracts frequently included rollover provisions, where unearned award fee from one evaluation period was shifted to a subsequent evaluation period or periods, thus providing the contractor an additional opportunity to earn previously unearned fee. We recommended that DOD issue guidance on when rollover of award fee is appropriate. March 2006 guidance on award-fee contracts states, among other things, that use of rollover provisions should be the exception rather than the rule and that the decision to use rollover provisions should be addressed in the acquisition strategy, including a rationale as to why a rollover provision is appropriate.

Because simulator training had lost out in the internal competition for procurement funds, the Air Force and Army turned to service contracts, expecting that O&M funds would be made available to meet requirements. In the case of the Air Force, this expectation has not materialized and planned site activations have been slowed. In addition, although the Air Force and Army plan to continue with the service contract approach for simulator training, neither supported the decision with a thorough analysis of the costs and benefits of alternative approaches to delivering the training. Finally, the heightened risks associated with increased reliance on contractors to deliver simulator training call for careful attention to contract management and oversight. Effective and well-managed incentives for motivating performance are especially important.
Better government visibility into the contractors' activities, such as preparatory tasks, during the development period is critical so that the government can understand the basis for what are essentially milestone payments during that phase. In addition, unless utilization rates are tracked in a consistent manner, the government will not know whether it is making the best use of what it is buying.

To help ensure that the best approach is used to provide the warfighter with needed training, we recommend that the Secretary of Defense direct the Secretaries of the Air Force and Army to conduct a thorough analysis of the costs and benefits of using service contracts for simulator training to determine if it is indeed the best approach. The analysis should proactively address potential risks associated with the service contract approach and identify the level of simulator training needed to meet requirements.

To help ensure that the required training is provided to pilots, we recommend that the Secretary of the Air Force reconcile the funding level needed for simulator training with the requirements identified in the evaluation of costs and benefits of the service contract approach and take steps to allocate funds accordingly.

To help ensure that the incentives motivate contractor performance toward achieving desired training outcomes, we recommend that the Secretary of Defense direct the Secretaries of the Air Force and Army to take the following two actions:

Determine whether it is in the government's best interest to retain the award-term incentive under these service contracts.

If the award-term incentive is retained, take appropriate steps to improve the approach by reassessing the areas to be rated and the definitions of performance levels for the various grade categories. For the Air Force, improvements to the approach should include a determination as to whether to continue allowing rollover of award-term points.
To help ensure greater transparency into what the government is paying for preparatory tasks during the development phase, we recommend that the Secretary of Defense direct the Secretaries of the Air Force and Army to take the following two actions:

Reassess the pricing of any up-front payments made to the contractors during the development period on future replacement or restructured contracts.

If such payments are retained, take appropriate measures to (1) create an appropriate and transparent contract payment mechanism, separate from the preparatory tasks, if development costs are to be reimbursed; and (2) increase visibility into the percentage of up-front development costs contractors are recouping from these preparatory tasks and development payments.

To help ensure that available simulator training for the warfighter is used in the most effective and efficient manner, we recommend that the Secretary of Defense take the following four actions:

Direct the Secretaries of the Air Force and Army to determine whether and how simulator utilization can be increased in order to maximize use of taxpayer dollars.

Direct the Secretary of the Army to track and record monthly utilization rates on Flight School XXI contracted simulator training in order to have the data necessary to adjust training requirements and contract provisions, as necessary.

Direct the Secretary of the Air Force to revise Air Force Instruction 36-2248, Operation and Management of Aircrew Training Devices, to ensure that, for the purposes of reporting utilization rates, the usage of individual training simulators is calculated.

Direct the Secretary of the Air Force to ensure that all sites consistently track and report simulator utilization.

In written comments on a draft of this report, DOD concurred with all but one of our recommendations. DOD partially concurred with our recommendation that the Army track and record monthly utilization rates on simulators at Flight School XXI.
DOD stated that the service contract approach requires only that the vendor meet the programmed student training load. Nevertheless, DOD stated that the contractor is required to submit utilization data and that the data are available for use in future adjustments to the contracting strategy, requirements, or provisions. Our recommendation was intended to encourage DOD to fully understand its student training requirements and to collect the information to decide whether it needs to adjust requirements or contract provisions regarding simulator availability. Whether the utilization rates pertain to individual simulators or the student training load as a whole, we believe that the Army needs to know the extent to which it is actually using the simulator availability it is buying. DOD also offered two corrections to information in the draft, and we made changes as appropriate. DOD's comments are included in their entirety in appendix II.

We will send copies of this report to the Secretaries of Defense, the Air Force, and the Army; appropriate congressional committees; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions concerning this report, please contact me at (202) 512-4841 or by e-mail at shamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.

To determine which factors led the Air Force and Army to acquire simulator training as a service contract using operation and maintenance funds, we analyzed historical documents such as acquisition plans, briefings, and decision memorandums.
For the Air Force, we interviewed Air Force management, including officials at the Office of the Assistant Secretary of the Air Force (Acquisition); the Aeronautical Systems Center, which is responsible for contracting for the simulator training services; and the Air Combat Command, which funds and uses the simulator training. For the Army, we interviewed officials at the Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology; and Army officials responsible for managing the Army's Flight School XXI initiative, including officials of the Program Executive Office for Simulation, Training and Instrumentation. We visited Langley Air Force Base, Virginia, to observe F-15C simulator training; Shaw Air Force Base, South Carolina, to observe F-16 simulator training; and Fort Rucker, Alabama, to observe the Flight School XXI helicopter simulator training. Additionally, to evaluate whether the military services adequately justified the new service contract approach, we reviewed the Office of Management and Budget's Circular A-11, Appendix B, "Budgetary Treatment of Lease-Purchases and Leases of Capital Assets," and Air Force and Army regulations and guidance regarding business case analyses. We also drew from our prior reviews of Department of Defense systems, in particular our recent review of the Army's Future Combat System.

To assess whether the new approach has resulted in the planned number of simulator training sites being activated, we evaluated contract documents and information provided by the Air Combat Command and Aeronautical Systems Center to compare planned to actual schedule activations. We gathered and analyzed budget data related to program schedules and interviewed program officials. We analyzed contract documents and other program documents from Flight School XXI and discussed the schedule rebaselining with Army officials.
We analyzed the Air Force's request for proposals for the F-16 simulator training contract re-competition to determine whether key differences in the acquisition approach were incorporated. To determine if the Air Force and Army are effectively tracking the return on their expenditure of taxpayer dollars, we analyzed simulator utilization data and military service guidance on utilization rates; analyzed contractor performance measurements, annual evaluations, and award-term plans for the simulator training contracts; and compared preparatory service costs charged to the government under each of the four Air Force contracts and the Army contract. We also interviewed contractor representatives and government officials.

The following table shows differences, as set forth in the Federal Acquisition Regulation (FAR), for contractor requirements under commercial versus non-commercial acquisition procedures.

Contract types: A wide selection of contract types is available in order to provide flexibility. (FAR 16.101(a)) Limited contract types are authorized. Agencies shall use firm-fixed-price contracts or fixed-price contracts with economic price adjustment. These contract types may be used in conjunction with an award fee and performance or delivery incentives when the award fee or incentive is based solely on factors other than cost. (FAR 12.207) To implement the Services Acquisition Reform Act of 2003 (contained in Section 1432 of the National Defense Authorization Act for Fiscal Year 2004, Pub. L. No. 108-136 (2003)), a proposed amendment to FAR would expressly authorize the use of time-and-materials and labor-hour contracts for certain categories of commercial services under specified conditions. (FAR Case 2003-027, 70 Federal Register 56318, Sept. 26, 2005.)

Inspection and testing: Government has right to inspect and test.
(FAR 46.102 & 46.202-3) Contracts for commercial items shall rely on contractors' existing quality assurance systems as a substitute for Government inspection and testing before tender for acceptance unless customary market practices for the commercial item being acquired include in-process inspection. Any in-process inspection by the Government shall be conducted in a manner consistent with commercial practice. (FAR 12.208)

Pricing: Price must be determined fair and reasonable through various proposal analysis techniques. (FAR 15.404-1) While price reasonableness must be established, the contracting officer should be aware of customary commercial terms and conditions when pricing commercial items. Commercial item prices are affected by factors that include, but are not limited to, speed of delivery, length and extent of warranty, limitations of seller's liability, quantities ordered, length of the performance period, and specific performance requirements. (FAR 12.209)

Cost or pricing data: Commercial items are exempt. (FAR 15.403-1(b)(3) and (c)(3)) Required for contract award and modifications unless an applicable exception applies, such as adequate competition or prices agreed upon are based on prices set by law or regulation. Threshold for application is $550,000. (FAR 15.403-1 and -4)

Contract financing: The contracting officer must consider the following order of preference when a contractor requests contract financing: (a) Private financing without Government guarantee. (b) Customary contract financing. (c) Loan guarantees. (d) Unusual contract financing. (e) Advance payments. (FAR 32.106) For purchases of commercial items, financing of the contract is normally the contractor's responsibility. (FAR 32.202-1) However, customary market practice for some commercial items may include buyer contract financing. In these circumstances, the contracting officer may offer Government financing in accordance with the policies and procedures in Part 32.
(FAR 12.210) However, government financing is provided only to the extent actually needed for prompt and efficient performance, considering availability of private financing and probable impact on working capital of predelivery expenditures and production lead-times. (FAR 32.104) Government financing of commercial purchases is expected to be different from that used for non-commercial purchases. While the contracting officer may adapt non-commercial techniques and procedures for use in implementing commercial contract financing arrangements, the contracting officer must have a full understanding of the effects of the differing contract environments and of what is needed to protect the interests of the Government in commercial contract financing. (FAR 32.202-1(c))

Types of payments for commercial item purchases (FAR 32.202-2): 1. Commercial advance payment: payment made before any performance of work (not to exceed 15 percent of contract price). 2. Commercial interim payment: payment made after some, but not all, work has been performed. 3. Delivery payment: payment made for accepted supplies or services, including partial deliveries. (FAR 32.001)

Technical data: The Government may acquire technical data and rights in technical data for multiple purposes. Agencies shall strike a balance between the government's need and the contractor's legitimate proprietary interest. (FAR 27.4) Generally, the Government shall acquire only the technical data and the rights in that data customarily provided to the public with a commercial item or process. The contracting officer shall presume that data delivered under a contract for commercial items was developed exclusively at private expense. When a contract for commercial items requires delivery of technical data, the contracting officer shall include appropriate provisions and clauses delineating the rights in the technical data in the contract. (FAR 12.211)

Computer software: The Government may acquire computer software/documentation for multiple purposes.
Agencies shall strike a balance between the government's need and the contractor's legitimate proprietary interest. (FAR 27.402) Commercial computer software or commercial computer software documentation shall be acquired under licenses customarily provided to the public to the extent such licenses are consistent with federal law and otherwise satisfy the government's needs. Generally, offerors and contractors shall not be required to (1) furnish technical information related to commercial computer software or commercial computer software documentation that is not customarily provided to the public; or (2) relinquish to, or otherwise provide, the Government rights to use, modify, reproduce, release, perform, display, or disclose commercial computer software or commercial computer software documentation except as mutually agreed to by the parties. (FAR 12.212(a))

Cost Accounting Standards: Compliance generally required for contractors in connection with negotiated contracts in excess of $500,000; contractors must disclose and consistently follow their cost accounting practices. (FAR 30.101) Cost Accounting Standards do not apply to contracts for acquisition of commercial items when they are firm-fixed-price or fixed-price with economic price adjustment. (FAR 12.214)

Preaward survey: In determining whether a potential awardee is a responsible contractor, per criteria in FAR 9.104-1, contracting officers may require a preaward survey when the information on hand or readily available is not sufficient to make such a determination. (FAR 9.106-1) If the contemplated contract will involve the acquisition of commercial items, the contracting officer should not request a preaward survey unless circumstances justify its cost.
(FAR 9.106-1(a))

Audit: When contracting by negotiation, the contracting officer shall insert the clause at FAR 52.215-2, Audit and Records—Negotiation, in solicitations and contracts, which allows contracting officer examination of costs when cost or pricing data is required or for cost-reimbursement, incentive, time-and-materials, labor-hour, or price redeterminable contracts. (FAR 15.209(b) and 52.215-2) Commercial item contracts exempted. (FAR 15.209(b)(1)(iii))

On contracts for supplies over $10,000, contractors must adhere to provisions pertaining to minimum wages, maximum hours, child labor, convict labor, and safe/sanitary working conditions. (FAR 22.602) Not applicable. (FAR 12.503(a))

Contractor must warrant that it has not employed or retained anyone, on a contingent fee basis, to obtain this contract. (FAR 3.404, 52.203-5) Not applicable. (FAR 12.503(a))

Contractor must agree that it will provide a drug-free workplace. (FAR 23.504(a)) Not applicable. (FAR 12.503(a))

Contractor must report on its affirmative actions to employ and advance covered veterans. (FAR 22.1302(a)) Law's limitation on use of appropriated funds for contracts with entities not meeting veterans employment reporting requirements is not applicable. (FAR 12.503(a))

Contracts for services must prohibit contractor activities regarding, and require contractor policies to combat, severe forms of trafficking in persons, the procurement of commercial sex acts, and use of forced labor. (FAR 22.1705) Not applicable. (FAR 12.503(a))

Contract clause required providing that contractors employing laborers or mechanics are required to compensate them for overtime. (FAR 52.222-4) Requirements for a certificate and contract clause related to the Act are not applicable.
(FAR 12.503(b))

Contract clause requires prime contractors to (1) have in place and follow reasonable procedures designed to prevent and detect violations of the Act; and (2) cooperate fully with any Federal agency investigating a possible violation of the Act. (FAR 3.502-2(i)) Requirements for a clause and certain other requirements related to the Act are not applicable. (FAR 12.503(b))

Contracts must include clause requiring use of U.S.-Flag Air Carriers by government contractors when available. (FAR 47.405) Requirement for a clause related to the Act is not applicable. (FAR 12.503(b))

Contracts must include clause precluding contractors from restricting direct subcontractor sales to the Government. (FAR 3.503-2 and 52.203-6(a)) Contractors may restrict subcontractors' sales to the Government, as long as the Government is treated no differently than other prospective purchasers. (FAR 52.203-6, Alternate I)

Changes: Generally, the contracting officer is permitted to make unilateral changes within the scope of the contract and to require continued contractor performance of the contract as changed. (FAR 43.201) Changes may be made only by written agreement of the parties (bilateral). (FAR 12.301(b)(3); 52.212-4(c))

Termination: Generally, termination costs for fixed-price contracts are limited to total contract price less payments made or to be made under the contract, plus reasonable costs incurred in performance of work terminated, to include fair and reasonable profit, and reasonable settlement costs. Cost principles and procedures of FAR Part 31 apply to costs. (FAR 49.502(b); 52.249-2) Termination costs limited to percentage of contract price reflecting percentage of work performed prior to termination plus reasonable charges resulting from termination. For payments thereunder, contractor not required to comply with cost accounting standards or contract cost principles in FAR Part 31.
(FAR 12.301(b)(3); 52.212-4(l))

In addition to the individual named above, Michele Mackin, Assistant Director; Marie Ahearn; Christine Bonham; Gary Delaney; Carlos Diz; Benjamin Federlein; Victoria Klepacz; and Sanford Reigle made key contributions to this report.
The Air Force has turned to service contracts for the F-15C, F-16, Airborne Warning and Control System, and F-15E, and the Army has done the same for helicopter simulator training at its Flight School XXI. The contractors own, operate, and maintain the simulator hardware and software. The military services rely on industry to capitalize the required up-front investment, with the understanding that the contractors will amortize this investment by selling training services by the hour. GAO was asked to address (1) the factors that led the Air Force and Army to acquire simulator training as a service and whether the decision to use this approach was adequately supported; (2) whether implementation of the approach has resulted in the planned number of simulator training sites being activated; and (3) whether the Air Force and Army are effectively tracking the return on their expenditure of taxpayer dollars. GAO makes recommendations to the Secretary of Defense intended to improve management and oversight of these service contracts to help ensure that the best approach is used to provide the warfighter with needed training. In written comments on a draft of this report, DOD concurred with all but one of the recommendations, only partially concurring with one pertaining to the Army's simulator utilization rates. GAO continues to believe that the Army needs to track the extent to which it is using simulator availability.

The Air Force and Army turned to service contracts for simulator training primarily because efforts to modernize existing simulator hardware and software had lost out in the competition for procurement funds. As a result, the simulators were becoming increasingly obsolete. Buying training as a service meant that operation and maintenance (O&M) funds could be used instead of procurement funds.
Shifting the responsibility for simulator ownership, operation, and maintenance from the government to the contractor was thought to more quickly enable simulator upgrades to match the changing configurations of aircraft. However, the decision to take a service contract approach was not supported by a thorough analysis of the costs and benefits as compared to other alternatives, despite a Department of Defense directive that provided for such an analysis. While Air Force and Army officials told GAO the new simulators are significant improvements over the previous ones, the expected number of Air Force training sites has not been activated. For the Air Force, O&M funds have not been allocated at the anticipated levels, leading to schedule slippages. The F-16 simulator contractor cited the funding problems and subsequent schedule slippages as the basis for notifying the Air Force that its situation under the contract was no longer financially viable. The Air Force is in the process of re-competing the F-16 training contract, which will likely result in a training gap for pilots, possibly over 2 years, and additional costs to the Air Force. The start date of the Army's flight simulator training was rebaselined twice, but Army officials told GAO that adequate training was in place for the flight school participants.

The return on expenditure of taxpayer dollars is not being effectively tracked in three key ways. First, Air Force utilization of simulator training frequently falls well below the hours for which the government is paying, and the Army is not collecting data on utilization rates at all. Second, the government has little insight into what it is paying for during the development period before training is activated, which can take more than a year.
While invoices for preparatory efforts reflect only discrete tasks such as training capabilities assessments, the wide range of invoice amounts and GAO's discussions with contractor representatives suggest that the government is actually making milestone payments to the contractors for a portion of their up-front costs to acquire and develop the simulators. Most of the contracts contain award-term provisions, where the contractors can earn an extension of the contract period for good performance. GAO found that the award-term evaluation factors do not always measure key acquisition outcomes such as simulator availability and concurrency with aircraft upgrades.
Trucks handled more than two-thirds of all freight commodities shipped in 2002, according to a recent report for the American Trucking Associations (ATA), an organization representing the majority of freight-hauling companies. Trucking companies that shipped freight earned revenues of about $585 billion that year, or 87 percent of total transportation revenues. The total volume of goods shipped by trucks is expected to rise to 10 billion tons by 2008, with trucking companies' revenues increasing to about $745 billion, according to the ATA report. The majority of trucks transporting freight are powered by diesel engines, primarily because diesel engines are 25 percent to 35 percent more energy efficient, as well as more durable and reliable, than gasoline-powered engines. Furthermore, diesel fuels generally are less volatile and, therefore, safer to store and handle than gasoline.

On the other hand, diesel engines also have an adverse impact on air quality through their harmful exhaust emissions. Diesel exhaust is composed of several toxic components, including nitrogen oxides, fine particles (particulate matter), and numerous other known harmful chemicals. EPA estimates that exhaust from heavy-duty trucks and buses accounts for about one-third of the nitrogen oxide emissions and one-quarter of the particulate emissions from all mobile sources. EPA's 2002 comprehensive review of the potential health effects from exposure to diesel engine exhaust found that short-term exposure to diesel emissions can cause respiratory irritation and inflammation and exacerbate existing allergies and asthma symptoms. Long-term exposure may cause lung damage and pose a cancer hazard to humans. The harmful components of diesel exhaust can also damage crops, forests, building materials, and statues. The exhaust also impairs visibility in many parts of the country.
Although diesel exhaust is harmful, both EPA and engine manufacturers have successfully reduced the level of emissions from highway diesel engines over the past two decades. Since 1984, EPA has progressively implemented increasingly stringent diesel emissions standards, for example, reducing the level of allowable nitrogen oxide emissions from diesel engines from 10.7 grams per unit of work in 1988 to 2.5 grams in 2004 (see fig. 1). To meet these standards, engine manufacturers were expected to make progressively cleaner engines so that nitrogen oxide emissions would gradually decline to the mandated levels. However, EPA determined that, from 1987 to 1998, seven of the nation's largest diesel engine manufacturers sold 1.3 million heavy-duty diesel engines with computer software that altered the engines' pollution control equipment under highway driving conditions. The Clean Air Act prohibits manufacturers from selling or installing motor vehicle engines or components equipped with devices that bypass, defeat, or render inoperative the engine's emission control system. These devices altered the engines' fuel injection timing; while this improved fuel economy, it also increased nitrogen oxide emissions to two to three times the existing regulatory limits.

In response, EPA undertook what it called "the largest Clean Air Act enforcement action in history" against the manufacturers. To settle these cases, in 1998, EPA, the U.S. Department of Justice, and the engine manufacturers agreed to be bound by consent decrees. In the decrees, the manufacturers agreed to, among other things, (1) pay civil penalties of about $83 million, the largest civil penalty for an environmental violation as of that date; and (2) collectively invest $109.5 million towards research and development and other projects to lower nitrogen oxide emissions.
Table 1 includes information on the number of engines that each manufacturer subject to the decrees produced that violated the emissions standards, the amount of nitrogen oxide emissions these engines produced in excess of the amounts allowed by the standards in effect at the time, the amount of penalties each company paid, and the amount of funds each company committed to invest in environmental projects. The manufacturers also agreed to collectively spend $850 million or more to produce significantly cleaner engines by October 1, 2002. The nitrogen oxide emissions from the new engines were not to exceed 2.5 grams. Without the decrees, the engines would not have been required to meet this standard until January 1, 2004, 15 months later. The excess emissions caused by the defeat devices were of concern, especially for states and localities with areas that already had air quality problems (meaning that the areas did not meet at least one of the health-based air quality standards). Every state must devise a plan, called a state implementation plan, that indicates what actions it will take to maintain or come into compliance with the standards. In devising these plans, states and localities estimate future emissions and design actions to reduce them as necessary. If the states and localities do not comply, they face sanctions, including the loss of access to federal transportation funds. The use of pollution control defeat devices that increased engine emissions therefore jeopardized state air quality improvement plans and posed public health risks. To ease compliance with the accelerated schedule, manufacturers could continue to sell their old engines until October 2002. 
If manufacturers were not able to, or chose not to, meet the deadline, they could continue to sell engines that did not meet the standards through three actions: (1) paying nonconformance penalties, equal to the cost of engines that met the standards, to maintain a “level playing field” between the noncomplying companies and those manufacturers who met the deadline; (2) using a provision that allowed manufacturers to sell noncomplying engines after October 2002 if they sold an equal number of the cleaner engines before that date; and (3) using emissions averaging, banking, and trading to generate emissions credits towards compliance by reducing emissions in other areas. As the next step in its efforts to address diesel emissions, EPA, in January 2001, finalized a rule—herein referred to as the 2007 rule—establishing new emissions standards that heavy-duty engines and vehicles must generally meet beginning in 2007. These standards, unlike the consent decrees established as the result of an enforcement action, were developed through a public rulemaking process that gave stakeholders from across the industry sectors the opportunity to provide input to EPA for consideration. Also in contrast to the consent decrees, the 2007 standards gave industry 6 to 10 years to develop technologies to meet the rule’s requirements. The 2007 rule limits fine particle and nitrogen oxide emissions from heavy-duty diesel engines to 0.01 grams and 0.20 grams, respectively, a significant decrease compared to the consent decrees and 2004 standards. While the fine particle standard is effective in 2007, the nitrogen oxide standard will be phased in based on engine production: 50 percent of the engines sold between 2007 and 2009 and 100 percent of those sold beginning in 2010 must meet the nitrogen oxide emissions standard. EPA estimates that the new standards will reduce emissions of fine particles and nitrogen oxides by 90 percent and 95 percent, respectively, from 2000 levels. 
Also in the 2007 rule, EPA regulates both heavy-duty vehicles and their fuel as a single system. To meet the standards, engines must include advanced emission control devices. Because these devices are damaged by sulfur, the rule establishes a mid-2006 deadline for reducing the sulfur allowed in highway diesel fuel. Under the rule, refiners are required to start producing diesel fuel with a sulfur content of no more than 15 parts per million (compared to current diesel fuel, which can contain up to 500 parts per million—a 97 percent reduction) beginning June 1, 2006. All diesel-powered highway vehicles produced in 2007 or later must use the low-sulfur fuel. Under certain conditions, and generally only until 2010, the rule allows refiners to continue producing and selling some diesel fuel with a sulfur content greater than 15 parts per million, but not exceeding 500 parts per million. However, the two fuels must be segregated in the distribution system so that the low-sulfur fuel is not contaminated. The fuel with the higher sulfur content may only be used in heavy-duty vehicles built before 2007 because it will damage emissions control devices on newer engines. When developing the 2007 rule, EPA had to give appropriate consideration to the rule’s costs. The agency projected that the rule’s benefits would exceed its costs by a factor of 16 to 1. According to EPA, the new standards will result in significant annual reductions in harmful emissions, with total benefits as of 2030 estimated at about $70 billion. In addition, by 2030, the reduced emissions will prevent 8,300 premature deaths, more than 9,500 hospitalizations, and 1.5 million workdays lost, according to EPA. The agency estimated that these benefits will come at an average cost increase of about $2,000 to $3,200 per new vehicle in the near term and about $1,200 to $1,900 per new vehicle in the long term, depending on the vehicle size. 
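The rule's sulfur figures can be checked with simple arithmetic. The sketch below uses the ppm caps and the "about 3 percent" price effect from the rule as described above; the implied base diesel price is back-calculated here for illustration and is not a figure EPA stated.

```python
# Back-of-the-envelope check of the 2007 rule's sulfur figures.
# The ppm caps and cost figures come from the rule as described above;
# the implied base fuel price is a back-calculation, not an EPA number.

def percent_reduction(old: float, new: float) -> float:
    """Percentage decrease from old to new."""
    return (old - new) / old * 100

old_sulfur_ppm = 500   # pre-rule highway diesel sulfur cap
new_sulfur_ppm = 15    # cap effective June 1, 2006

sulfur_cut = percent_reduction(old_sulfur_ppm, new_sulfur_ppm)
print(f"Sulfur reduction: {sulfur_cut:.0f}%")   # the rule's "97 percent reduction"

# A 4.5-5 cent/gallon increase described as "about 3 percent" implies a
# late-2003 base diesel price of roughly $1.50-$1.67 per gallon.
implied_price_low = 0.045 / 0.03
implied_price_high = 0.05 / 0.03
print(f"Implied base price: ${implied_price_low:.2f}-${implied_price_high:.2f}/gal")
```

The 97 percent figure follows directly from the two caps; the implied price range is only a consistency check on the "about 3 percent" claim.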
These cost increases are relatively small compared to the base cost of a new vehicle, which ranges from about $96,000 for a new heavy heavy-duty truck to $250,000 for a new bus. Furthermore, EPA estimated that, when fully implemented, the sulfur reduction requirement would increase the cost of producing and distributing diesel fuel by about 4.5 to 5 cents per gallon, an increase of about 3 percent over average U.S. diesel fuel prices as of late November 2003. In part because trucking companies did not have what they considered to be sufficient time to adequately road test 2002 prototype engines, they had concerns about the price and reliability of the new engines. Representatives of four of the ten trucking companies we contacted said their companies, among other things, bought more new heavy-duty trucks equipped with older engine technology than planned before October 2002. This adversely affected their operations, at least in the short term, according to company officials. Our analysis of Class 8 truck production data also indicates that trucking companies may have pre-bought these trucks in 2002. To meet the increased demand for trucks with older engines, the major engine manufacturers increased production of new trucks with older engines before October, but had to cut production when demand subsequently dropped, and demand did not recover until about early 2003, with detrimental effects, according to representatives of the engine manufacturers we contacted. These manufacturers also said that they lost market share to others that were not subject to the consent decrees or that decided to pay penalties rather than make a new engine on time. EPA expected that accelerating the schedule for cleaner engines would speed up emissions reductions, thereby better protecting public health, and roughly estimated that two provisions of the consent decrees would reduce nitrogen oxide emissions by 4 million tons. 
However, as discussed, trucking companies bought more trucks with the older engine technology than planned, and truck owners are now operating trucks longer than expected, thereby reducing the number of trucks with cleaner engines on the road below anticipated levels. As a result, while emissions levels were reduced, the consent decrees will not achieve the full emissions reductions in the time frames EPA anticipated. The consent decrees had an adverse effect on some trucking companies even though the trucking industry was not a direct party to the decrees. They affected the industry because trucking companies are the ultimate purchasers of trucks equipped with new diesel engines designed to meet the consent decrees’ emissions standards requirements. Manufacturers did not provide trucks with prototype engines to the companies in time to sufficiently road test them, according to many of the trucking company officials we contacted. Several officials noted that their companies did not take delivery of trucks with the new engines for testing until the first half of 2002—too late for their companies to perform what they considered to be adequate road testing. Consequently, many trucking companies decided not to risk the uncertainties associated with the new engines, instead opting for the older, familiar diesel technologies. As table 2 indicates, eight of the ten trucking companies we contacted bought trucks with the older engines prior to October 2002, postponed buying new trucks, or bought only a relatively small number of trucks with new engines, usually for testing purposes. Werner Enterprises and Swift Transportation publicly reported in their financial statements to shareholders that they pre-bought trucks with older engines and postponed buying new trucks, respectively, because of uncertainties surrounding the new engines. 
The two trucking companies in table 2 that bought large numbers of trucks with the new engines did so because they wanted to maintain consistent business relationships with their established engine suppliers and follow the fleet acquisition plans that they had developed based on their assessment of long-term business needs, according to company officials. The four companies that pre-bought large numbers of trucks before the October 2002 deadline did so primarily because they were concerned about the higher price and unproven reliability of the new engines, according to company officials. They said that the new engines would have added from $1,500 to $6,000 to the purchase price of a new heavy-duty truck—whose base cost is about $96,000—and would have reduced fuel economy by 2 to 10 percent. For 2002, these additional costs could have ranged from about $4 million to $27 million per company in purchase price and about $3 million to $90 million per company in fuel costs. These trucking officials said that these additional costs would have been problematic for some companies because, according to one representative, the industry only returns 3 or 4 cents per dollar invested. Compounding these additional costs, according to trucking officials, is that they come without any clear offsetting economic or business advantages. According to several of the officials, recent engine modifications made to meet increasingly stringent emissions standards also had positive economic benefits for the trucking companies, such as increased fuel efficiency. EPA officials noted, however, that some of these benefits, including better fuel economy, were achieved as a result of engine manufacturers using the defeat devices to avoid meeting emission standards. The agency acknowledged that trucking companies were not party to the engine manufacturers’ tactic but did benefit from it. 
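The per-truck premium figures above scale directly to the company-level costs the officials reported. A rough sketch, where the $1,500 to $6,000 range comes from the text but the fleet purchase size is a hypothetical assumption chosen only to show the magnitude:

```python
# Illustrative scaling of the per-truck engine premium to a company-level
# cost. The premium range comes from trucking officials as cited above;
# the fleet purchase size is a hypothetical assumption.

premium_low, premium_high = 1_500, 6_000   # added price per truck (from text)
trucks_bought = 4_000                      # hypothetical one-year fleet purchase

added_cost_low = premium_low * trucks_bought
added_cost_high = premium_high * trucks_bought
print(f"Added purchase cost: ${added_cost_low / 1e6:.0f}M to ${added_cost_high / 1e6:.0f}M")
```

At this assumed purchase size the added cost works out to $6 million to $24 million, inside the $4 million to $27 million per-company range the officials cited; smaller fleets scale down proportionally.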
Companies that pre-bought trucks found this strategy adversely affected their operations, at least in the short term, according to company officials. Companies had more trucks than they needed and lost money as excess trucks sat idle. For example, one trucking company reported in its financial statement to shareholders that such excess capacity cost the company $16.3 million in revenues—29 percent—in the first quarter of 2003. Despite effects such as these, some trucking officials told us that they would have pre-bought even more trucks with the older engines had they been available. These officials noted that while larger companies may have been able to weather these operational disruptions, smaller companies with narrower profit margins might have found it more difficult. Our analysis of data on the production of trucks with the new engines suggests that pre-buying in response to the consent decrees was a widely used strategy. As figure 2 shows, truck production increased from January through September 2002, reversing a generally decreasing trend that had begun in April 2000. More specifically, from April through September 2002, manufacturers produced about 93,000 Class 8 trucks. Our analysis shows that this production volume cannot be fully explained by changes in the economy’s growth rate or diesel fuel prices; the increase, and the subsequent decrease, in production may instead be linked to the consent decrees. We recognize that a number of factors other than the consent decrees are also likely to have contributed to these trends. For example, trucking companies’ business decisions are driven by factors that affect their profitability, such as economic growth and activity, their expectations about future profits, their current inventory of trucks, and fuel and operating costs. In addition, other factors such as regulations, taxes, or subsidies affect companies’ profitability and truck purchasing decisions. 
After considering the information trucking companies provided us on their responses to the decrees and controlling for economic growth and fuel costs in our analysis, we estimate that 19,000 to 24,000 (20 percent to 26 percent) of the 93,000 Class 8 trucks produced during this period may have been in response to the consent decrees. Subsequent to this increase, the data also show that production sharply decreased after October 2002 until recovering in 2003. Those companies that bought trucks with the new engines reported experiencing few serious problems with them, although they generally believe that it is too soon to be certain of the new trucks’ maintenance costs. Some stated that preliminary indications may not be encouraging. For example, one company reported that roughly one-half of its 140 new heavy-duty engines experienced an engine valve failure prior to 50,000 miles. In addition, these officials noted that roughly 20 percent of their heavy-duty vehicles with the new engines are out of service at any given time due to maintenance concerns, compared to 5 percent for the remainder of their fleet. Several of these officials expressed a concern that some companies may have difficulty absorbing increased costs from such maintenance problems. Initially, trucking companies’ increasing demand to pre-buy trucks with older engines in the 6 months before the October 2002 deadline increased the major diesel engine manufacturers' production and sales. In particular, demand was so great, according to some engine manufacturers, they could not keep up with it, despite hiring hundreds of temporary employees and running production lines 24 hours a day, 7 days a week. According to all five of the engine manufacturers we contacted, the pre-buy could have been much larger, but the engine manufacturing industry did not have the capacity to fill the demand. However, once the October 2002 deadline passed, demand for these engines fell dramatically. 
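The pre-buy estimate described above rests on a baseline-and-residual approach: predict monthly production from economic indicators, then attribute production above the prediction during the pre-buy window to the decrees. The following is a minimal sketch of that idea using entirely synthetic data; the series, coefficients, and the size of the simulated bump are illustrative and are not GAO's actual model or data.

```python
# Minimal residual-based pre-buy sketch on synthetic data. Fit a baseline
# model of monthly truck production on economic growth and fuel prices
# using the months before the pre-buy window, then sum the residuals
# (actual minus predicted) over the final 6-month window.
import numpy as np

rng = np.random.default_rng(0)
months = 36
gdp_growth = rng.normal(2.0, 0.5, months)     # synthetic % growth series
fuel_price = rng.normal(1.40, 0.10, months)   # synthetic $/gallon series
baseline = 12_000 + 800 * gdp_growth - 2_000 * fuel_price
production = baseline + rng.normal(0, 300, months)
production[-6:] += 3_500                      # synthetic pre-buy bump, last 6 months

# Ordinary least squares on the pre-window months only.
X = np.column_stack([np.ones(months), gdp_growth, fuel_price])
coef, *_ = np.linalg.lstsq(X[:-6], production[:-6], rcond=None)

predicted = X[-6:] @ coef
pre_buy_estimate = (production[-6:] - predicted).sum()
print(f"Estimated pre-buy: {pre_buy_estimate:,.0f} trucks")
```

Because the simulated bump is 3,500 trucks per month for 6 months, the recovered estimate lands near 21,000, illustrating how a residual sum of this kind can produce a range like the 19,000 to 24,000 reported above.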
These dramatic swings in demand had a net adverse impact on engine manufacturers, at least in the short term, according to those manufacturers we contacted. For example, at least one engine manufacturer laid off all of the temporary employees it had recently hired to meet the rising demand before October, as well as some more established workers. Another manufacturer said that such instability also hindered its ability to make business decisions, acquire capital, and meet customers’ demands. However, figure 2 shows that truck sales generally increased again starting in 2003. In addition to these general trends, many of the manufacturers of the new, cleaner engines told us that they lost customers to those companies that continued to produce engines that did not meet the new emissions standards. In 1998, the seven manufacturers subject to the consent decrees dominated the U.S. heavy-duty diesel engine market, accounting for about 90 percent of engine sales. In response to the decrees, four of the seven engine manufacturers began to produce cleaner engines. Another of the seven manufacturers, Renault, decided to leave the U.S. heavy-duty diesel truck market in 2002, according to company officials. Furthermore, according to EPA, Navistar International chose to take other actions to compensate for its excess emissions rather than meet the new emissions standards early, as permitted under its consent decree. Caterpillar, until November 2003, continued to sell heavy-duty engines that did not fully comply with the new nitrogen oxide standards, but paid a nonconformance penalty for each engine sold. Therefore, by mid-2003, the U.S. 
heavy-duty diesel engine market was dominated by (1) the four manufacturers subject to the decrees that were selling engines that met the new emissions standards—Cummins, Detroit Diesel, Mack Trucks, and Volvo; (2) two manufacturers subject to the decrees that were selling engines that did not meet the standards—Navistar International and Caterpillar; and (3) Mercedes, which entered the U.S. market in 1999 but did not have to meet the standards until 2004. In 1998, the year in which EPA and the engine manufacturers entered into the consent decree settlements, the four manufacturers selling engines that met the new standards had a combined share of the U.S. Class 8 truck market of about 73 percent, while the two manufacturers that were not selling such engines had roughly a 27 percent market share. Since then, the market shares of the two groups of engine manufacturers have moved in almost directly opposite directions (see fig. 3). By September 2003, the market share of the four manufacturers selling cleaner engines had shrunk to 50 percent, and the share of the two companies—plus Mercedes—that continued to sell engines that did not meet the new standards had increased to 50 percent. While factors other than the consent decrees contributed to this shift in market shares over the years, according to many engine manufacturer and trucking company officials we contacted, the manufacturers that sold trucks with the cleaner engines also lost business because, as previously noted, these engines had inherent disadvantages relative to the existing engines that made them difficult to sell. Consequently, manufacturers that continued to market trucks with the older engines captured business from those companies selling trucks with the new engines. For example, Caterpillar’s share of the Class 8 truck market climbed from 24 percent in 1998 to 35 percent in 2003, while Detroit Diesel’s share dropped from 27 percent to 15 percent during the same period. 
Similarly, Mercedes’ market share rose from zero in 1998 to 10 percent in 2003, while Cummins’ share fell from 31 percent to 21 percent. We were unable to verify all of the claims made by trucking companies and engine manufacturers regarding financial impacts and truck purchase decisions resulting from the consent decrees because much of this information is confidential. To a limited extent, we were able to use financial statements some of these companies submitted to the Securities and Exchange Commission to verify some impacts for some companies. In addition, we conducted econometric analysis to shed light on the possible magnitude of the pre-buy. Although EPA was not required to conduct a cost-benefit analysis of the provisions of the consent decrees, it roughly estimated the potential emissions reductions that could be achieved. EPA used truck production data from 1998, the most recent then available, to estimate that over the 15-month pull-ahead period—from October 2002 to January 2004—some 233,000 more trucks with cleaner engines would be on the road than without the pull-ahead. EPA multiplied this number by the amount of emissions reductions a single cleaner engine could achieve to estimate that the total emissions reduction expected from accelerating the schedule was roughly 1 million tons of nitrogen oxides. As previously discussed, because trucking companies postponed purchases, bought new trucks with the old engine technology, or bought used trucks rather than trucks with the cleaner engines, initially fewer trucks with cleaner engines will be on the road than EPA had estimated. Therefore, the consent decrees are not going to produce the full 1 million-ton reduction, at least not during the time frames EPA predicted. 
For example, Class 8 truck production data through October 2003, or 13 of the 15 months of the pull-ahead, show that about 148,000 fully or partially compliant heavy heavy-duty diesel engines are on the road, compared to EPA’s estimate of 233,000 such compliant engines for the entire 15-month time frame. However, some factors came into play that EPA did not anticipate. For example, EPA did not expect Mercedes to enter the U.S. diesel truck market and claim about a 10 percent share, increasing the number of older-technology engines sold. Furthermore, EPA did not expect Caterpillar, which had the largest engine sales when EPA developed its emissions estimates, to produce engines that, although cleaner than previous models, did not fully meet the new standards. Finally, the overall rate of engine production during the 15-month period covered by EPA’s emissions estimates was lower than the rate in 1998, the year on which EPA based its estimates. Therefore, not as many cleaner engines were produced as EPA predicted. EPA also estimated that a second provision of the consent decrees—a requirement that computers on older engines be adjusted to better control emissions when these engines undergo regularly scheduled rebuilding—would reduce nitrogen oxide emissions by about 3 million tons over the life of the engines. Under these “low-nitrogen oxide rebuild” provisions of the decrees, when operators brought their trucks in to have their engines rebuilt, engine manufacturers were required to supply kits to adjust computer controls to lower excess emissions. This adjustment is called “reflashing.” While reflashing can be performed without rebuilding the engine, EPA saw the rebuild as a convenient time for performing both operations at once. EPA estimated that this provision of the decrees would eventually apply to roughly 856,000 trucks. 
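EPA's pull-ahead arithmetic can be roughly reconstructed from the figures above: dividing the expected 1 million tons by the expected 233,000 engines gives an implied per-engine reduction, which can then be applied to the actual engine count. The per-engine figure below is a back-calculation for illustration, not a number EPA published.

```python
# Rough reconstruction of EPA's pull-ahead emissions arithmetic.
# The 233,000 and 1 million ton figures are EPA's estimates as cited above;
# the per-engine reduction is back-calculated here, not taken from EPA.

expected_engines = 233_000           # EPA's engine estimate for the 15 months
expected_reduction_tons = 1_000_000  # EPA's rough total reduction

per_engine_tons = expected_reduction_tons / expected_engines   # ~4.3 tons/engine

actual_engines = 148_000             # compliant engines through month 13 (from text)
implied_reduction = actual_engines * per_engine_tons
print(f"Implied reduction so far: {implied_reduction:,.0f} tons")
```

Under these assumptions, roughly 635,000 of the anticipated 1 million tons would be realized so far, illustrating the shortfall discussed above.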
In addition, a number of engine manufacturing companies initiated incentive programs to encourage truck companies to voluntarily bring their trucks in to have them reflashed. Under the voluntary program, these trucks would be reflashed earlier than if they waited until the engines needed to be rebuilt under EPA's program, thereby reducing emissions sooner. As of September 2003, almost 60,000 trucks had been reflashed under the consent decrees' mandatory program and another 43,000 under the voluntary incentive programs, about 12 percent of EPA’s projected total. Fewer engines were rebuilt than EPA expected because trucking companies are running their engines longer than in previous years before rebuilding or replacing them. As a result, only a small portion of the emissions reductions predicted by EPA from reflashing may be achieved, depending on how many additional engines are adjusted and the rate at which this occurs. Estimating how many of the remaining 740,000 or more trucks will be reflashed under the consent decree provisions is difficult and must take into account the age and likely future mileage of the trucks. Many of these trucks no longer have enough useful life remaining to make rebuilding their engines cost-effective. Nevertheless, the California Air Resources Board and environmental departments in several other states are considering making reflashing of heavy-duty diesel engines compulsory, to try to reduce diesel emissions as much as possible. A number of engine technology and fuel supply and distribution issues must still be resolved to implement the 2007 standards. Most stakeholders who have made significant investments in developing the engine and fuel technology to meet the standards maintained that the issues can be resolved in time. Engine manufacturers we contacted expect to have new engines ready for 2007 and to be able to meet the trucking companies’ time frames for delivering trucks with prototype engines for testing. 
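The reflash progress figures cited above can be checked directly; the sketch below treats "almost 60,000" as 60,000, so the resulting share is approximate.

```python
# Check of the reflash progress figures: completed trucks as a share of
# EPA's 856,000-truck projection, and the population still remaining.
# "Almost 60,000" is rounded to 60,000 here, so the share is approximate.

projected_total = 856_000
mandatory_done = 60_000    # under the consent decrees' mandatory program
voluntary_done = 43_000    # under manufacturers' voluntary incentive programs

done = mandatory_done + voluntary_done
share = done / projected_total * 100
remaining = projected_total - done
print(f"Reflashed: {done:,} trucks ({share:.0f}% of projection); {remaining:,} remain")
```

The combined 103,000 trucks come to about 12 percent of the projection, and the roughly 753,000 remaining trucks are consistent with the "remaining 740,000 or more" discussed above.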
However, representatives of the fuel industry recognize that there is still work to do to resolve issues about whether (1) low-sulfur fuel will be available in sufficient volumes nationwide and (2) fuel distributors can keep from contaminating it with higher sulfur fuel that damages the emissions control equipment. Still, they believe that there is sufficient time to resolve these issues and do not want the 2007 standards delayed. Furthermore, the environmental and health groups we contacted are encouraged by industries’ progress in developing the technologies needed to implement the standards. Given these lingering technology questions, the uncertainty about having sufficient time to test new engines, and the negative economic impact their companies experienced under the consent decrees, representatives of some of the trucking companies we contacted remain concerned about whether the new standards can be implemented smoothly. Because the technology to meet the 2007 standard is more advanced than prior upgrades, some trucking companies are concerned that the new engines will cost more and decrease fuel efficiency more than EPA has predicted. Consequently, according to representatives of nine of the ten trucking companies we contacted, companies will likely once again pre-buy trucks, potentially disrupting markets and postponing needed emissions reductions. Representatives of all five engine manufacturers we contacted, as well as the association of emissions control technology manufacturers, noted that control technologies for nitrogen oxide emissions—one of the pollutants addressed by the 2007 standards—have continued to advance. For 2007, manufacturers have evaluated five different engine technology options to control nitrogen oxide emissions—nitrogen oxide adsorbers, selective catalytic reduction, advanced exhaust gas recirculation, a lean nitrogen oxide catalyst, and advanced combustion emissions reduction technology (ACERT—a system developed by Caterpillar for its own engines). 
Generally, exhaust gas recirculation and ACERT limit the formation of nitrogen oxides, while the catalyst-based approaches promote the reduction of nitrogen oxides into nitrogen and oxygen. In December 2003, three of the five engine manufacturers we contacted announced the technologies they plan to use to meet the 2007 emission standards: Caterpillar chose its ACERT technology, and Cummins and Volvo selected exhaust gas recirculation. In addition, in January 2004, while not specifically saying that it would use exhaust gas recirculation technology, International announced that it plans to meet the 2007 requirements without using either nitrogen oxide adsorbers or selective catalytic reduction. The company currently uses exhaust gas recirculation technology in many of its existing engines. The remaining engine manufacturer is considering selective catalytic reduction. Caterpillar, Cummins, International, and Volvo chose their respective approaches because each company is already using a basic form of the technology it selected to meet the 2004 standards and believes it can be modified to meet the 2007 standards as well. Several engine manufacturers, however, believe that they may not be able to advance the exhaust gas recirculation technology far enough to comply with the 2010 requirements, so, in planning ahead, they are pursuing this as well as other options. The firm that is considering selective catalytic reduction noted that this technology could meet both the 2007 and 2010 requirements. It has been in use in the United States for several years to control nitrogen oxide emissions from stationary sources, such as power plants or industrial facilities. It has also been used in European demonstration fleets to control pollution in diesel truck emissions. 
While the engine manufacturer that is considering selective catalytic reduction believes that remaining technological issues are relatively minor and should be resolved by 2007, it is less clear that several implementation issues will be resolved by that time. For example, selective catalytic reduction requires a continuing supply of a chemical compound—such as urea—to function properly. However, some engine manufacturers and other stakeholders, as well as EPA, are concerned because urea is not widely available and the industry would have to build its own distribution infrastructure, such as separate tanks at refueling stations. There are concerns that this may not be possible by 2007, that truck operators will not have sufficient supplies of the chemical when and where they need it, or that the operators will accidentally or intentionally fail to keep the urea tank on their trucks filled, thereby defeating the emissions control equipment. According to EPA officials, the engine manufacturer considering selective catalytic reduction is expected to submit a plan for a urea infrastructure in early 2004. EPA will evaluate the plan at that time. As for nitrogen oxide adsorbers, EPA has helped to support and develop this technology and believes it remains a viable option for 2007, although none of the manufacturers has chosen this technology for the earlier deadline. In June 2002, the agency issued a report on, among other things, the progress being made to develop this technology. EPA concluded that, given the rapid progress and the relatively long lead-time before it would be used, adsorbers could be available to meet the 2007 standards. In October 2002, the Clean Diesel Independent Review Panel EPA convened to assess technology development progress reached a similar conclusion, stating that although technological challenges remain, none are insurmountable. 
The panel further noted that engine, vehicle, and emission control manufacturers were making large investments to ensure the successful development and implementation of the adsorber technology for the 2007 standards. In contrast, the engine manufacturers we contacted generally concurred that adsorbers might be a viable option for meeting the next phase of nitrogen oxide reductions in 2010, but they think the technology faces too many significant technical barriers to be a viable option for 2007. Engine manufacturers believe they will have nitrogen oxide control technology ready for 2007 model year heavy-duty trucks and that they can make prototype trucks available to trucking companies for testing by mid- to late-2005. We were unable to independently verify the claims of the engine manufacturers about the progress being made in developing engines and emissions control equipment and when these technologies are likely to be available, primarily because the companies, in order to remain competitive, were reluctant to make information about their unique engine designs and progress readily available. The representatives of the diesel fuel industry we contacted—including officials of nine organizations collectively representing refiners, pipeline operators, terminal operators, and retail marketers—still have a number of concerns about implementing the new emissions standards on schedule. But they believe they can resolve these issues before 2007. Regardless of their concerns, the representatives agreed that EPA should make no changes to the 2007 rule’s implementation dates and low-sulfur diesel fuel requirements because changing or delaying the rule would negatively affect the plans and investments already being made. Rather, these representatives believe the certainty the 2007 deadline provides, such as knowing what is required, is key to successfully implementing the standards in a timely and cost-effective manner. 
The representatives of the fuel industry organizations we contacted said that most of their members' efforts to meet the low-sulfur diesel fuel requirements are still in the planning phase. While the industry has the technical ability to produce fuel to meet the requirements—low-sulfur fuel is already being produced in limited quantities today—the fuel industry remains concerned about supply and distribution issues that could directly hinder implementing the requirements (see table 3). As table 3 shows, the fuel industry’s primary concerns include the high probability that low-sulfur fuel supplies will be contaminated before they reach the market or retail level and the potential for shortages of the low-sulfur fuel. The concern over possible contamination of the fuel arises from the limited experience with these products. If such fuel is contaminated, it will damage emissions controls. Although the 2007 rule requires fuel refiners to produce diesel fuel containing no more than 15 parts per million of sulfur, delivering such fuel to the end user may require refiners to produce fuel with an even lower sulfur content. Sulfur from other fuel products may unintentionally be added to low-sulfur supplies through contamination in the distribution system. For example, a pipeline carries many different fuel types, grades, and compositions to accommodate product demands that vary both regionally and seasonally. As a result, there is always a certain amount of intermixing between the first product and the second at the point in the pipeline where the two meet. If these products have different sulfur contents, the mixture where the two fuels meet may contain much more sulfur than the lower-sulfur of the two products. Furthermore, products containing large amounts of sulfur may leave residual amounts in the system that could become blended into other products, raising their sulfur content. 
Therefore, according to fuel industry representatives, fuel leaving the refinery must have a much lower sulfur content than 15 parts per million to allow for an increase through contamination. Because the extent of the contamination cannot be precisely predicted in advance, the exact sulfur level of the fuel that refineries would have to produce is uncertain. Pipeline operators expect that refiners will have to provide diesel fuel with sulfur levels as low as 7 parts per million in order to compensate for possible contamination from higher sulfur products in the system. However, even at these lower levels, the nine fuel industry representatives said that the likelihood of contamination during the delivery of the fuel through the distribution system is extremely high. Even if the low-sulfur fuel that pipeline operators receive meets their specifications, pipeline operators are unsure how they will sequence the new fuel with other products in the pipeline to prevent its contamination. Once contamination occurs, the product could no longer be sold or used as low-sulfur highway fuel, thereby leaving less of the low-sulfur fuel available for sale. Fuel distributors also said that the potential for contamination increases when a fuel additive such as kerosene is blended with diesel fuel. Kerosene is commonly added to highway diesel fuel in the northern United States to prevent fuel from thickening in the cold weather. Although the 2007 rule requires that additives must meet the same low-sulfur standard, refiners are not currently producing low-sulfur kerosene. Fuel industry representatives also are concerned about the adequacy of testing to detect and avoid widespread contamination of low-sulfur fuel supplies. According to these officials, testing is crucial in determining whether the low-sulfur fuel is meeting the standards at every point in the distribution system. 
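The blending arithmetic behind these concerns can be illustrated with a simple volume-weighted calculation. The sketch below is illustrative only; the 500 parts per million contaminant level and the 2 percent interface fraction are assumptions for the example, not figures from the fuel industry representatives.

```python
# Illustrative sketch (assumed values): volume-weighted sulfur content of a
# pipeline interface mixture, showing why fuel may need to leave the refinery
# well below the 15 ppm limit to tolerate any contamination in transit.

def blended_sulfur_ppm(base_ppm, contaminant_ppm, contaminant_fraction):
    """Sulfur content of a blend, weighted by the contaminant's volume fraction."""
    return (base_ppm * (1 - contaminant_fraction)
            + contaminant_ppm * contaminant_fraction)

def max_contaminant_fraction(base_ppm, contaminant_ppm, limit_ppm):
    """Largest contaminant volume fraction that keeps the blend at the limit."""
    return (limit_ppm - base_ppm) / (contaminant_ppm - base_ppm)

# A 7 ppm batch contaminated by 2% of an assumed 500 ppm product:
print(round(blended_sulfur_ppm(7, 500, 0.02), 1))   # 16.9 ppm, over the limit

# Tolerable contamination when the batch starts at 7 ppm vs. exactly 15 ppm:
print(round(100 * max_contaminant_fraction(7, 500, 15), 2))   # about 1.62%
print(round(100 * max_contaminant_fraction(15, 500, 15), 2))  # 0.0%, no margin
```

Under these assumed figures, a batch starting at 7 parts per million can absorb only about 1.6 percent contamination by a 500 parts per million product before exceeding the limit, while fuel leaving the refinery at exactly 15 parts per million has no margin at all.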
Product testing is performed to control contamination and to define “cut points,” locations in a stream of products through a pipeline where one type of product, such as high sulfur diesel, ends and another product, such as low sulfur diesel, begins. Early detection of contamination gives pipeline and terminal operators flexibility in correcting problems before large portions of a product batch become ruined. However, eight of the fuel industry representatives we contacted expressed concern that a reliable and accurate test or testing device for measuring sulfur content is currently not available. Because of these contamination issues, nine fuel industry representatives expressed concern about whether there would be an adequate supply of the low-sulfur fuel nationwide during the phase-in period from 2007 to 2010. For example, because adding separate storage tanks for low-sulfur fuel to prevent contamination would be expensive, terminal operators and retail marketers said they may be less likely to make the investment to carry this fuel. Furthermore, according to fuel industry representatives, trucking companies that deliver low-sulfur fuel may need to dedicate trucks exclusively for this purpose to ensure product integrity during delivery. This may lead to fuel shortages, which could be especially severe in the northern United States where fuel distribution is generally limited to delivery by truck. In contrast to several of the fuel industry’s concerns, an EPA report summarizing data on refiners' plans to produce low-sulfur diesel fuel before 2010 stated that (1) the fuel industry is on target for complying with the low-sulfur fuel standard and (2) low-sulfur diesel fuel production will be sufficient to meet demand and the fuel will be available nationwide. Although EPA acknowledges in its report that the information is preliminary, the agency believes that it provided the clearest snapshot of the highway diesel fuel market available at the time. 
According to EPA, the agency will update this report in 2004 and 2005 based on the most current data from the refiners. Despite their differing views on the progress towards meeting the 2007 rule's requirements, fuel industry representatives agree there is still sufficient time to resolve their concerns. One of the representatives stated that, even without knowing how much the fuel is likely to be degraded through contamination, refineries are designing their plans and getting their budgets approved to make the needed modifications to their facilities. The representatives of the five environmental and health groups we contacted are generally encouraged by industries’ progress in developing the technologies needed to implement the 2007 rule. While all five groups commented on the 2007 rule when it was proposed in 2000, three of the groups’ representatives also were members of EPA’s Clean Diesel Independent Review Panel and assessed the industry’s progress in developing the needed technologies. In its 2002 report, the panel concluded that significant progress had been made and, although some challenges may remain, none were considered to be insurmountable. The fourth group’s representatives have been involved in a number of pilot projects with states, local governments, and the private sector involving the use of innovative emissions control technologies. Those experiences, in conjunction with their involvement in commenting on the proposed 2007 rule, have led the group to believe that the technology is viable. Finally, based on information gathered from emissions control equipment manufacturers, the fifth group’s representative believes that the technology is progressing well. All of the representatives said that they are highly supportive of the 2007 standards. Although two of the five groups initially wanted the standards to be implemented fully in 2007 rather than phasing them in through 2010, none of the groups wanted any changes made to the rule now. 
In fact, the only concern the representatives we contacted expressed was that there would be a delay in the rule’s implementation, resulting in a reduction of the anticipated environmental and health benefits. For example, the representative of the State and Territorial Air Pollution Program Administrators/Association of Local Air Pollution Control Officials stated that the diesel emissions reductions expected from timely implementation of the 2007 standards are critical to state and local air pollution control agencies’ efforts to meet air quality standards. According to this representative, achieving these emissions reductions is especially important for states and localities with areas that already have air quality problems. Many of these areas are relying on the 2007 standards to achieve their expected emissions reductions on time. Trucking officials we contacted expect that the costs of purchasing and operating trucks meeting the 2007 standards will be significantly higher than those of comparable earlier models, despite EPA’s estimates to the contrary. These officials said they do not consider EPA’s analysis credible, primarily because they believe the agency previously had seriously underestimated the industry’s costs to comply with the consent decrees. For example, EPA’s regulatory impact analysis for the 2004 emissions standards concluded that the industrywide cost to reduce nitrogen oxides would be about $224 per ton. Subsequently, in 2000, EPA estimated that to comply with the pull-ahead provisions of the consent decrees, these costs could increase to $272 per ton. However, an industry analysis stated that the actual cost could range between $8,000 and $13,000 per ton. EPA officials, in commenting on the variance in the agency’s cost estimates, pointed out that the estimates it developed for the 2004 standards and its estimates of engine costs to meet the accelerated deadline for development are not comparable. 
Accelerating the schedule would generate additional costs that would not have been components of the 2004 estimate. For example, EPA officials noted that when the agency derived its estimates of costs to comply with the 2004 nitrogen oxide standards, it did not know that heavy-duty engine manufacturers had installed defeat devices on existing engines. Thus the actual cost to comply with 2004 standards will include the cost to “catch up” with the previous standard. We did not assess the accuracy of EPA’s cost estimates. Nevertheless, the difference in EPA’s estimates has raised concerns among trucking company officials about the accuracy of EPA’s 2001 estimate of engine costs to comply with the 2007 standards. One reason many industry officials that we contacted expect the compliance costs of the 2007 standards to be higher than EPA’s prediction is that the new trucks will incorporate significant technological advancements over current equipment to control nitrogen oxide emissions. Many of these officials believe this technology will add thousands of dollars to the purchase price of new trucks rather than the long-term $3,200 estimated by EPA. In addition, these officials are concerned that the 2007 trucks will experience another 3 to 5 percent loss in fuel economy—added to the 3 to 5 percent loss resulting from the consent decrees—that could increase their companies’ fuel costs by millions of dollars per year. Even minor increases in business costs can have adverse effects in the trucking industry, according to trucking industry officials we contacted, because these companies’ profit margins are very narrow—sometimes only 2 cents per dollar earned. The officials claim that the highly competitive nature of the trucking business precludes companies from passing such significant cost increases to their customers. 
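The fuel-economy arithmetic behind these concerns is straightforward: an x percent drop in miles per gallon raises fuel consumption by a factor of 1/(1 - x). The sketch below is illustrative only; the fleet size, annual mileage, baseline fuel economy, and fuel price are assumptions, not figures reported by the trucking companies.

```python
# Illustrative sketch (all fleet figures assumed): annual fuel-cost impact of
# a given fractional loss in fuel economy across a truck fleet.

def added_fuel_cost(trucks, miles_per_truck, base_mpg, price_per_gal, economy_loss):
    """Extra annual fuel cost when fuel economy drops by economy_loss (a fraction)."""
    base_gallons = trucks * miles_per_truck / base_mpg
    degraded_gallons = trucks * miles_per_truck / (base_mpg * (1 - economy_loss))
    return (degraded_gallons - base_gallons) * price_per_gal

# Assumed fleet: 1,000 trucks, 100,000 miles/year each, 6 mpg, $1.50/gallon,
# and a 5% fuel economy loss:
print(round(added_fuel_cost(1000, 100_000, 6.0, 1.50, 0.05)))  # about $1.3 million
```

Under these assumed figures, a 5 percent fuel economy loss alone adds on the order of a million dollars per year for a 1,000-truck fleet, and proportionally more for larger fleets or higher fuel prices, which is consistent with the scale of the officials' concern.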
For example, the two trucking companies we contacted that bought only trucks with the new engines prior to October 2002—and in so doing incurred millions of dollars in additional expenses, according to company representatives—said they had to compete against companies that pre-bought trucks with the older engines and avoided the additional expenses. These two companies felt they could not increase the fees they charged without risking the loss of customers to their competitors. According to officials of these two companies, even large, profitable companies can afford to absorb these losses for only a short time, and small- and mid-sized companies are likely to have also experienced difficulties. None of the engine manufacturers could estimate with precision the amount that acquisition or operating costs are likely to increase. However, all of the engine manufacturers we contacted agreed that the engines and emissions control equipment for 2007 trucks will be more expensive to buy and to operate than comparable previous models. By February 2004, four of the five engine manufacturers had announced the technologies they planned to pursue for 2007 and all five had stated their plans to have limited numbers of prototype engines available for road testing by mid- to late-2005. However, some trucking companies still had doubts as to whether engine manufacturers would actually deliver prototypes for road testing in the promised timeframes. For example, officials of one trucking company told us that, under the original timetable, the manufacturers were to select their technologies during the summer of 2003 in order to stay on schedule to deliver prototypes no later than mid-2005. The 6-month delay added to the company’s concern about the availability of prototypes to enable valid field evaluations by mid-2005. 
According to 7 of the 10 trucking firms we contacted, they need 18 to 24 months to put a sufficient number of miles on heavy-duty trucks—under a variety of driving conditions through all four seasons of the year—to fully evaluate the vehicles’ operating costs, performance, reliability, and durability. Officials at all 10 trucking companies said that they were reluctant to take the risks associated with the new technologies unless they have enough time to fully assess the new trucks. For example, officials at one company noted that it has only 12 maintenance facilities nationwide and when a truck breaks down on the highway, it is very expensive to repair. Consequently, these officials are not willing to take a chance on equipment that has not been adequately tested. Without adequate testing time, the trucking company officials we contacted believe that they and other trucking companies will likely pre-buy trucks with older engines before 2007, with more companies purchasing more trucks than they did before the consent decrees’ October 2002 deadline. Even officials from one of the trucking companies that bought only trucks with new engines in 2002 said that they would consider pre-buying if the new equipment is not fully tested. According to most of the trucking industry officials we contacted, the adverse impacts of a pre-buy on trucking companies and engine manufacturers could be worse in 2007 than in 2002. Many of the trucking companies we contacted agreed that the industry needs to have the cost, reliability, and other uncertainties associated with the 2007 trucks resolved in order to achieve greater stability within the industry. In late February 2004, we again contacted all 10 trucking companies to determine the extent to which the engine manufacturers’ announcements that test vehicles would likely be available in 2005 may have eased their concerns regarding the introduction of new engine and emissions control technologies in 2007. 
Of the five companies that responded to our inquiries, one stated unequivocally that the engine manufacturers’ announcements had not at all reduced its concerns. Representatives of the remaining four companies stated that their levels of concern had been somewhat reduced by the announcements, but they continue to be concerned about a number of unresolved issues. For example, despite engine manufacturers’ assurances, companies continue to be concerned about the durability of the new engines as well as the cost of purchasing and operating them. In addition, representatives of some of these companies questioned whether the availability of a relatively small number of test vehicles in a limited number of fleets could provide sufficient information to allay the concerns of the trucking industry as a whole. Finally, some trucking companies highlighted lingering concerns regarding potential shortages and higher costs of low-sulfur diesel fuel. EPA has taken a number of steps to help with and monitor the engine and fuel technology development. For example, EPA staff continue to meet with representatives of the key industries, issue reports on technology progress, and conduct stakeholder workshops. Representatives of some of the engine manufacturers, the emissions control technology manufacturers association, the fuel industry, and the environmental and health groups we contacted commended EPA for helping to advance the needed technologies. However, some of the engine manufacturers and the trucking companies we contacted would like more help and reassurance that the technology will be ready when needed, including economic incentives for manufacturers to produce engines on time and for trucking companies to buy them as scheduled. Furthermore, some trucking representatives believe that EPA has not included them in, or listened to their concerns about, implementation of the standards. 
EPA program managers maintain that the agency has given the industries more lead-time than required to produce the technology and provided extensive assistance and monitoring. They stated that the agency could take a number of additional actions if the standards cannot be implemented on time, such as granting individual companies temporary relief from the standards or postponing active enforcement. But EPA sees no evidence that timely implementation of the standards is not achievable. According to EPA, the agency is not required to ensure that the engine and emissions control technologies or low-sulfur fuel supplies will be available on time or that the industries comply in a timely manner. However, the Clean Air Act requires that EPA establish standards taking into consideration the availability and costs of technology, lead-time, and other factors. In responding to the act’s requirements, EPA concluded that all of the evidence indicates that industries can and will implement the engine and fuel requirements of the 2007 rule successfully and in a timely manner. According to EPA, the technologies for meeting the standards are well known and some are already in use. For example, refineries are now using technology to reduce sulfur in diesel fuel and engine manufacturers are installing filters that reduce fine particle emissions from engines. In addition, the technologies for meeting the nitrogen oxide standard in the 2007 rule are being developed at a rate faster than anticipated, according to EPA, and the remaining engineering issues are being addressed. EPA’s confidence is based, in part, on provisions that the agency built into the 2007 rule to ease compliance. For example, in developing the rule, EPA gave the industries 6 to 10 years to plan, develop, and produce fuel and engines that meet the requirements. 
By comparison, the Clean Air Act only requires EPA to allow no less than 4 years of lead-time for regulated entities to develop any new technologies required to comply with a rule. EPA also included hardship and other provisions to address problems that certain small businesses may have in complying with the rule. In addition to specific rule provisions, EPA continues to take steps to monitor the development of needed technologies and fuel supplies and to ensure that the standards will be successfully implemented. These efforts include:

Technology Progress Review Meetings - According to EPA, agency representatives have continuously met with diesel engine manufacturers, emissions control equipment producers, oil refiners, refinery technology companies, and fuel distributors; visited technical research centers; and met with leading engineers and scientists from more than 30 companies for briefings on the progress being made to comply with the 2007 standards.

Progress Review Reports - In the preamble to the 2007 rule, EPA committed to issuing a progress report every 2 years on the status of nitrogen oxide adsorber technology, the emissions control technology that the agency believes to be the most promising for meeting the standards. The first report, issued in June 2002, concluded that the engine manufacturers’ and the emissions control equipment industry’s efforts to develop this technology were progressing rapidly and on schedule. The report also included an update on the status of filters to control particulates and the refining industry’s progress towards meeting the low-sulfur diesel fuel requirements for 2006. The report did not include supporting technical evidence from each company to validate EPA’s conclusions. EPA plans to release its second engine progress review report in early 2004. 
Refiners Pre-Compliance Reports - The 2007 rule requires fuel refiners and importers to submit annual reports from 2003 through 2005, which must contain information on, among other things: (1) an estimate of the volumes of low-sulfur and higher-sulfur diesel fuel that each refinery plans to produce or import; and (2) engineering plans, the status of efforts to obtain any necessary permits and financial commitments for making the necessary refinery modifications to produce low-sulfur fuel, and construction progress. EPA summarized these data and issued its first annual report in October 2003, stating that the industry is on target for complying with the low-sulfur fuel requirements on time, fuel production will be sufficient to meet demand, and low-sulfur fuel will be widely available nationwide. EPA plans to issue additional pre-compliance reports in 2004 and 2005.

Implementation Workshops - EPA has held public workshops on the 2007 standards and plans to hold additional ones in the future as appropriate. In November 2002, EPA sponsored a clean diesel fuel implementation workshop, which focused on issues such as record keeping and reporting requirements for the fuel industry and diesel fuel refining, distribution, storage, and marketing challenges. In addition, in August 2003, EPA, the trucking industry, and engine manufacturers co-sponsored another implementation workshop to facilitate the exchange of information among EPA, engine manufacturers, and other parties including truck manufacturers and truck operators, and to give EPA a forum to provide additional guidance on implementation issues. 
Clean Diesel Independent Review Panel - As previously discussed, at EPA’s request, the Clean Air Act Advisory Committee’s Clean Diesel Independent Review Panel—an expert panel composed of representatives of engine and emissions control equipment manufacturers, trucking companies, fuel refiners and distributors, and environmental and health organizations—independently assessed industries' progress towards complying with the 2007 rule. In its October 2002 final report, the panel found that both the engine and fuel industries were developing the technologies needed to comply with the 2007 standards at an appropriate rate, but that these industries needed to address a number of technical issues for implementation to be successful. The panel agreed that none of these issues was insurmountable and that, for a number of these issues, EPA’s planned implementation workshops were an appropriate means to move forward.

Guidance Documents - In November 2002, EPA issued guidance on engine manufacturers’ testing procedures to determine whether their engines comply with the new standards, and the agency also issued a draft document responding to questions raised by the fuel refining and distribution industries during the workshop held earlier that month. EPA plans to issue additional guidance on implementing the 2007 standards, if needed.

Other Technology-Related Activities - According to EPA, the agency has taken an active role in a number of areas regarding technology development and information-sharing with the diesel engine industry and other stakeholders, including: an on-going testing program at EPA’s National Vehicle and Fuel Emissions Laboratory in Ann Arbor, Michigan, in which EPA has evaluated the status of engine and emissions control technology, including particulate filters and nitrogen oxide adsorber catalyst technologies. 
EPA believes that this program helps to inform the agency of the current state of these technologies and allows EPA to make general information on technology progress publicly available.

two government/industry technology demonstration programs sponsored by the Department of Energy: the Diesel Emission Control-Sulfur Effects Project, completed in 2001, which primarily focused on the impacts of diesel fuel sulfur on emission control technologies; and the Advanced Petroleum-Based Fuels-Diesel Emissions Control Project, which focuses on developing and demonstrating engine and emissions control systems that can comply with the 2007 standards.

a number of industry-sponsored task groups, including (1) the Diesel Engine Oil Advisory Panel, made up of the American Petroleum Institute, the American Chemistry Council, the American Society for Testing and Materials (ASTM), and a number of individual oil, engine, and additive companies, which is developing voluntary standards for engine oil formulations for the 2007 engines; and (2) the Diesel Fuel Lubricity Task Force, sponsored by ASTM, which is working to develop fuel test methods and specifications. EPA participates in these groups to provide input on technical issues and clarification on the 2007 rule, and to track the industry’s progress.

Other Outreach Activities - EPA has participated in numerous conferences and meetings sponsored by a wide range of stakeholders at which agency officials have made presentations discussing the 2007 rule. EPA believes that these conferences are useful (1) for stakeholders to get the latest information on the status of the 2007 rule implementation and (2) for EPA to answer questions about the rule and hear first-hand input from the regulated industry and other stakeholders. 
Based on all of these activities, EPA maintains that industries will successfully implement the requirements of the 2007 rule on time and that, beyond the agency’s planned workshops and other monitoring and outreach activities, it needs to take no additional actions to ensure timely compliance. In general, a number of stakeholders we contacted—the association of emissions control equipment manufacturers, a number of the fuel industry representatives, the environmental and public health groups, and two of the engine manufacturers—either commended EPA for its efforts to ensure the needed technology is ready on time, or believe the agency is already doing enough to provide such assurances. Two of the remaining engine manufacturers and some fuel industry representatives, as well as all of the trucking companies, would like more help in developing the technology or proof that it is on track. The association of emissions control equipment manufacturers praised EPA for its efforts to assist in the development of the needed technology. In addition, many of the fuel industry representatives we contacted commended EPA’s efforts to reach out to them and actively involve them in preparing for the implementation of the 2007 standards. In particular, the representatives found EPA’s implementation workshops and its draft question-and-answer document to be the most helpful. Representatives of the five environmental and public health groups commended EPA’s efforts to implement the 2007 standards and to include them and other stakeholders in the implementation process. Specifically, the groups said that EPA’s outreach efforts were comprehensive and inclusive. Not only did EPA solicit comments from as many stakeholders as possible during the rulemaking process, but it also has continued to encourage discussions between the stakeholders at its implementation workshops. 
Generally, the groups agreed that EPA does not need to go beyond its current and planned activities to ensure timely implementation of the standards. As for the five engine manufacturers, representatives from one found EPA’s efforts to be particularly supportive and representatives from two others said the efforts were “somewhat” effective in easing development of the needed technologies. Officials from one of these manufacturers said that EPA has been responsive to the manufacturers’ questions, all of which should help them meet the 2007 standards. Representatives from another manufacturer stated that EPA has been diligent in monitoring the progress of engine development, visiting suppliers as well as the engine makers’ facilities, which has helped speed the development of the engines. The agency’s work in its Ann Arbor, Michigan, research laboratory has also helped in this regard. In contrast, officials from a fourth company noted that EPA had not been particularly responsive to the industry or its concerns. (The remaining manufacturer’s representatives did not express an opinion in this regard.) On the other hand, two engine manufacturers described workshops sponsored by EPA that focused on complying with the 2007 rule as only marginally effective. For example, one engine manufacturer’s officials commented that the workshops appear to be “staged” and convened only to confirm the agency’s preconceived ideas, although EPA noted that members of the trucking and engine manufacturing industries co-sponsored these workshops, and that would make it difficult for the agency to preordain their outcomes. These companies’ officials further stated that they did not need EPA’s help in developing new diesel technologies, but did need the agency’s assistance in convincing customers to buy the trucks with the 2007 engines. 
Four of the five manufacturers also asserted that economic incentives for trucking companies could assist them and facilitate the implementation of the 2007 rule. In general, officials from both the engine manufacturing and trucking industries favored tax breaks or subsidies for trucking companies to purchase the new technologies on time. According to these officials, investing millions of dollars in developing or buying new, relatively unproven equipment carries an inherent business risk and provides companies with a powerful incentive to stay with older, familiar—and dirtier—equipment. EPA officials told us that the agency would have to request authority from the Congress to provide industries with economic incentives. As for other stakeholders, representatives of the terminal and marketing segment of the distribution industry, in particular, were disappointed that the Clean Diesel Independent Review Panel addressed only technology issues and not distribution issues, such as contamination. Furthermore, all of the trucking companies we contacted agreed that EPA could do more to address the uncertainties facing their industry, and thereby help minimize any pre-buy that might occur. In particular, while EPA actively involved them in developing the 2007 rule, they believe that the agency has not addressed their concerns in implementing the standards. For example, according to ATA officials, EPA did not initially include representatives of the trucking industry in the agency’s Clean Diesel Independent Review Panel, and invited ATA to participate only after the organization complained about being excluded. EPA acknowledged that, in retrospect, it should have included trucking industry representatives on the panel from the outset and responded by adding an ATA representative to the panel. 
Furthermore, ATA officials told us that the panel’s review did not include several important technical issues, such as consideration of alternative emissions control technologies, and that panel members were discouraged from raising such issues. Finally, the ATA officials said that several panel members published reports dissenting from the panel’s main conclusion that technology development was on schedule, but that EPA has not made these reports generally available. As a result of these factors, ATA officials said they do not have great confidence in the panel’s findings and they remain largely unconvinced that trucking companies’ interests have been well represented in EPA’s panel process. According to EPA officials, however, the panel’s membership was composed overwhelmingly of experts on engine and vehicle technology development. Some trucking companies are also skeptical of the effectiveness of EPA’s other efforts to monitor and assist the development of technology for the 2007 rule. For example, several trucking company officials we contacted believe EPA has already made important implementation decisions—largely without input from trucking companies—and the workshops’ main function is merely to validate those decisions. Several trucking companies and ATA officials expressed the belief that EPA’s overall approach to implementing the 2007 rule is too inflexible. For example, the ATA officials maintain that EPA’s analysis supporting the 2007 rule dramatically understates trucking companies’ costs to comply with the rule and ignores the possible severe effects of these costs on the companies. ATA representatives have recommended that EPA update its analysis to take into account better information that is now available. 
However, EPA officials continue to believe that the regulatory impact analysis the agency prepared in support of the rule is sufficient, and pointed out that the agency is not required to, and does not routinely, update its analysis supporting such rulemakings. They also maintain that engine manufacturers, not trucking companies, are the entities being regulated under the 2007 rule. As a result, following the rulemaking, most of EPA's direct dealings were with engine makers, not trucking companies, according to these officials. However, they said that, more recently, EPA has actively consulted trucking companies. The trucking companies would like EPA to work more directly and closely with them, hear and address their concerns, and provide more reassurance that the technologies will be ready by 2007. According to EPA, the agency is not required to take action in the event that the engine and emissions control technologies and low-sulfur fuel are not available in time to implement the 2007 standards as scheduled. However, according to EPA, if circumstances arise that would require additional action, the agency will address them at that time. EPA believes that timely implementation of the 2007 standards is achievable and that planning for failure to meet the deadline would undermine the rule. EPA maintains that the industries' collective efforts to develop the plans and technologies needed to meet the standards, combined with the agency's monitoring of their progress, are the proper course of action at this time and are showing significant positive progress toward timely and successful implementation. According to EPA, entities that are being regulated have for decades developed technologies and implemented requirements based on the certainty that the regulations would not be changed in a way that would disrupt their planning and investment. With this in mind, EPA maintains that it would not be prudent or good government to change the regulations or delay their implementation.
According to EPA, the agency's efforts to provide the industries significant lead time for developing the needed technologies, ensure that all stakeholders are actively developing them, and monitor their progress are the most prudent actions the agency can take. According to EPA, if it appears that industries cannot comply with the 2007 standards on time, the agency would not readily make substantive changes to the rule, such as modifying the implementation dates or changing the allowable emissions levels of the standards, because industries have invested large amounts to comply with the standards in the specified timeframe. Nevertheless, EPA officials point out that, if there were convincing evidence that modifying some aspect of the requirements was justified and necessary, the agency could take a number of actions: EPA could revise the rule in response to a specific petition. Under the Clean Air Act, any person can petition the EPA Administrator to change a rule. The petition must demonstrate that it was impracticable to raise the objection during the public comment period when the rule was proposed and that the objection is of central relevance to the outcome of the rule. EPA believes that the appropriate mechanism for substantively changing the 2007 requirements would be to undertake a standard rulemaking process in response to a petition, in which the agency would publish a notice of proposed rulemaking in the Federal Register and request, review, and address public comments on the proposed revisions to the rule. EPA could also develop nonconformance penalties in the event that one or more engine manufacturers was unable to produce compliant engines, as it did for the 2004 standards and consent decrees.
EPA establishes nonconformance penalties when: (1) the emission standard is more stringent than the previous standard or an existing standard becomes more difficult to achieve because of a new standard, and if EPA finds that it will require substantial work to comply; and (2) it is likely that one or more manufacturers will be a “technological laggard,” unable to produce compliant engines by the required date. Typically, EPA decides whether to establish penalties 1 or 2 years before the compliance dates, primarily because information on manufacturers’ ability to comply is not available until then. Therefore, EPA believes that it is not appropriate to consider penalties before late 2004. In the event that an individual refiner is unable to comply with the 2007 rule, EPA could grant the company relief from meeting its low-sulfur requirement in response to a request under the rule’s hardship application process. The refiner would then develop an alternative compliance plan. EPA may, in certain circumstances, determine in advance that it will not actively enforce an environmental regulation, including the 2007 rule. However, according to EPA, the agency would take this action only if it is clearly needed to serve the public interest. Typically, EPA grants requests for selective enforcement of a regulation when a weather emergency, fire, explosion, or similar circumstance outside a requester’s control makes compliance impracticable, or when compliance with the original rule would cause the regulated entities significant hardship. The consent decrees and 2007 standards are critical pieces of EPA’s strategy to control harmful diesel emissions and protect public health. While the accelerated schedule in the consent decrees had an impact on both the engine and trucking industries, it helped to further the agency’s emissions reduction goals by putting cleaner diesel engines on the road earlier than otherwise planned. 
The agency has also made a significant investment in developing, and ensuring the implementation of, the 2007 standards. Nevertheless, stakeholders from two critical industry groups, engine manufacturers and trucking companies, would like more help. In particular, engine manufacturers would like assurances from EPA that, once the cleaner engines are available, the trucking industry will purchase them. Furthermore, the trucking industry, as a result of its experience with the consent decrees, believes it has not been a key player with EPA in responding to the consent decrees or implementing the 2007 standards. Because the trucking industry is a major source of the emissions EPA is trying to combat, if trucking companies delay purchase of the cleaner engines, the economic effect could be more severe than what occurred as a result of the decrees and could postpone the emissions reductions. The trucking industry is also a key part of the transportation system the nation needs to maintain a healthy economy. Therefore, it is important to achieve emissions reductions while minimizing the negative economic effects on trucking and its related industries. For these reasons, EPA may want to consider what additional steps it could take to help engine manufacturers produce clean engines in time for road testing, to reassure trucking companies that they will be able to buy tested engines on time, and to address major concerns of other key stakeholders. Careful consideration should be given to these steps, however, so that they do not unduly delay progress toward the standards. For example, EPA could consider whether it has time to establish an independent expert panel, similar to its 2002 panel, to review industry's progress in developing the necessary technologies. The panel should consist of representatives of all of the key stakeholders, who would identify and address their major concerns to the extent practicable.
The panel could review the data EPA has already collected or new data from the engine and fuel industries to measure the progress of technology development, communicate this to all stakeholders, and determine what, if any, additional actions, such as incentives, are needed to ensure that standards are met. The agency would have to establish the panel as soon as possible in 2004, however, if it is to have enough time to be effective and not unduly delay progress. Making more of an investment in working with all of the stakeholders critical to meeting the 2007 standards would help EPA ensure that it will achieve its goals of reduced emissions and increased public health protection. To maximize public health and air quality benefits, and minimize adverse impacts on affected industries, we recommend that the Administrator, EPA, consider additional opportunities to allay engine, fuel, and trucking industry concerns about the costs and likelihood of meeting the 2007 standards with reliable engine and fuel technology. Opportunities could include better communicating with all stakeholders on the remaining technological uncertainties. EPA could also convene another independent review panel to (a) address stakeholders’ remaining concerns; (b) assess and communicate the progress of technology development; and (c) determine what, if any, additional actions are needed to meet the 2007 standards such as considering the costs and benefits of incentives for developing and purchasing the technology on time, and other alternatives. We provided EPA with a draft of this report for review. The Assistant Administrator for Air and Radiation said EPA believes that, in many respects, our report is consistent with the agency’s assessment of the situation leading up to the implementation of the 2007 standards. However, the agency has concerns about the basis for certain of our findings on the standards. 
More specifically, EPA asserted that we (1) present selected stakeholders’ opinions without validating them and ignore evidence that the agency believes would prove or disprove their validity, (2) overstate the challenges to having fuel and engine technologies ready on time to meet the 2007 standards, and (3) inaccurately portray EPA’s efforts to work with stakeholders in developing the rule. As to our recommendations, EPA sees merit in using financial incentives to achieve the 2007 milestone, but does not see an agency role in this regard. Neither does the agency see a need to convene an independent technology review board. We disagree with EPA’s assertions. In our view, EPA needs to work with stakeholders to better address any remaining concerns they have about the availability of the new engines and fuel required to meet the 2007 standards. We fully appreciate that the anticipated emissions reductions are critical for many states whose air quality is in trouble, that the 2007 standards are vital to protecting public health, and that the agency and the engine, emissions control, and fuel industries have made extensive efforts to successfully implement the 2007 rule. We also recognize that to achieve the rule’s objectives, the trucking industry must purchase trucks with the new engines beginning in 2007. Otherwise, we are concerned that the nation may relive the negative effects that resulted from the 2002 consent decrees. In 2002, trucking companies pre-bought older engines before the deadline, delaying emissions and health benefits, because they believed they did not have enough time to test new engines or enough information on costs. 
To ensure that this does not happen with the 2007 standards, we believe EPA should strengthen its process for working with stakeholders to allay any remaining concerns about whether fuel will be available in sufficient quantities and locations, whether enough new engines will be ready in time to thoroughly test them, and how much the engines will cost to buy and operate. With respect to EPA’s specific assertions, we disagree with EPA’s opinion that we present certain stakeholders’ views without regard to their validity. We carefully and consistently collected the views of engine and emissions control manufacturers, trucking companies, fuel industry representatives, and environmental and health groups, and were equally careful to accurately present their opinions, consistent with our methodology and quality assurance standards. Furthermore, the report acknowledges that we were unable to verify opinions about the technologies’ readiness with hard data on their design and performance because the industries manufacturing the technologies were not comfortable in releasing information about their individual designs. Nevertheless, we did not simply accept stakeholders’ views at face value, but where possible, assessed the basis for their opinions, such as reviewing available studies and reports on the technologies. We also disagree with EPA’s assertion that we did not consider additional information and evidence that agency program managers provided to us late in the course of our work after reviewing a draft summary of the facts to be used in the report. At that time, EPA provided extensive written comments on the summary, along with a number of press releases from engine manufacturers and trade press articles. In response, we spent considerable time carefully assessing all of this information and made a number of changes to the report where appropriate. 
However, the agency did not provide any additional quantitative data or other information that would allow us to better evaluate the stakeholders' positions. We also disagree with several EPA assertions that the report overstated the technological challenges to successfully delivering the necessary fuel and engines on time. In this regard, we devote considerable narrative to the views of the agency and the stakeholders who agree that both technologies are on track. However, we were obligated to acknowledge some stakeholders' concerns over the remaining technological risks and questions. In addition, we include the most current information possible on technological developments in our report. For example, after several manufacturers announced by February 2004 their plans to have a limited number of prototype engines ready for testing in 2005, we re-contacted the trucking company representatives to determine the extent to which these announcements addressed their concerns. Additionally, we acknowledge that EPA deserves credit for its activities to work with various stakeholders to help ensure that the technologies will be ready in time, and we devote considerable narrative to describing these activities in the report. We are also very careful to give a balanced presentation of the stakeholders' opinions about EPA's activities and therefore were obligated to acknowledge that some stakeholders questioned the agency's openness to their concerns and willingness to address them. For example, we note in the report that EPA officials acknowledged the agency initially did not invite anyone from the trucking industry to participate on the 2002 Clean Diesel Independent Review Panel and only did so after the industry lobbied the agency.
Finally, with regard to EPA’s comments on our recommendations, we want to emphasize that we are recommending that the agency consider additional steps to alleviate the remaining concerns raised by stakeholders, avoid a significant pre-buy of older engines, and better guarantee that the emissions and health benefits are achieved. We suggest actions for the agency to consider, but do not intend to limit the agency to the alternatives we suggested, especially if it could design more effective solutions. In this light, with regard to financial incentives, we recognize that the Congress must provide the agency direction and funding for such an approach, but expect that it would also look to the agency to play a role, such as making the initial proposal for incentives or helping to determine their merits and costs. As to convening an independent review panel, we do not believe that this would unduly delay the schedule for implementing the standards. In addition, we believe a panel could help address stakeholders’ remaining concerns, thereby helping to prevent a repeat of the negative impacts from the 2002 consent decrees and instead ultimately ensure that the critical emissions and health benefits anticipated from the 2007 standards are achieved in a timely manner. Appendix III contains the text of EPA’s letter along with our detailed responses to the issues raised. EPA also provided some technical comments, which we have incorporated as appropriate. 
We are sending copies of this report to the Chairman and Ranking Minority Member of the Senate Appropriations Committee and its Subcommittee on VA, HUD, and Independent Agencies; the Senate Committee on Environment and Public Works; the Senate Committee on Commerce, Science, and Transportation; the House Appropriations Committee and its Subcommittee on VA, HUD, and Independent Agencies; the House Committee on Energy and Commerce; the House Committee on Transportation and Infrastructure; the House Committee on Government Reform and its Subcommittee on Energy Policy, Natural Resources, and Regulatory Affairs; other interested members of Congress; the Administrator, EPA; the Director of the Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix IV. Our objectives in this review were to determine (1) the effects, if any, of EPA’s 1998 consent decrees with diesel engine manufacturers on trucking companies, engine manufacturers, and expected emissions reductions; (2) stakeholders’ views on industries’ ability to comply with the 2007 standards and EPA’s actions to ensure that the new engine technologies and low-sulfur fuel will be ready in time; and (3) if not, EPA’s options and plans for mitigating any potential negative effects on key industry sectors. To address the first objective, we performed econometric modeling using data on new Class 8 diesel truck production, GDP, and diesel fuel prices from January 1992 through June 2003 to determine the extent to which Class 8 truck purchases may have been associated with the consent decrees. 
We assessed the reliability of these data by reviewing existing information about the data as well as some testing of the truck data for obvious errors. In addition, we had discussions with the vendor concerning the reliability of the truck data. We determined that the data were sufficiently reliable for purposes of this review. Details of our methodology for this specific analysis are included in appendix II. In addition, we contacted, among others, officials of 10 of the nation's largest trucking companies as defined by the number of trucks in their fleets (see table 4). We identified these companies from data provided by the American Trucking Associations (ATA), an organization representing the majority of the trucking companies involved in freight transportation. Because ATA could not identify which of its member companies had purchased engines in the months before and immediately after October 2002, GAO and ATA agreed that the largest trucking companies, as determined by the total number of trucks in their fleets, were more likely than smaller companies to have purchased trucks during that period and, therefore, would be in the best position to recount their experience with both the new engines and the impacts of the accelerated schedule. ATA provided us with a list of 48 of their member companies with truck fleets ranging from a high of over 52,000 trucks to a low of 60 trucks. From this list, we selected the 10 companies that each had fleets of over 10,000 trucks in 2002. (This 10,000-truck level provided a natural breaking point in the data, since the next largest company owned about 8,400 trucks.) These 10 companies accounted for a total of 176,000 trucks, 3 percent of the total truck inventory in 2002. Because these companies were not selected randomly, we cannot project our findings to the entire trucking industry.
We asked the representatives of these companies a uniform set of questions about the companies’ strategies in reacting to the decrees, the effects of the decrees on their operations, and their experiences with the new engines designed to comply with the decrees. We also reviewed financial statements some of these companies submitted to the Securities and Exchange Commission to identify effects that the companies publicly disclosed. In addition, to determine the effects of accelerating implementation of the 2004 standards on the engine manufacturing industry, we contacted officials of the seven engine manufacturers that were subject to the consent decrees. These companies included Caterpillar Incorporated, Cummins Engine Company, Detroit Diesel Corporation, Mack Trucks Incorporated, Navistar International Transportation Corporation, Renault Vehicules Industriels, s.a., and Volvo Truck Corporation. As with the trucking companies, we asked the representatives of these engine manufacturers questions about their companies’ strategies with regard to the decrees, the decrees’ effects on their operations, and the performance of the new engines. We also reviewed some of these manufacturers’ Securities and Exchange Commission submissions. While we asked these companies for data to support their statements about the effects of the decrees, generally they said that it would be detrimental to reveal information about their business operations or technology designs because it might harm their competitive positions relative to other companies. We also did not identify any other independent analyses of the impacts of the consent decrees. To determine the air quality effects of the decrees, we reviewed EPA’s 1998 projections of the emissions reductions expected from accelerating the schedule, based on its estimate of the number of trucks that would have the new engines. 
We compared this to data on the actual number of trucks with new engines to assess the likelihood that EPA would achieve the expected emissions reductions. We also discussed with EPA officials and staff the basis for their estimates of the expected emissions reductions from a second provision of the consent decrees, whereby truck owners would have emission computer controls on their older engines adjusted during engine overhauls. To respond to the second objective, we contacted officials representing 16 organizations and companies from among those that offered the largest number of comments on EPA's 2007 emissions standards when proposed in 2000 (see table 5). We identified these stakeholders by first reviewing the list of organizations/persons commenting on EPA's proposed 2007 rule during the public comment period in 2000. EPA recorded over 700 separate comments on various issues relating to the rule. We used the number of issues on which individual organizations commented, as determined by EPA, as a proxy for the level of interest or concern by these organizations regarding EPA's 2007 rule. From EPA's response document, we identified over 500 separate commenters, ranging from individual citizens, local interest groups, and companies to national organizations representing major industries and environmental, health, and other interests. Using this information, we placed commenters in general categories reflecting the interests they represented, for example, the fuel industry or environmental and health interests. Within each category, we ranked the commenters based on the total number of issues on which each commented. From each category, we generally selected those commenters who addressed more than 25 issues. This approach eliminated all but 21 of the more than 500 commenters. We then made several modifications to this list.
First, we made an exception to retain the ATA, which commented on 24 issues but represents a large segment of the trucking industry, a key stakeholder affected by the 2007 rule. We also eliminated two commenters who represented agriculture interests and addressed more than 25 issues, because agricultural issues were not relevant to our review. Finally, we eliminated from our list most individual companies whose interests are represented by national organizations that were also on the list of contacts. We made this decision on the assumption that the national organizations would reflect the concerns of the individual member companies that also commented. However, we included in our list Marathon Ashland Petroleum because of the large number of issues on which this company commented, although an organization representing its interests was also included. We also included Cummins, Incorporated; Detroit Diesel Corporation; and Navistar International Truck and Engine Corporation, three of the original seven engine manufacturers subject to the consent decrees, primarily because we wanted to discuss the effects of the decrees on their industry and took the opportunity to discuss issues relating to the 2007 standards as well. In addition to the 16 organizations and companies identified through this process, we also contacted representatives of the refining and distribution sectors of the fuel industry to ensure that we had a broad range of views. These sectors did not appear to be represented among the commenting stakeholders, despite their key role in implementing the 2007 rule. These organizations included the Association of Oil Pipe Lines, the Independent Fuel Terminal Operators of America, the Independent Liquid Terminals Association, the Petroleum Marketers Association of America, and the Society of Independent Gasoline Marketers of America.
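The threshold-based screen described above can be sketched in a few lines of code. This is a hedged illustration only: the commenter names, categories, and issue counts below are invented, and the actual selection was performed manually from EPA's response document, not from a data file.

```python
# Hypothetical sketch of GAO's commenter-selection screen.
# All names, categories, and issue counts are invented for illustration.
commenters = [
    {"name": "National Engine Assn.", "category": "engine", "issues": 61},
    {"name": "Refiners Group", "category": "fuel", "issues": 44},
    {"name": "ATA", "category": "trucking", "issues": 24},
    {"name": "Local Citizens Group", "category": "other", "issues": 3},
]

THRESHOLD = 25  # commenters addressing more than 25 issues were retained

def select_commenters(commenters, threshold=THRESHOLD, exceptions=("ATA",)):
    """Keep commenters above the issue-count threshold, plus named
    exceptions (e.g., ATA at 24 issues), sorted by category and then
    by descending issue count within each category."""
    selected = [c for c in commenters
                if c["issues"] > threshold or c["name"] in exceptions]
    return sorted(selected, key=lambda c: (c["category"], -c["issues"]))

for c in select_commenters(commenters):
    print(c["name"], c["issues"])
```

In the report's actual process, this filter was followed by further manual adjustments, such as dropping agriculture commenters and companies already represented by national organizations.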
We asked all of these stakeholders to provide their views on whether the technologies needed to meet the 2007 standards would be available on time. We took a number of steps to try to assess the basis of support for stakeholders’ views about the readiness of technology to meet the 2007 standards. First, we asked each engine manufacturer that we contacted if the company could provide us with data to demonstrate the status of technology development. However, the representatives said that it would be detrimental to reveal information about their technology designs or business operations because it might harm their competitive positions relative to other companies. Alternatively, we evaluated the stakeholders’ positions by considering publicly available information, including studies and reports issued on the technologies and on the development of the standards. Because the representatives of the trucking companies we contacted had views about the availability, readiness, and costs of the engines for 2007 that differed from the other stakeholders, we took some additional steps to assess the basis of their views. For example, we asked the engine manufacturers and EPA officials to respond to the concerns raised by the trucking representatives, and where the manufacturers’ and agency’s views differed, we reflected them and the basis of their comments in the report for balance. We also considered the information we collected and the analyses we conducted in regard to the impacts of the 2002 consent decrees to determine if they offered any perspectives on the trucking industry’s concerns about meeting the 2007 standards. 
For example, we used the information showing that: (1) the industry pre-bought older engines prior to October 2002 because companies did not have engines in time to test their reliability and possible costs; (2) companies that had bought the new engines determined that both the purchase price and the operations and maintenance costs were higher than estimated; and (3) EPA developed its estimate of what it would cost to buy and operate new engines for 2007 in 2000, before technology designs were completed and selected. We used this information to assess the trucking representatives' concerns about meeting the 2007 standards. We also used the information obtained from the engine manufacturers to assess the trucking industry's concerns about how soon test engines would be available, such as the fact that manufacturers were 6 months behind schedule in selecting the technology they would use to meet the standards. We also asked all of the stakeholders we contacted to provide their views on EPA's efforts to ensure that the needed engine and fuel technologies will be available by 2007. We obtained information from EPA on its activities in this regard, provided a summary of these activities to the stakeholders we contacted, and asked them for their views on the effectiveness of these efforts. We also discussed with the Director of EPA's Office of Transportation and Air Quality, as well as program managers from the agency's Office of Air and Radiation (in Washington, D.C., and Ann Arbor, Michigan), their activities to ensure timely compliance with the standards, as well as their plans if the standards cannot be implemented on schedule. We conducted our work between January 2003 and February 2004 in accordance with generally accepted government auditing standards. This appendix describes the econometric models we used to analyze the relationship between EPA's 1998 consent decrees with diesel engine manufacturers and subsequent demand for Class 8 trucks. We used quarterly data on U.S.
and Canadian production of heavy-heavy-duty diesel trucks (classified by the industry as Class 8 trucks) for the years 1992 through 2003. We also accounted for the possible effects of gross domestic product (GDP), diesel fuel prices, and seasonal factors on truck demand in our analysis. After applying standard econometric techniques to control for possible biases in our analysis, we found that there was a significant increase in Class 8 truck production, ranging from about 19,000 to 24,000 trucks, in the 6 months before October 2002, which may be associated with EPA's consent decrees. These amounts represent 20 percent to 26 percent of the total 93,000 Class 8 diesel trucks produced in U.S. and Canadian plants during that 6-month period. To describe how EPA's consent decrees may have affected truck demand, we defined a binary variable, CD. CD takes the value of one for the 6-month period prior to October 2002 and the value of zero otherwise. In addition, since truck demand is likely to be seasonal, related to the strength of the economy, and related to diesel fuel prices, we included these three factors in our basic model:

Qt = β0 + γ1T1 + γ2T2 + γ3T3 + β1∆GDPt + β2DPt + β3CDt + εt,  (1)

where the β's and γ's are coefficients to be estimated. Q, ∆GDP, and DP denote quarterly truck production, the quarterly growth rate of GDP, and diesel fuel prices, respectively. The three binary variables, T1, T2, and T3, are seasonal dummies, which, like CD, take values of one for specified quarters but the value of zero otherwise. ε is a random error, to which all standard assumptions apply; t is the index for the time period. The GDP is an important indicator of the strength of the economy, which can be used by truck operators to gauge the strength of future demand for their services. We expect truck operators to purchase more trucks in response to a strong economy and vice versa, which implies a positive β1 in equation (1). On the other hand, we expect truck operators to delay truck purchases if diesel fuel prices are increasing, because of the importance of fuel in operating trucks.
As a result, we expect β2 to be negative in equation (1). To allow for possible autocorrelation, in models (2) through (4) we modeled the error term as a first-order autoregressive process, AR(1), in which εt = ρεt-1 + µt. The numbers for AR(1), as shown in table 7 for the analysis results, represent the coefficient ρ.

Qt = β0 + γ1T1 + γ2T2 + γ3T3 + β1∆GDPt + β2DPt + β3CDt + AR(1) + µt,  (2)

Qt = β0 + γ1T1 + γ2T2 + γ3T3 + β1∆GDPt + β2DPt + β3CDt + β4Qt-1 + AR(1) + µt,  (3)

Qt = β0 + γ1T1 + γ2T2 + γ3T3 + β1∆GDPt + β2DPt + β3CDt + β4∆GDPt-1 + β5DPt-1 + AR(1) + µt.  (4)

Including AR(1) in models (2) through (4) allows us to account for the possible temporal correlation or autocorrelation of factors that we did not consider (for example, truck insurance premiums and used truck prices, among other factors) with GDP or fuel prices. We included Qt-1, as in model (3), because truck production in the current period is closely associated with production in previous periods. In model (4), we included the lagged GDP growth rate, ∆GDPt-1, and fuel prices, DPt-1, in the previous period because truck operators may purchase more trucks in response to strong growth rates in GDP in previous periods, and they may delay truck purchases when diesel fuel prices have been increasing in previous periods. Although EPA's consent decrees directly affected the cost and engineering of diesel engines, data on diesel engine prices were not available. Therefore, we used data on quarterly Class-8 truck production in the United States and Canada from 1992 through June 2003. Truck production is closely tied to diesel engine production with a slight lag. In addition, for the best measurement, we intended to include only trucks produced, domestically or abroad, for U.S. domestic consumption and exclude those produced for overseas markets. However, this approach would not allow us to include in our analysis data from 1992 through 1997, because Ward's included separate domestic and export data for Class-8 truck production in the United States and Canada only after 1997. Prior to 1998, truck production data were aggregated for both the United States and Canada. The aggregate U.S.
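To make the specifications concrete, a model of this form can be estimated by ordinary least squares on simulated quarterly data. This is a minimal sketch only: the data-generating values, seed, and effect size below are invented, and the report's actual estimation used real Ward's data and an AR(1) error specification rather than this plain OLS illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 46  # quarterly observations, roughly 1992 through mid-2003

# Simulated regressors (illustrative values only, not the report's data)
dgdp = rng.normal(1.0, 0.5, n)    # quarterly GDP growth, percent
dp = rng.normal(1.20, 0.10, n)    # diesel fuel price, 1996 dollars per gallon
quarter = np.arange(n) % 4
T = np.eye(4)[quarter][:, 1:]     # three seasonal dummies (Q2-Q4)
cd = np.zeros(n)
cd[-4:-2] = 1.0                   # CD = 1 for the two quarters before Oct. 2002

# Simulated truck production with an assumed true CD effect of 10,000/quarter
q = 40_000 + 2_000 * dgdp - 5_000 * dp + 10_000 * cd + rng.normal(0, 500, n)

# Model (3)-style design: constant, seasonal dummies, dGDP, DP, CD, lagged Q
X = np.column_stack([np.ones(n - 1), T[1:], dgdp[1:], dp[1:], cd[1:], q[:-1]])
y = q[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
cd_coef = beta[6]  # coefficient on the CD dummy
print(f"Estimated CD coefficient: {cd_coef:,.0f} trucks per quarter")
```

Because CD spans two quarters, the implied pre-buy is the CD coefficient multiplied by 2, which is how the report's 20,198-truck figure is derived from a coefficient of 10,099.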
and Canadian truck production should closely reflect the number of trucks produced, domestically or abroad, for operation in the United States because the total Canadian truck production was about one-sixth the size of total U.S. and Canada production, about three-quarters of the total Canadian production were exported to the United States, and about 86 percent of the Class-8 trucks produced in the United States are for domestic consumption (calculations based on Ward’s data on U.S. and Canadian production from 1998 through the first half of 2003). We made this adjustment in order to be consistent with BEA’s inflation adjustment for GDP at 1996 price levels. We also used GDP less consumption expenditures on services and ATA’s truck tonnage index as alternative measures for GDP. These two indicators are more closely related to truck production than GDP. Table 6 shows descriptive statistics of the three key variables used in the estimation. Table 7 presents the results of our analysis using total quarterly Class-8 truck production in the United States and Canada as the dependent variable. In addition, we performed various analyses using alternative combinations and definitions of variables to test whether our analysis results are sensitive to the choices of variables. The low adjusted R² statistic for model (1) suggests that much of the variation of Q is not explained by the included variables. In addition, the Durbin-Watson (DW) statistic of 0.352, which is less than the critical value of 1.019 for a sample size of 45 with 7 explanatory variables at the 1 percent significance level, suggests a strong positive autocorrelation of residuals between the current and previous periods. We controlled for the possible autocorrelation, suggested by the low DW statistic in model (1), by modeling the error term as a first-order autoregressive process, AR(1), in model (2). As shown in table 7, the adjusted R² improves. More importantly, only the coefficient of CD and the constant term are statistically significant, suggesting an increase of 20,198 Class-8 trucks (the quarterly coefficient of 10,099 multiplied by 2) in the 6 months prior to EPA’s consent decrees.
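The Durbin-Watson statistics cited in this discussion can be computed directly from a model's residuals. A minimal sketch, using contrived residual series rather than the report's data, shows the statistic's 0-to-4 range:

```python
import numpy as np

def durbin_watson(resid):
    """Durbin-Watson statistic: the sum of squared first differences of
    the residuals divided by their sum of squares. Values near 2 indicate
    no first-order autocorrelation; values well below 2 (like the 0.352
    for model (1)) indicate positive autocorrelation."""
    resid = np.asarray(resid, dtype=float)
    d = np.diff(resid)
    return float(np.sum(d * d) / np.sum(resid * resid))

# Extreme, deterministic cases bracket the statistic's range:
dw_pos = durbin_watson(np.ones(40))               # identical residuals -> 0.0
dw_neg = durbin_watson(np.tile([1.0, -1.0], 20))  # alternating residuals -> near 4
```

In practice the residuals come from the fitted regression, and the computed value is compared against tabulated critical bounds, as the report does with the 1.019 critical value.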
This increase in truck production may be associated with the decrees. For example, we substituted GDP with two other measures: GDP less consumption expenditures on services, and ATA’s tonnage index. In some analyses, we used annualized percentage change in diesel prices instead of diesel fuel prices at 1996 dollars. In addition, we experimented with different time lags. The results produced using model specifications (2) through (4) with these alternative estimates consistently showed a significant increase in truck production associated with EPA’s consent decrees. For example, when we re-estimated models (1) through (4) using GDP less expenditures on services, the coefficients of CD in models (1) through (4) are –5926.10, 10080.32, 11954.69, and 9381.81, respectively. The above coefficients for models (2) through (4) are statistically significant at the 5 percent level. For model (3), we added truck production in the previous period, Qt-1, to model (2) to account for the effects of truck inventories. As a result, CD’s coefficient increases. The coefficient of ∆GDP increases appreciably and becomes statistically significant. The coefficient of DP changes little and also becomes statistically significant. The coefficient of Qt-1 is positive and statistically significant, suggesting that an increase in truck production in the previous period is likely to be followed by an increase in production in the current period. The high adjusted R² statistic of 0.933 also suggests that much of the variation of Q is explained by the included variables. The DW statistic of 1.945 unambiguously suggests that including truck production in the previous period can adequately account for the autocorrelation in the error terms. In model (4), we added ∆GDP and DP of previous periods to model (2) because they also may be good indicators of truck production in the current period.
Compared to model (3), including the additional lagged variables in model (2) does not enable us to explain more of the variation in truck production, as suggested by a decreasing adjusted R². Finally, because our analysis focuses on truck production rather than truck operators’ broader business strategies, it does not assess the full extent of the effects of EPA’s consent decrees on truck operators’ business operations. The following are GAO’s comments on the Environmental Protection Agency’s letter dated February 24, 2004. As a preface to addressing EPA’s specific comments on this report below, GAO wants to reiterate that it recognizes how critical the anticipated emissions reductions are for many states whose air quality is in trouble, how critical it is for the 2007 standards to succeed in order to significantly reduce emissions and protect public health, and all of the work and investment the agency and the engine, emissions control, and fuel industries have made. These critically important objectives, however, depend to a large extent on trucking companies’ decisions to buy and run the improved engines. In our view, EPA has an important window of opportunity to make some improvements in the process it is using to work with stakeholders to both ensure technology is ready and allay any remaining stakeholder concerns about the new engines and fuel. Addressing concerns about whether fuel will be available in sufficient quantities and locations and the new engines will be ready in time to test should not be overly burdensome and will help to prevent a significant pre-buy of older engines before 2007 that would delay emissions and health benefits as occurred in 2002. 1. EPA agrees that, in many respects, GAO’s report is consistent with the agency’s assessment of the situation leading up to the implementation of the 2007 standards. However, we do not agree with EPA’s assertion that we gave disproportionate weight and consideration to the views of the trucking industry, which conflict with the agency’s assessment, for the following reasons.
First, we carefully and consistently collected the views of all stakeholders—engine and emissions control manufacturers, trucking companies, fuel industry representatives, and environmental and health groups—and were equally careful to accurately present and assess their views. Consistent with our methodology and quality assurance standards, we also did not simply accept stakeholders’ views at face value, but did where possible assess the basis for their views. For example, we determined that the trucking company representatives’ concerns about the reliability and costs of the new engines were based on the technological leap required to meet the 2007 standards; that EPA’s estimates of the new engines’ costs were developed in 2000 before engine designs were developed; and that some of the engine manufacturers and fuel industry representatives designing the technologies acknowledged that there were remaining technological risks and questions. We also carefully point out that we were unable to fully confirm some of the views and opinions of stakeholders because the industries designing new engine and fuel technology were not comfortable in releasing information about their individual designs. In addition, we reviewed reports EPA issued on the progress towards the standards, but the reports primarily represented EPA’s conclusions and did not present the specific data on which these were based. 2. We also disagree with EPA’s assertion that we did not consider additional information and evidence that agency program managers provided to GAO late in the course of our work after reviewing a draft summary of the facts to be used in the report. Throughout our review, we worked closely with the EPA program managers responsible for the 2007 standards to ensure that we clearly understood the issues and EPA’s positions, and had the most current information. 
In addition, to ensure the accuracy of the information in our report, at the conclusion of our work, we provided EPA program managers a summary of the factual information supporting our findings for their review. At that time, EPA provided extensive written comments on the summary, along with a number of press releases from engine manufacturers and trade press articles. However, the agency did not provide any additional quantitative data or other information that would allow us to better assess the stakeholders’ positions. We spent considerable time carefully assessing EPA’s comments and the additional information and made a number of changes to the report where appropriate. Furthermore, we extended our report time frame by 6 weeks to give EPA extra time to provide its comments and supporting information and for us to carefully assess it and respond accordingly. 3. We also disagree with several EPA assertions that the report has an overall negative tone and overstates the technological challenges to successfully deliver the necessary fuel and engines, does not clearly state engine manufacturers’ commitments to have test engines ready in time, and accepts at face value the trucking representatives’ position that having test vehicles by mid-2005 is a critical deadline. We devote considerable narrative to the views of the agency and all the stakeholders who share these views that the technologies—for both cleaner engines and low-sulfur fuel—are on track. In addition, though, we have a professional responsibility to acknowledge that some stakeholders—including some engine manufacturers and fuel distribution and trucking industry representatives—expressed concerns over the remaining technological risks and questions. As such, we accurately describe these challenges and the concerns they create. For example, EPA asserts that the report projects a negative tone with regard to the progress of the oil industry in preparing to supply low-sulfur fuel for 2007. 
However, we report that the fuel industry stakeholders we contacted identified a number of remaining issues that need to be resolved, none of which they considered to be insurmountable. We reviewed EPA’s summary of pre-compliance reports detailing refiners’ plans to produce low-sulfur fuel and agree that the refiners’ ability to produce the fuel does not appear to be an area of concern. However, these reports do not address the primary concerns that industry representatives raised, which relate to distribution challenges. As we make clear in the report, without trying to further alleviate these and other stakeholder concerns, the agency may not achieve its emissions and public health goals with the 2007 standards. We also took great care to include the most current information possible in our report. For example, in January 2004, we updated our report to reflect that engine companies had finally publicly announced the technologies they would use to meet the 2007 standards, although 6 months later than planned. In addition, after several of the manufacturers subsequently issued press releases in January and February 2004, stating that they expected to have at least a limited number of prototype engines ready for testing by mid-2005, we re-contacted the trucking company representatives to determine the extent to which these announcements addressed their concerns, and updated the report accordingly. Additionally, GAO does report the trucking representatives’ position that they need to have prototypes by about mid-2005, as well as the basis for their position, which is to (a) determine engine reliability in all seasons and weather conditions and for long enough periods to determine the resulting operating and maintenance impacts, and (b) subsequently develop their acquisition strategies based on this information. These arguments seem plausible.
However, more importantly, we report their position because some of the representatives said that without enough testing time, companies were already considering whether to pre-buy older engines before 2007, in larger quantities than they did for 2002, further jeopardizing emissions and health benefits. We believe that this is the important concern EPA needed to be aware of and try to mitigate. We did not attempt to confirm the validity of the 18-24 month testing time frame that representatives said they needed for the 2007 standards against the industry’s historical time frames for testing upgraded engines. In part, this was because the engine designs for 2007 are a technological leap over current equipment and may require longer lead times to develop; similarly, they may need longer lead times for testing. 4. We agree with EPA’s concern about clearly distinguishing the 2002 consent decrees and 2007 standards, and made changes to the report as a result. The engine requirements established in the consent decrees were imposed as part of a legal settlement in response to an enforcement action, not through a public rulemaking process where all stakeholders had input into establishing the requirements. In addition, the engine companies had a relatively small amount of lead time to design the new engines because as part of the settlement, manufacturers agreed to accelerate the schedule for new engines by 15 months. In contrast, the 2007 standards were developed through a more extensive public rulemaking process with wide participation from all stakeholders, and manufacturers and fuel refiners had about 6 years lead time to develop the needed emissions control, engine, and fuel technologies. We disagree with EPA, however, that these two actions are not comparable in any respect.
Whether new engines are being designed in response to an enforcement action or rulemaking, the industry’s market reaction to the consent decrees may offer some lessons learned that EPA could incorporate into its process for implementing the 2007 standards. 5. We agree that EPA deserves credit for the large number of voluntary activities it has undertaken to work with various stakeholders to help ensure that the technology will be ready in time and devote considerable narrative to describing these activities in the report. We were also very careful to present a balanced view of the stakeholders’ opinions about the agency’s activities. For this reason, we were obligated to acknowledge that some of the engine manufacturers and trucking representatives raised questions about the agency’s openness to their concerns and willingness to address them. EPA maintains that the agency had extensive involvement with stakeholders—including the trucking industry—in developing the 2007 rule. This is true. However, the trucking industry’s concerns are not with the 2000 rulemaking process, but with the process EPA has used since then to involve stakeholders in implementing the standards. For example, as we note in the report, EPA officials acknowledged that the agency initially did not invite anyone from the trucking industry to participate on the 2002 Clean Diesel Independent Review Panel and only did so after the industry lobbied the agency. 6. As to GAO’s recommendations, EPA agrees with the merits of providing financial incentives—although the agency does not see a role for itself in this action—and disagrees with the merits of convening an independent panel. We want to clarify that GAO is recommending that the agency consider additional steps to alleviate existing concerns, avoid a significant pre-buy of older engines, and better guarantee that the emissions and health benefits are achieved. 
We therefore offer several alternative actions for the agency to consider, but do not intend to limit the agency in any way to these alternatives or suggest that they are the only effective means to resolve concerns. That said, with regard to the suggestion of using financial incentives, we recognize that the Congress must provide the agency direction and funding for such an approach, but expect that it would also look to the agency to play a role, such as submitting a proposal for incentives or at least helping to determine their merits and costs. As to convening an independent review panel, we appreciate EPA’s concerns that this could unnecessarily delay the schedule for implementing the standards, and the agency is in the best position to determine this. But, if EPA has the necessary evidence available to demonstrate technologies are ready as it contends it does, it should not be difficult or take considerable time for an independent body to review the data and validate this conclusion for all affected stakeholders. Otherwise, if the trucking industry remains concerned and pre-buys older engines prior to 2007, this will in effect delay implementation of the standards and their anticipated benefits. In addition to the individuals named above, Charles W. Bausell, Jr., Tyra DiPalma-Vigil, Richard Frankel, Terence Lam, and Eugene Wisnoski made key contributions to this report. Important contributions were also made by Nancy Crothers and Amy Webbink.
Diesel engine emissions pose health risks, but one major source--heavy-duty diesel vehicles--is critical for our economy. To reduce risks, the Environmental Protection Agency (EPA) has set stringent emissions standards for diesel engines. In 1998, EPA found that some engine makers were violating standards, so they agreed to build engines that meet 2004 standards early, by October 2002. EPA has set even more stringent standards for 2007. GAO was asked to (1) assess the October 2002 deadline's effects on industry and emissions, and (2) obtain stakeholders' views on the readiness of technology for the 2007 standards and EPA's efforts to ensure this. GAO analyzed information from EPA, 10 large trucking companies, the engine makers subject to the early deadline, and other stakeholders. Implementing the 2004 diesel emissions standards 15 months early disrupted some industries' operations but also helped reduce pollution earlier. More specifically, because some manufacturers had to build new engines sooner than planned, most could not provide trucking companies with prototype engines early enough to test. Concerned that the new engines would be costly and unreliable, some of the companies said they bought more trucks with old engines than planned before October 2002. Our analysis of truck production and financial data also shows this surge. This adversely affected some companies' operations and profits. To meet the increased demand for trucks with old engines, some manufacturers reported that they ramped up production of such engines before October. But when demand subsequently dropped, they had to decrease production and release workers, reducing profits and disrupting operations, at least until demand increased later in 2003. Manufacturers of the new engines also continued to lose market share to manufacturers that either did not have to meet the early date, or that did but chose not to, paying penalties instead. 
While accelerating the schedule for new engines affected some industries, it accelerated emissions benefits, although not to the extent or in the time frames anticipated. For example, EPA roughly estimated that its agreements with engine manufacturers that violated standards would reduce nitrogen oxide emissions by about 4 million tons over the life of the engines. But because companies initially bought more trucks with old engines and owners are now operating trucks longer, some of the expected emissions reductions will be delayed. As for the 2007 standards, EPA has taken a number of steps to aid the transition to the new diesel engines and fuel, but some stakeholders would like more help. Most engine, emissions control, and fuel industry representatives said the needed technologies will be ready on time; but other engine, trucking, and fuel representatives have concerns and would like more help to ensure that the technology will be available. For example, manufacturers plan to have limited numbers of prototype engines ready for a few fleets to test by mid- to late-2005--trucking companies say they need new engines 18 to 24 months before the 2007 deadline to test the engines in all weather conditions and to develop their long-term purchasing plans. Some companies, however, are concerned that providing test engines to only a few fleets may not provide the industry as a whole with sufficient information to judge the engines' performance. In addition, they are still concerned that the new engines may be too costly and much less fuel-efficient. As a result, they expect companies will again buy more trucks with old engines before the deadline, disrupting industry operations and emissions benefits. The fuel industry representatives said they can produce the low-sulfur fuel the new engines require on time and see no reason to delay the standards.
Nevertheless, they worry that the fuel may not initially be available nationwide and that it may be difficult to keep it from being contaminated by other fuels in the distribution system. Environmental and health groups do not want to delay the standards or the expected emissions benefits. Some stakeholders would like more information on technological progress. In addition, they would like more reassurance--such as from an independent review panel--that the technology will be ready on time and additional assistance--such as economic incentives--to encourage timely purchases of trucks with the new technologies.
To review the OIG’s audit oversight coverage of NASA, we obtained the 71 final reports from the Office of Audits as reported in the OIG’s semiannual reports to the Congress for fiscal years 2006 and 2007, which included 46 audits with statements of compliance with Government Auditing Standards and 25 reports without reference to compliance with auditing standards. For purposes of our review, we considered only those reports that stated compliance with Government Auditing Standards as audit reports and refer to the reports without such statements as nonaudit reports. We compared the contents of the 46 audit reports with the high-risk areas designated by us and with the management challenges identified by the NASA OIG to determine the audit coverage of these areas. We also analyzed the nature and scope of all 71 final reports and the resulting recommendations to determine the extent to which they addressed compliance with laws, regulations, and NASA policies and procedures; economy and efficiency; or the effectiveness of NASA’s programs and operations. To review the investigative coverage, we used the identification of closed cases reported by the OIG in semiannual reports for fiscal years 2006 and 2007. We also obtained the OIG’s strategic and annual audit plans covering the same 2-year period to determine if they contained goals and objectives to provide audit coverage of NASA’s program compliance with laws and regulations and program economy, efficiency, and effectiveness. We identified monetary and other audit and investigative accomplishments reported by the NASA OIG in semiannual reports to the Congress for fiscal years 2003 through 2007 in order to observe any long-term trends.
We limited our review of the NASA OIG’s accomplishments to the results of audits and investigations reported to the Congress for this period and did not audit or otherwise verify the dollar amounts of the monetary accomplishments or potential savings to the government reported by the NASA OIG. We also obtained the semiannual reports issued by all 30 IGs appointed by the President and confirmed by the Senate to obtain the monetary accomplishments reported by those IGs during fiscal year 2007. We obtained the total budgetary resources of each OIG for fiscal year 2007 from the Office of Management and Budget (OMB) and compared the reported monetary accomplishments with budgetary resources to obtain a return on investment for each IG office. We obtained the total budgetary resources at the NASA OIG and the agency for fiscal years 2003 through 2007 from OMB in order to observe any long-range budgetary trends. We obtained additional information on staffing levels, resource distribution, and attrition rates from the OIG to identify staffing trends over this period. The attrition rates for NASA overall were verified by NASA management officials. We compared the total budgetary resources for fiscal year 2007 of the 30 IGs appointed by the President and confirmed by the Senate with the total budgetary resources of their respective agencies for the same year. We calculated a ratio for each OIG’s budget information as a percentage of its respective agency’s budget for comparative purposes. We obtained reports from the external reviews of the NASA OIG completed during fiscal years 2003 through 2007 to observe any long-term trends in OIG quality for both audits and investigations. Specifically, we obtained the audit peer review report of audit quality dated January 8, 2004, completed by the Department of Justice OIG, and the March 13, 2007, peer review report completed by the General Services Administration OIG. 
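The two comparisons described above, monetary accomplishments against OIG budgetary resources and OIG budget against agency budget, reduce to simple ratios. A hypothetical sketch (the OIG names and dollar figures are invented for illustration, not the actual fiscal year 2007 amounts from OMB or the semiannual reports):

```python
# Hypothetical figures, in millions of dollars.
oigs = {
    "OIG A": {"accomplishments": 120.0, "oig_budget": 30.0, "agency_budget": 15000.0},
    "OIG B": {"accomplishments": 45.0,  "oig_budget": 15.0, "agency_budget": 3000.0},
}

# Return on investment: reported monetary accomplishments per budget dollar
roi = {name: v["accomplishments"] / v["oig_budget"] for name, v in oigs.items()}

# Each OIG's budget as a percentage of its agency's total budgetary resources
budget_pct = {name: 100.0 * v["oig_budget"] / v["agency_budget"]
              for name, v in oigs.items()}
```

Under these invented figures, OIG A returns $4 in reported accomplishments per budget dollar while consuming 0.2 percent of its agency's budget, which is the kind of comparison the review draws across the 30 presidentially appointed IGs.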
We also obtained the July 8, 2005, peer review report of the NASA OIG’s investigative quality completed by the Department of Transportation OIG. In addition, we obtained the report of investigation completed by the Integrity Committee of PCIE and ECIE, which addressed allegations of the NASA IG’s misconduct and appearance of a lack of independence. This investigative report was released in late March 2007 to the House Committee on Science and Technology, which has oversight responsibilities for scientific research and development at NASA and other nondefense agencies. We discussed the disposition of the investigation with the Integrity Committee. We met with the NASA IG and senior OIG staff at the beginning of our review regarding our scope and methodology. We conducted a series of interviews coordinated through the IG’s Executive Officer which included the Deputy Inspector General, the Counsel to the IG, the Assistant IG for Audits, the Assistant IG for Investigations, and the Assistant IG for Management and Planning. At the completion of our work we met with the NASA IG and senior OIG staff to discuss our report findings, conclusions, and recommendations. We conducted this performance audit from November 2007 through December 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. NASA was established by the National Aeronautics and Space Act of 1958 to provide research into problems of flight within and outside Earth’s atmosphere and to ensure that the United States conducts activities in space devoted to peaceful purposes for the benefit of mankind. 
On January 14, 2004, the President announced a new vision for space exploration endorsed by the Congress in the NASA Authorization Act of 2005 which includes a journey of exploring the solar system, returning astronauts to the moon in the next decade, and venturing to Mars and beyond. NASA comprises the Headquarters in Washington, D.C., nine field centers located throughout the country, and the Jet Propulsion Laboratory (JPL) operated for NASA by the California Institute of Technology. The NASA centers and JPL conduct NASA’s programs in exploration, discovery, and research and are led by four mission directorates at NASA Headquarters. (See table 1.) The NASA OIG was established by the IG Act to provide an independent office within NASA to conduct and supervise audits and investigations; provide leadership and coordination and recommend policies to promote economy, efficiency, and effectiveness; and prevent and detect waste, fraud, abuse, and mismanagement. The IG Act provides protections to IGs’ organizational independence through key provisions that require specified IGs, including the NASA IG, to be appointed by the President with the advice and consent of the Senate. This appointment is required to be without regard to political affiliation and is to be based solely on an assessment of the candidate’s integrity and demonstrated ability. Such presidentially appointed IGs can only be removed from office by the President who must communicate the reasons for removal to both houses of the Congress. The current NASA IG was appointed by the President on April 16, 2002, after Senate confirmation. In addition to the IG, the Deputy IG, and the Executive Officer, the OIG is organized into four offices to provide oversight of NASA, as shown in table 2. 
As a presidentially appointed IG, the NASA IG is a member of the PCIE, which, together with the ECIE, operates a joint Integrity Committee that is empowered to investigate allegations of wrongdoing against IGs and certain members of their staff. The Inspector General Reform Act of 2008, enacted on October 14, 2008, authorizes a new statutory Council of the Inspectors General on Integrity and Efficiency, which is to have its own Integrity Committee with powers similar to the PCIE and ECIE Integrity Committee, and disestablishes the PCIE and ECIE effective on the earlier of the creation of the new Council or 180 days after the passage of the Act. As of the date of this report the new Council has not yet been established, and the PCIE, ECIE, and their Integrity Committee continue operation. Since 1990, we have periodically reported on government operations that we have designated as high risk because of their greater vulnerabilities to fraud, waste, abuse, and mismanagement as well as challenges to economy, efficiency, or effectiveness. In January 2007, we identified 27 high-risk areas across the federal government. These included high-risk areas applicable to NASA that had been reported in prior high-risk reports. We specifically identified NASA’s contract management as a high-risk area because of weaknesses in NASA’s integrated financial management system. For example, we have reported that NASA’s contractor cost reporting process does not provide cost information that program managers and cost estimators need to develop credible estimates and compare budgeted and actual cost with the work performed. Also, NASA has lacked a modern financial management system to provide accurate and reliable information on contract spending and placed little emphasis on product performance, cost controls, and program outcomes.
On a governmentwide basis, we also identified protecting the federal government’s information systems and strategic human capital management across the executive branch as high-risk areas. Beginning in 1997, the IGs were asked by congressional leaders to identify the 10 most serious management problems in their respective agencies. The request began a yearly process that continues in response to requirements established in the Reports Consolidation Act of 2000. This act calls for executive agencies, including NASA, to report their IGs’ lists of significant management challenges in their annual performance and accountability reports to the President, OMB, and the Congress. In fiscal years 2006 and 2007, the NASA OIG identified management challenges that included areas also identified in our high-risk reports and in the additional areas of financial management, space operations and exploration, and safety and security. The OIG has identified NASA’s Integrated Enterprise Management Program as key to improving NASA’s ability to provide reliable information to management, support compliance with federal requirements, and strengthen the internal control program to address continuing problems, such as NASA’s internal control weaknesses over property, plant, and equipment and materials. Regarding space operations and exploration, the OIG has identified the transition from the space shuttle to the next generation of space vehicles as a management challenge as NASA balances schedule and resource constraints while maintaining the capabilities required to fly the space shuttle and simultaneously developing the next generation of space vehicles. In the area of safety and security, the OIG has identified as a management challenge NASA’s need to manage risk, safety, and mission assurance controls to ensure reliable operations in the context of aggressive launch and mission schedules, funding limitations, and future uncertainties.
The IG Act requires independent IG offices to provide leadership on issues of economy and efficiency and to report on the effectiveness of programs, offices, and activities within their respective agencies. The NASA OIG’s Office of Audits provides financial and performance audits and other reviews to examine NASA’s operations. The NASA OIG has conducted audit activity in most high-risk areas identified by us and in the management challenges identified by the OIG for fiscal years 2006 and 2007. In addition to audits, the NASA OIG reported closing 153 investigative cases during fiscal years 2006 and 2007 in response to allegations of fraud, waste, and abuse. In providing audit coverage, however, the NASA OIG has generally not focused on audits with recommendations for improving the economy and efficiency of NASA’s programs and operations with potential monetary savings. For example, during fiscal years 2006 and 2007 the OIG had only one audit with recommendations that offered potential monetary savings. During the 5-year period of fiscal years 2003 through 2007, 99 percent of NASA OIG’s dollar accomplishments came from investigations, with 88 percent coming from two joint investigations with other OIGs. The remaining 1 percent of the monetary accomplishments reported by the NASA OIG during this 5-year period was from audits. We believe that improvements to the OIG’s strategic and annual audit planning could help ensure that audits with an emphasis on NASA’s economy and efficiency through potential cost savings are included in its annual audit activities. Over fiscal years 2006 and 2007, the NASA OIG’s Office of Audits reported having completed 71 reports. Of these, the NASA OIG issued 13 audit reports in fiscal year 2006 and 20 audit reports in fiscal year 2007 on high-risk areas identified by us and on NASA’s management challenges identified by the OIG. 
As shown in table 3, multiple NASA OIG audit reports were completed in most of the areas designated as high risk and as management challenges, with the exception of asset management and human capital. Most of the OIG’s reports were in the areas of information technology security, contract management, and financial management. In contrast, the area of asset management had one report, and there were no audits of human capital issues, even though both areas are among GAO high-risk areas and NASA’s management challenges. (The OIG is currently auditing an issue of asset management and has plans to address an issue of NASA’s human capital.) In addition, the NASA OIG’s audit reports also addressed areas not identified as high-risk areas or management challenges. These included quality control reviews of the audits of federal award recipients by nonfederal auditors to ensure that these audits are performed in compliance with government auditing standards. In addition, while the OIG’s audit policy is to complete audits using Government Auditing Standards and the IG Act requires that all NASA OIG audits be completed using these standards, 25 of the 71 reports completed by the NASA OIG Office of Audits, or approximately 35 percent, were completed without using these standards. Those reports included transmittal letters and information without a statement of compliance with auditing standards. Consequently, we did not consider these reports as part of the OIG’s audit coverage for high-risk areas and management challenges. In addition to audits, the NASA OIG reported closing 153 investigative cases during fiscal years 2006 and 2007 in response to allegations of fraud, waste, and abuse. The OIG’s Office of Investigations investigates allegations of crime, cybercrime, fraud, waste, abuse, and misconduct that could affect NASA’s programs, projects, operations, and resources. 
The Office of Investigations refers its findings either to the Department of Justice for criminal prosecution and civil litigation or to NASA management for administrative action. In addition, the Office of Investigations identifies crime indicators and recommends measures for NASA management that are designed to reduce NASA’s vulnerability to criminal activity. The OIG’s closed cases focused on NASA procurements or procurement activities and investigations of computer crimes. (See fig. 1.) In addition, there were investigations of conflicts of interest, large-scale thefts of government property, and false statements. Other investigations included safety, state and local crimes, travel card fraud, and drug abuse. Statutory OIGs subject to the IG Act, including the NASA OIG, are required to report the monetary value of certain findings and recommendations in their semiannual reports provided by the OIGs through their agency heads to the Congress. As required, the NASA OIG’s semiannual reports for fiscal years 2003 through 2007 included the number of audit reports issued and the questioned costs, unsupported costs, and funds to be put to better use identified by the OIG’s audits. As defined by the IG Act, questioned costs include either alleged violations of laws, regulations, contracts, grants, or agreements; costs not supported by adequate documentation; or the expenditure of funds for an intended purpose that was unnecessary or unreasonable. In addition, unsupported costs are defined as costs that do not have adequate documentation, and funds to be put to better use are inefficiencies identified by the OIG in the use of agency funds. These are often potential savings to the government. The monetary accomplishments of the NASA OIG’s Office of Investigations are largely from closed investigations that result in recoveries of federal dollars which include fines and court ordered restitutions regarding individuals and contractors who have defrauded the government. 
As shown in table 4, almost all of the NASA OIG’s monetary accomplishments came from investigations during fiscal years 2003 through 2007. In fiscal year 2006 the OIG reported the results of a joint investigation with the Department of Defense and Department of Justice OIGs that had total recoveries of $615 million from a settlement with the Boeing Company regarding criminal and civil allegations. Also, in fiscal year 2003 the OIG reported another joint investigation with recoveries of about $111 million. The results of these two investigations alone account for 88 percent of the NASA OIG’s reported total monetary accomplishments of over $824 million from both audits and investigations over fiscal years 2003 through 2007. The total monetary accomplishments from OIG investigations for this period were $815 million, or 99 percent of all reported OIG monetary accomplishments. In contrast, over the same 5-year period the OIG’s potential audit savings contributed about $9 million, or about 1 percent of the OIG’s total reported 5-year monetary accomplishments, with one audit in fiscal year 2007 responsible for $7 million of this amount and another audit in fiscal year 2004 responsible for about $1.5 million. Therefore, approximately 94 percent of all NASA OIG audit monetary accomplishments reported over the 5-year period came from the results of two audits. In addition, during the 1-1/2-year period from April 1, 2004, through September 30, 2005, the OIG reported no monetary accomplishments from its audit activity. A comparison of the OIG’s fiscal year 2007 total budgetary resources of $34 million to its reported combined monetary accomplishments for that year results in a return of $0.36 for each budget dollar. 
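The $0.36 figure above is a simple ratio of reported monetary accomplishments to total budgetary resources. A minimal sketch of the calculation follows; the fiscal year 2007 accomplishments amount is back-calculated from the reported $0.36 ratio and $34 million budget, so it is an approximation rather than a figure taken from the OIG's semiannual reports.

```python
# Sketch of the return-per-budget-dollar calculation described above.
# The accomplishments figure is back-calculated from the reported
# $0.36 ratio, so treat it as an approximation.

def return_per_dollar(monetary_accomplishments, budgetary_resources):
    """Dollars of reported accomplishments per budget dollar spent."""
    return monetary_accomplishments / budgetary_resources

# NASA OIG, fiscal year 2007: roughly $12.2 million in combined
# accomplishments against $34 million in total budgetary resources.
nasa_fy2007 = return_per_dollar(12.24e6, 34e6)
print(f"${nasa_fy2007:.2f} per budget dollar")  # $0.36 per budget dollar
```

By the same calculation, the governmentwide average of $9.49 per dollar discussed below works out to roughly 26 times this figure.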
When this same calculation is made based on the monetary accomplishments reported by all 30 OIGs with IGs appointed by the President and confirmed by the Senate, the overall average return on their total budgetary resources in fiscal year 2007 was $9.49 for every dollar spent by the government for their offices, or almost 26 times that of the NASA OIG for fiscal year 2007. In addition, when compared to these other OIGs, in the year that the NASA OIG had its largest monetary accomplishment from audits, it ranked 27th of the 28 OIG offices reporting monetary accomplishments for fiscal year 2007. (See app. I.) Of the 71 reports completed by the NASA OIG’s Office of Audits over fiscal years 2006 and 2007, 70 did not include recommendations that address the economy and efficiency of NASA’s programs and operations with potential cost savings. The one exception was an OIG audit that addressed an area of NASA’s economy and efficiency and resulted in about $7 million in reported potential monetary savings. The remaining 70 reports included recommendations for improving compliance with laws, regulations, and NASA policies and procedures; improving internal controls; and addressing specific areas of NASA’s operations. Nevertheless, these recommendations did not provide measurable improvements to the costs and resources used to achieve program results. To illustrate, in fiscal year 2006 the NASA OIG audited the awards of subcontracts worth $4.6 billion for NASA’s space flight operations. The OIG found that the primary government contractor’s actions had complied with requirements for competition, quality assurance, and other procurement regulations, but also found examples of inadequate pricing determinations. 
The report recommended that the NASA contracting officer ensure compliance with contract agreements and procurement regulations but did not include recommendations to help ensure that this area will be effective or efficient in the future and did not identify any measurable cost savings to the government resulting from inadequate pricing. In addition, over the 2-year period we reviewed there were no OIG audits with recommendations to increase the economy and efficiency of NASA’s space flight operations with identified cost savings even though the IG had identified this program as one of NASA’s management challenges. The OIG’s annual audit plan addresses NASA’s programs in high-risk areas and management challenges but does not have a strategy to deal with economy and efficiency associated with these NASA programs. The OIG’s strategic plan and annual audit plans do not identify goals and audit objectives related to evaluating NASA’s programs and operations through economy and efficiency audits. The OIG’s annual audit plans for fiscal years 2006 and 2007 provided details on the objectives of each individual audit; however, similar to the results that we found for the OIG’s audits, the objectives of the audits in these plans were not directed at audits that might result in measurable cost savings. A subsequent revision of the fiscal year 2007 audit plan also had no specific objectives for addressing NASA’s economy and efficiency. In addition, through limited scope audits of compliance as well as investigations, the OIG addresses allegations received. To illustrate, OIG auditors and investigators are often assigned reviews of allegations or other assignments received from the OIG’s Senior Staff Referral Review Committee (SSRRC). The SSRRC was established by the NASA IG in the fall of 2005 to act as a clearinghouse for allegations and to review all work planned for OIG staff. 
The SSRRC is composed of the Assistant IG for Investigations, the Deputy Assistant IG for Audits, the OIG Counsel, and the IG’s Executive Officer. The SSRRC meets once a week to coordinate audit and investigative assignments, review fraud hotline information, review letters with allegations, and decide on where to assign the work. Generally, if the issues involve wrongdoing by NASA employees or contract fraud the OIG investigators will handle the cases. The OIG auditors are generally assigned limited scope procurement issues and issues that involve violations of NASA regulations. Issues involving standards of conduct or personnel matters will generally be referred to NASA management. The NASA OIG’s limited monetary accomplishments from its audit activity can be attributed to (1) the lack of emphasis in its annual audit plan on goals and objectives for areas to improve economy and efficiency of NASA’s programs and operations and (2) the OIG’s focus on reviews of allegations and limited scope issues in a reactive approach to audit planning through assignments from the SSRRC, which can encroach on the ability to assign staff needed for other performance audits that can address potential dollar savings. We believe that the OIG can improve its audit plans by providing more specific attention to performance audits that address the economy and efficiency of NASA’s programs and operations, and that the OIG should consult with an objective, knowledgeable outside party with experience in these types of audits when completing these plans. From fiscal year 2003 through fiscal year 2007, the NASA OIG’s total budgetary resources increased by approximately 17 percent, from approximately $29 million to $34 million in constant dollars, while the FTEs increased 4 percent, from 191 to 199. 
Of the 199 FTEs at the end of fiscal year 2007, 47 percent were in the Office of Audits, 37 percent in the Office of Investigations, 10 percent in the Office of Management and Planning, and 4 percent and 2 percent, respectively, for the Counsel to the IG and the IG’s immediate office. (See fig. 2.) A comparison of NASA OIG’s total budgetary resources with NASA’s total budgetary resources shows that the OIG’s budget as a percentage of NASA’s budget has increased. In addition, NASA OIG’s staffing levels have increased while NASA’s staffing level has decreased. During fiscal years 2003 through 2007, NASA’s overall total budgetary resources increased by about 4 percent, compared with the OIG’s budgetary resources, which increased by about 17 percent. Therefore, the NASA OIG’s total budgetary resources as a percentage of NASA’s total budgetary resources increased from 0.15 percent to 0.17 percent. (See table 5.) During that same period, NASA’s FTEs decreased by approximately 2.7 percent, compared with the OIG’s FTE increase of about 4 percent. When NASA OIG’s budget-to-agency-budget ratio is compared to this same ratio for other OIGs in which the IG is appointed by the President with Senate confirmation, the percentages vary depending on the size of the federal agencies, their missions, and the oversight issues emphasized by each OIG. Such a comparison for fiscal year 2007 budgets indicates that the ratio of the NASA OIG’s total budgetary resources to the total budgetary resources for NASA was within the range of these percentages for other OIGs and their agencies. Specifically, the comparison of these other OIGs’ budgets with those of their agencies ranged from 0.005 percent to 1.10 percent, and the NASA OIG’s percentage of NASA resources was at 0.17 percent, which ranks 11th of these 30 agencies. (See app. II.) Regarding staffing levels, we obtained the attrition rates for the NASA OIG for fiscal years 2003 through 2007. 
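The budget-ratio comparison above is the same kind of calculation applied to agency totals. A minimal sketch follows; the $20 billion NASA total is an illustrative round number implied by the reported 0.17 percent ratio, not a figure taken from table 5.

```python
# Sketch of the OIG-budget-to-agency-budget ratio discussed above.
# The $20 billion NASA total is an illustrative round number implied
# by the reported 0.17 percent ratio, not a figure from table 5.

def budget_ratio_percent(oig_budget, agency_budget):
    """OIG budget as a percentage of its agency's budget."""
    return 100 * oig_budget / agency_budget

ratio = budget_ratio_percent(34e6, 20e9)  # FY2007 NASA OIG vs. NASA
print(f"{ratio:.2f} percent")  # 0.17 percent
```

The same function applied across the other 30 OIGs would yield the 0.005 percent to 1.10 percent range cited above.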
Attrition is the percentage of personnel losses for all reasons during the fiscal year, and is measured by comparing personnel losses during the year to the total personnel strength on board at the beginning of the year. The staff attrition rate for NASA OIG has increased over the 5-year period from 12.4 percent in 2003 to 19.9 percent in 2007. Specifically, the NASA OIG had losses of 24 personnel in fiscal year 2003 compared to a loss of 40 personnel in fiscal year 2007, an increase of approximately 67 percent. (See table 6.) As a comparison, the overall attrition rate for NASA was about 5 percent in both fiscal years 2006 and 2007. From fiscal years 2003 through 2007, the NASA OIG lost 157 staff. These losses affect the ability of the OIG to maintain experienced audit personnel. To illustrate this effect on the Office of Audits, we compared the audit staff on board in January 2003, shortly after the current IG took office, to the audit staff on board in March 2008. Of the 78 auditors on board in January 2003, 42 auditors have left the OIG audit directorate, including 9 of the 10 management-level auditors. Those leaving included all but one of the audit directors, the Assistant IG for Audits, and 2 deputy assistant IGs for audits. We did not review the reasons for the OIG’s employee turnover but believe that the OIG would benefit from a review by an objective third-party expert to address the reasons for the relatively high attrition rate as compared to the overall rate for NASA. Over the 5-year period of fiscal years 2003 through 2007, NASA OIG had three routine external peer reviews—two reviews of its auditing practice and one review of its investigative practice. The NASA OIG also had a nonroutine external review performed by the Integrity Committee of PCIE and ECIE completed in fiscal year 2007 as a result of concerns about the management practices and conduct of NASA’s IG. 
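The attrition measure defined above divides personnel losses during the year by the personnel on board at the start of the year. A minimal sketch follows; the beginning-year strengths (194 and 201) are inferred from the reported rates and loss counts, not stated in the text.

```python
# Sketch of the attrition-rate definition given above. Beginning-year
# strengths (194 and 201) are inferred from the reported rates and
# loss counts, so treat them as approximations.

def attrition_rate_percent(losses, beginning_strength):
    """Personnel losses as a percentage of beginning-year strength."""
    return 100 * losses / beginning_strength

fy2003 = attrition_rate_percent(24, 194)   # ~12.4 percent
fy2007 = attrition_rate_percent(40, 201)   # ~19.9 percent
growth = 100 * (40 - 24) / 24              # losses up ~67 percent
print(f"{fy2003:.1f}% -> {fy2007:.1f}% (losses up {growth:.0f}%)")
```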
Government Auditing Standards requires audit organizations that perform audits in accordance with the standards to have external peer reviews on a routine basis, at least once every 3 years. Those reviews are to be performed by reviewers independent of the audit organization. In the federal IG community, other federal IGs perform these peer reviews. The purpose of the peer review is to conclude whether the audit organization has a system of quality control that is suitably designed and implemented in order to provide reasonable assurance of conforming to applicable professional standards. In addition, for investigations, the Homeland Security Act of 2002 amended the IG Act to require that each OIG with investigative or law enforcement authority under the act have its investigative function reviewed periodically by another IG office. For peer reviews of the audit practices, the external reviewers concluded that NASA OIG’s system of quality control for the audit function provided reasonable assurance of material compliance with professional auditing standards. The peer review of the NASA OIG’s investigative function concluded that the system of internal safeguards and management procedures for the Office of Investigations was in full compliance with the quality standards established by PCIE and ECIE and the Attorney General’s investigation guidelines. The NASA OIG also had a nonroutine external review completed in fiscal year 2007 as a result of serious concerns that had been raised about the management practices and conduct of the IG. At the request of the Integrity Committee of PCIE and ECIE, the Department of Housing and Urban Development’s (HUD) OIG conducted an investigation into the allegations of possible misconduct by the NASA IG. 
The Integrity Committee initiated the investigation through a request letter to the HUD OIG dated January 6, 2006, and forwarded 18 complaints with 79 separate allegations regarding actions of the NASA IG to the HUD OIG investigators. The HUD OIG submitted the results of its investigation for the Integrity Committee’s consideration on August 30, 2006. In a January 22, 2007, letter to the OMB Deputy Director for Management who serves as the Chair of both PCIE and ECIE, the Integrity Committee concluded that (1) the NASA IG had engaged in abuse of authority by creating an abusive work environment and (2) the NASA IG’s actions in two instances created an appearance of a lack of independence. In addition, the Integrity Committee stated that the IG had sought to develop and maintain a close relationship with the former NASA Administrator and that this effort contributed to an appearance that his independence was being compromised. However, the Integrity Committee offered no recommendations for corrective actions in their letter. Executive Order 12993 entitled Administrative Allegations Against Inspectors General provides guidance to address investigations of alleged IG wrongdoing. Under this guidance the Integrity Committee is responsible for deciding whether the investigative report prepared at its request establishes any administrative misconduct within its oversight jurisdiction. If in the Integrity Committee’s opinion the report establishes such issues or otherwise requires action, the report is referred to the Chair of PCIE and ECIE with recommendations for appropriate action. The Integrity Committee advised us that they had not believed it necessary to include specific recommendations in this case due to the extent of the findings and the presumption that the Chair of PCIE and ECIE would take disciplinary action commensurate with these findings. 
In accordance with the Executive Order, the Chair of PCIE and ECIE advised the NASA Administrator to determine the appropriate actions to address the investigation’s conclusions. The NASA Administrator proposed to the Chair that the NASA IG attend the Federal Executive Institute to develop a leadership and management training plan, attend at least one management/leadership program annually, obtain the services of an executive coach, and meet with the Deputy NASA Administrator on a bimonthly basis to discuss implementation of the leadership and management plan as well as the NASA IG’s professional growth. The NASA Administrator also stated that the proposed actions would resolve any concerns he had after reviewing the Integrity Committee’s report of investigation. Reacting to the NASA Administrator’s response, the Integrity Committee expressed its view in a March 20, 2007, letter to the Chair of PCIE and ECIE that the proposed actions were inadequate to address the investigation’s conclusions. Specifically, the Integrity Committee stated that “[a]ll members of the committee believed the proposed course of action recommended by the Administrator of NASA was inadequate to address the conduct of [the NASA IG]. All members of the committee further believed that disciplinary action up to and including removal could be appropriate.” In a follow-up letter dated March 29, 2007, the NASA Administrator reaffirmed his belief that his proposed actions were adequate. With respect to the appearance of a lack of impartiality, he stated that he and the IG had a professional arms-length relationship and that he did not believe that additional corrective measures were necessary. 
In a letter also dated March 29, 2007, the Chair of PCIE and ECIE asked the Integrity Committee for confirmation on several matters, including that its members (1) had not concluded that the IG had broken any laws or acted illegally; (2) had no uniform view on what actions would be appropriate to address its concerns regarding the IG; (3) were not now recommending removal of the IG as a disciplinary action; and (4) had not included recommendations on this matter in the January 22, 2007, letter to the PCIE and ECIE Chair. That same day, the Chair of the Integrity Committee confirmed that the PCIE and ECIE Chair’s understanding accurately reflected the intent of the Integrity Committee. In accordance with the discretion afforded in the Executive Order and the related implementing guidance, on April 18, 2007, the Chair of PCIE and ECIE advised the Chair of the Integrity Committee to consider the actions in the NASA Administrator’s March 29, 2007, letter as constituting the final disposition of the investigation. In line with the Executive Order, the Integrity Committee informed the NASA IG that its review was complete and that the case was considered closed. Notwithstanding the formal process outlined by the Executive Order, the Integrity Committee confirmed in a written response to our questions its continued concern that the actions taken regarding the appearance of a lack of independence were insufficient. In the same response, the Integrity Committee stated that the views expressed in its March 20, 2007, letter remain unchanged and that the NASA IG’s lack of an appearance of independence was not resolved by the actions proposed by the NASA Administrator. 
In late March 2007, both the Chairman of the Subcommittee on Space, Aeronautics, and Related Sciences, Senate Committee on Commerce, Science and Transportation, and the Chairman of the Subcommittee on Investigations and Oversight, House Committee on Science and Technology, received a copy of the Integrity Committee’s report of investigation. In their letter of April 2, 2007, to the President of the United States, the Chairmen requested that the President remove the NASA IG from office based on the results of the investigation. The letter states that the committees and the public are not receiving useful assistance from the NASA IG, one of their primary tools for oversight, and that the NASA IG can no longer be effective in his office and should be replaced immediately. In prepared testimony on June 7, 2007, before a joint hearing of the Subcommittee on Space, Aeronautics, and Related Sciences, Senate Committee on Commerce, Science and Transportation, and the Subcommittee on Investigations and Oversight, House Committee on Science and Technology, the NASA IG disputed the findings of the Integrity Committee investigation, calling the allegations unjustified and the investigation flawed. The IG pointed out his views regarding possible mistakes by the investigators and provided arguments to explain his actions regarding many of the allegations investigated. In this joint hearing, members of both the House and the Senate called for the IG to resign. Independence is the cornerstone of professional auditing. The IG Act requires that IGs comply with Government Auditing Standards, which specifies that auditors and audit organizations be free from personal, external, and organizational impairments and avoid the appearance of such impairments to independence. 
Auditors and audit organizations must maintain independence so that their opinions, findings, conclusions, judgments, and recommendations will be impartial and, just as important, viewed as impartial by objective third parties with knowledge of the relevant information. Quality Standards for Federal Offices of Inspector General issued by PCIE and ECIE include requirements for IGs to be objective with an obligation to be impartial, intellectually honest, and free of conflicts of interest. Independence is considered by these standards to be a critical element of objectivity, and without independence both in fact and in appearance, objectivity is impaired. As noted above, the absence of actions to address the perceived lack of independence can perpetuate concerns regarding the IG’s objectivity in dealing with IG responsibilities related to audits and investigations. Given the importance of IG independence both in fact and appearance and the lack of any corrective actions to fully resolve this matter, we believe that additional follow up and recommendations by the Integrity Committee are warranted related to its investigative finding dealing with the NASA IG’s appearance of a lack of independence. The fundamental mission of the NASA OIG includes providing independent and objective oversight of NASA to identify areas for improved economy, efficiency, and effectiveness, and to detect and prevent fraud, waste, and abuse. While the OIG has conducted audits in areas of high risk and management challenges and provided the results of investigations, the OIG’s monetary accomplishments from its audit activities have been limited by a lack of audits to evaluate the economy and efficiency of NASA’s programs and operations that result in recommendations for measurable cost savings. 
The NASA OIG’s monetary accomplishments and recommendations in the areas of economy and efficiency significantly lag behind the accomplishments and return on investment of the federal OIG community as a whole. A reevaluation of audit planning and methods within NASA’s OIG is needed to include audits that hold NASA accountable for its stewardship of public funds through independent audits and investigations that include recommendations for economy and efficiency. Due to the importance of this issue, we believe that a reexamination of the audit strategy and planning approach within the OIG can best be accomplished with the assistance of an objective outside party with experience in these types of audits. The OIG’s budgets and staffing levels have not been adversely affected when compared to both the NASA budgets and staffing and to the budgets of other OIGs. However, the effectiveness of the OIG can be negatively affected by an environment of high staff turnover, which has especially affected audit management staff. The reasons for the relatively high rate and recent increases in employee turnover should be examined by an objective expert so that any underlying issues can be addressed and the NASA OIG can effectively meet its mission of providing objective and reliable information. The independence of the IG is central to the effectiveness of the IG’s office. The Integrity Committee, which has the authority to make recommendations regarding the outcomes of its investigations, maintains that the actions taken by the NASA Administrator are insufficient, that the NASA IG’s lack of an appearance of independence is not resolved, and that the views expressed in its letter of March 20, 2007, are unchanged. Because independence is fundamental to effective oversight and professional auditing, we believe that additional follow-up actions are warranted related to the Integrity Committee’s findings dealing with the appearance of a lack of independence on the part of NASA’s IG. 
In order to strengthen audit oversight and management of the NASA OIG, we recommend that the NASA IG (1) include in strategic and annual planning performance audits that address NASA’s economy and efficiency with potential monetary savings, working closely with an objective outside party to obtain external review and consultation in the strategic and annual planning processes, and (2) identify the causes of high employee turnover with the assistance of an objective expert and determine actions needed as appropriate. In order to resolve the matter regarding the appearance of independence of the NASA IG, we recommend that the Integrity Committee follow up on its investigative finding regarding the NASA IG’s appearance of a lack of independence and make any recommendations needed. In written comments on a draft of our report, the NASA IG expressed broad disagreement with our conclusions and recommendations and questioned the depth and scope of our evaluation. We disagree with the IG and in the following paragraphs reaffirm our conclusions and recommendations. We augmented our discussions of the scope and methodology of our work and expanded the evidentiary matter in the body of this report for issues related to the Integrity Committee’s investigation and the monetary accomplishments reported by the NASA OIG over fiscal years 2003 through 2007. We rebut what we consider the most important aspects of his disagreement in this section of the report. In addition, please refer to the appendix section of this report following our reprint of the IG’s comments (see app. IV), in which we rebut or clarify other less material matters. The Integrity Committee limited its comments to matters in our draft report concerning the committee’s investigation of allegations against the NASA IG. 
The Integrity Committee restates its determination that actions taken by NASA regarding the appearance of a lack of independence findings were insufficient, states that the Integrity Committee has no power to compel any particular action, and suggests that we should present our recommendation to the Chair of PCIE and ECIE. However, we see nothing in the guidance in Executive Order 12993 to prohibit the Integrity Committee from making recommendations to the Chair of PCIE and ECIE regarding its investigative finding which has not been fully resolved. Therefore, we reaffirm our recommendation to the Integrity Committee. (See app. III.) In the written comments, the NASA IG stated that the Integrity Committee investigation of allegations against him was a closed matter. He emphasized that the Integrity Committee’s views regarding the independence matter were from a historical perspective and that there was nothing to suggest that the appearance of a lack of independence was an ongoing issue. Further, he stated that the Integrity Committee had not included any recommendations in its report and that therefore, nothing is unresolved. The IG commented that we had ignored the documented final disposition of this matter in the PCIE and ECIE Chair’s April 18, 2007 letter, and that we had selectively included or excluded information to suggest that a closed matter is still open. We fully understand that the formal investigation has run its course, and we have added discussion to the body of the report to reflect the documented interactions among the Chairman of PCIE and ECIE, the Integrity Committee, the NASA Administrator, and the NASA IG. Our report acknowledges that the Integrity Committee did not make any specific recommendations to address either the investigative findings of an abusive work environment or the perception of a lack of independence. 
However, despite the PCIE and ECIE Chair’s acceptance of the actions proposed by the NASA Administrator and closure of the case, the Integrity Committee stated, in response to our questions, that the actions were not adequate to resolve the investigative conclusion that the IG lacked an appearance of independence. As discussed in our report, the Integrity Committee told us that it did not include recommendations for corrective actions in its January 22, 2007, letter to the Chair of PCIE and ECIE regarding the results of its investigation because of the extent of the findings and a presumption that the Chair of PCIE and ECIE would take disciplinary action commensurate with these findings. These concerns are captured in the Integrity Committee’s March 20, 2007, letter to the Chair of PCIE and ECIE, which stated that “[a]ll members of the committee further believed that disciplinary action, up to and including removal, could be appropriate.” Given the Integrity Committee’s documented dissatisfaction with the corrective actions and that no actions we are aware of address the independence issue, we disagree that this matter has been fully resolved. Objective third parties with knowledge of the relevant information (the Integrity Committee’s investigation, the lack of actions to attempt to change perceptions, and the Integrity Committee’s continuing concern, expressed in a written response to our questions, that the actions taken were inadequate) could conclude that the appearance of independence issues have not been resolved. As a result, the decisions and actions of the IG may not be fully accepted as a basis for policy or other changes. This perspective is illustrated by the stances taken by the leadership of NASA’s oversight committees. 
As noted in the body of the report, in their joint letter dated April 2, 2007, the Chairman of the Subcommittee on Space, Aeronautics, and Related Matters, Senate Committee on Commerce, Science and Transportation, and the Chairman of the Subcommittee on Investigations and Oversight, House Committee on Science and Technology, requested that the President of the United States remove the NASA IG from office based on the results of the Integrity Committee’s investigation. The letter states that the oversight committees and the public are not receiving useful information from the NASA IG, one of their primary tools for oversight, and that the IG can no longer be effective in his office and should be replaced. The Integrity Committee commented that it could not concur with our recommendation because it lacked the authority to compel any particular corrective action. However, our recommendation to the Integrity Committee does not call for it to compel the corrective action, but rather to exercise its authority as allowed in Executive Order 12993 and acknowledge the concerns of its own members and make appropriate recommendations to the Chair of PCIE and ECIE for corrective action regarding its unresolved investigative finding that the NASA IG lacked an appearance of independence. The Integrity Committee confirmed its opinion that the actions taken were not sufficient and restated its opinion in the March 20, 2007 letter to the Chair of PCIE and ECIE that it supported a range of actions to be considered, up to and including removal of the NASA IG from office. Because the Integrity Committee has the authority to make recommendations within the guidance of the Executive Order, we reaffirm our report recommendation. Contrary to the NASA IG’s statement that we failed to consult with NASA OIG’s senior leadership on the important issues in this report, we met with the NASA IG and the senior OIG staff at the beginning of our review regarding our scope and methodology. 
We also coordinated a series of interviews through the IG’s Executive Officer with the OIG senior management officials responsible for all areas addressed in our report. In all instances, we identified the purpose of our planned contacts, and the IG’s Executive Officer scheduled meetings with those NASA OIG management staff who were best suited to address each matter. These included the Deputy Inspector General, the Counsel to the IG, the Assistant IG for Audits, the Assistant IG for Investigations, and the Assistant IG for Management and Planning. At the completion of our work, we met with the NASA IG and the senior OIG staff to discuss our report findings, conclusions, and recommendations. All meetings were coordinated through the IG’s office, and we were available for any input the IG may have wished to provide. The NASA IG disagreed with our recommendation to revise approaches taken in audits to include in strategic and annual planning performance audits that address NASA’s program results, effectiveness, and outcomes, as well as audits of economy and efficiency, by working closely with an objective outside party. Specifically, the NASA IG did not agree with our conclusion that the OIG’s effectiveness has been hindered by reliance on audits that do not evaluate NASA’s program economy, efficiency, and effectiveness and that result in limited monetary accomplishments. The IG Act requires that IGs address issues of economy and efficiency and provide independent audits and investigations. We have removed our concern regarding effectiveness because of the subjective nature of evaluating the OIG’s efforts in this regard. However, as stated in our report, the NASA OIG had reported only one audit with recommendations for economy and efficiency and potential cost savings to the agency over fiscal years 2006 and 2007. 
Therefore, we have narrowed the focus of our report and our recommendation in order to address our major concern that the OIG has an insufficient number of economy and efficiency audits that result in reported monetary savings. In addition, the IG does not believe that our conclusions regarding audit coverage are sufficiently balanced to recognize audits that are focused on areas other than economy and efficiency. Contrary to this statement, our report provides information stating that the OIG’s audits have addressed areas designated as high-risk and management challenges. We also state that while the OIG’s audits do not adequately address the economy and efficiency of NASA’s programs and operations, they do include recommendations for improving compliance with laws, regulations, and NASA policies and procedures; internal controls; and other specific areas of NASA’s operations. The IG provided a listing of issued audit products that he said have addressed economy, efficiency, and effectiveness issues and specifically highlighted nine examples. While the report recommendations may affect the economy and efficiency of NASA’s operations, none of these reports highlighted by the IG have specific recommendations to improve NASA’s economy and efficiency with potential cost savings. In addition, the reports’ recommendations address compliance with laws, regulations, policies and procedures, internal controls, and other areas. In addition, two of the highlighted reports were not audits and made no reference to professional auditing standards. To illustrate our concerns regarding the lack of OIG audit reports with recommendations for improving NASA’s economy and efficiency, our report provides an example of an OIG audit regarding a NASA contractor’s inadequate pricing determinations. The audit recommends that the contracting officer ensure compliance with contract agreements. 
However, even though the OIG had the opportunity, the report did not identify any measurable cost savings to the government resulting from the inadequate pricing and made no recommendations to help ensure that pricing determinations will be accurate in the future. The NASA IG notes the difference between actual monetary recoveries from investigations and potential monetary accomplishments from audits. The IG comments that the results of audits are more speculative and must rely on management’s implementation to be realized. This statement acknowledges the different purposes of audits and investigations: audits can recommend improvements to future operations, and investigations tend to focus on the identification of fraudulent and illegal activities that have occurred. Our review found that the OIG’s strategic and annual audit plans did not have goals and objectives that specifically address the economy and efficiency of NASA’s programs and operations. We had recommended that the NASA IG include in strategic and annual planning performance audits that address NASA’s economy and efficiency with potential monetary savings and that the OIG work closely with an objective outside party, such as the PCIE, to obtain external review and consultation in the strategic and annual planning processes. The NASA IG stated his intent to benchmark with the PCIE community to provide assurance that audits address these areas. While this is a positive statement, we continue to make our recommendation that the IG work closely with an objective outside party during the strategic and annual planning processes. However, we no longer specify that the IG work with the PCIE Audit Committee on this issue. The NASA IG also disagrees with our recommendation to identify the causes of high employee turnover with the assistance of an objective expert and determine actions needed as appropriate. The IG states that we did not discuss employee turnover with OIG leadership. 
To the contrary, our discussions with OIG management, both past and present, provided the information on turnover in our report and alerted us to the problem of the OIG’s relatively high staff attrition rate. The IG also provides attrition rates of other agency OIGs that are all lower than that of the NASA OIG, which supports our conclusion that the NASA OIG has a comparatively high staff attrition rate even when compared to other OIGs. The IG also states that a number of steps have been taken to address the continuing significant turnover rates. We are encouraged that the IG is already taking steps in this area; however, because of the OIG’s relatively high rate of staff attrition, we are recommending that the NASA IG use the assistance of an objective expert to identify the causes of employee turnover. As agreed with your offices, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from its date. At that time we will send copies of the report to the NASA Administrator; the NASA IG; the Chairman of the Integrity Committee; the OMB Deputy Director for Management; the Chairman and Ranking Member of the Senate Committee on Commerce, Science and Transportation; interested congressional committees; and other parties. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions or would like to discuss this report, please contact me at (202) 512-9471 or franzelj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix V. Amounts for TVA’s IG are from PCIE’s fiscal year 2007 profile data. Comment 1. As the Integrity Committee stated, the information related to its activities in this report was obtained in connection with our separate ongoing audit of the activities and operations of the Integrity Committee. 
The following are GAO’s comments on the NASA Inspector General’s letter dated October 14, 2008. The meetings referred to by the IG do not result in an endorsement of the OIG’s work plans but rather are discussions between the NASA OIG and GAO for purposes of coordination and cooperation. Our review of the NASA OIG’s strategic and annual plans on this audit was in response to our findings regarding the need for additional oversight of NASA’s economy and efficiency and measurable potential cost savings from OIG audits. We are encouraged by the NASA IG’s intent to benchmark with the PCIE community to provide assurance that audits addressing program effectiveness, economy, and efficiency fulfill the OIG mission. Therefore, we have modified our report recommendation to have the OIG work closely with an objective outside party to include audits of NASA’s economy and efficiency with potential monetary savings in strategic and annual plans. The IG’s comments do not cite any specific recommendations that are targeted toward economy and efficiency with potential cost savings. In addition, we reviewed all the recommendations in the OIG’s audit products issued during fiscal years 2006 and 2007 and found only one report with these types of recommendations. By providing the titles of OIG reports, the IG provides little if any additional information on whether economy and efficiency issues were addressed by the outcomes of the reports. Therefore, our report continues to focus on the OIG’s lack of economy and efficiency audit results with measurable cost savings as well as the lack of a strategy for dealing with these types of objectives in the annual and strategic audit plans. The OIG’s audit of NASA’s plan for Space Shuttle transition concluded that NASA’s transition plan did not comprehensively address all elements critical for a successful transition and recommended that the planning be enhanced and that the transition be recognized as an agency management challenge. 
Neither this report nor any other OIG report issued during fiscal years 2006 and 2007 had any specific recommendations to improve NASA’s economy and efficiency with measurable cost savings related to this important and costly transition program. We did not include the percent of OIG recommendations implemented as part of our review. We evaluated the substance of the recommendations to determine whether they identified opportunities for improvements to NASA’s economy and efficiency with measurable potential cost savings. This investigation was a coordinated effort by several offices, including those from NASA, components of the Department of Defense, and academia, but was not performed by the NASA OIG. Our scope was to review the results of audits and investigations by the OIG and thus, we did not include it in our review. The IG also provided the title of an investigative report regarding allegations that NASA had suppressed climate change information as an example of an accomplishment. This information does not deal with the economy and efficiency of NASA’s programs and operations and monetary accomplishments, which are a focus of our findings and recommendations. We did not consider DCAA audits to be related to the NASA OIG’s accomplishments since they are routinely provided as a service to NASA’s contracting officers. The NASA IG points out that in our comparison of monetary accomplishments and the return on investment by 30 IG offices where the IGs are appointed by the President and confirmed by the Senate, our presentation of monetary accomplishments for the Department of Agriculture OIG and the Department of Homeland Security OIG includes the accomplishments of DCAA. Accordingly, we have removed the DCAA amounts from the accomplishments reported for these OIGs and adjusted the total monetary accomplishments for all 30 OIGs for fiscal year 2007 from a return of $9.52 per budget dollar to a return of $9.49. 
This did not affect the status of the NASA OIG’s monetary accomplishments, which continues to be a return of $0.36 per dollar of the OIG’s budgetary resources and continues to rank 27th out of 28 OIGs reporting such accomplishments for fiscal year 2007. We selected fiscal year 2007 for comparison because at the time of our review it was the most recent full year with comparative data among the OIGs, and it was the year with the largest reported dollar savings resulting from NASA OIG’s audits. However, in response to the NASA IG’s suggestion that we provide data on accomplishments over a 5-year period, we added table 4 to our report. It shows the NASA OIG’s monetary accomplishments from fiscal years 2003 through 2007 and further supports our conclusion that the OIG provides limited monetary accomplishments from audits. We agree that the NASA IG should be committed to the full range of activities and objectives stated in the IG Act. Those objectives and activities include audits that result in recommendations to improve the economy and efficiency of NASA’s programs and operations with measurable cost savings, as well as receiving whistleblower complaints. Our review compared the NASA OIG’s total budgetary resources to NASA’s total budgetary resources for fiscal years 2003 through 2007 to determine whether the OIG’s budgets were increasing or decreasing relative to NASA’s overall budgets. As stated in our report, the OIG’s budget as a percent of NASA’s budget increased from 0.15 to 0.17 percent during this period. The NASA IG states that $3.5 million in funding was shifted from NASA to the OIG to pay for financial audits but cites this as having no actual impact on funds available for OIG operations. We disagree that these funds do not contribute to the resources available for OIG operations. The NASA IG is subject to the Chief Financial Officers Act of 1990, which specifies that the IG is responsible for the financial statement audits of the agency. 
The increase of $3.5 million provides resources in the OIG’s budget for these mandated audits. As stated in our report, over the 5-year period we reviewed, the OIG’s budgets kept pace with or were slightly better than NASA’s budgets as a whole. In addition, when compared to other OIGs for fiscal year 2007, the NASA OIG ranked 11th out of 30 agencies in the ratio of the OIGs’ budgets to their agencies’ budgets. In addition to the contact named above, Jackson Hufnagle, Assistant Director; Francis Dymond; Jacquelyn Hamilton; Jason Kirwan; and Clarence Whitt made key contributions to this report.
GAO was asked to review the National Aeronautics and Space Administration (NASA) Office of Inspector General (OIG) and provide information on (1) the audit and investigative coverage of NASA; (2) the NASA OIG's audit and investigative accomplishments; (3) the NASA OIG's budget and staffing levels, including staff attrition rates; and (4) the results of external reviews of the NASA OIG. GAO obtained information from NASA OIG reports, interviews, and documentation. The fundamental mission of the statutory federal IG offices, including the NASA OIG, includes identifying areas for improved economy, efficiency, and effectiveness through independent and objective oversight and preventing and detecting fraud, waste, and abuse. Of the 71 reports issued by the OIG's Office of Audits in fiscal years 2006 and 2007, only 1 report had recommendations to address the economy and efficiency of NASA's programs and operations with measurable monetary accomplishments. Over the 5-year period of fiscal years 2003 through 2007, audit reports contributed to only 1 percent of the OIG's total monetary accomplishments. The remaining 99 percent came from the OIG's investigative cases. Of about $9 million in total reported monetary accomplishments from audits over the 5-year period, almost $7 million was from one audit completed in fiscal year 2007. When the monetary accomplishments of both audits and investigations in fiscal year 2007 are combined and compared to the OIG's budget of $34 million, the return for each budget dollar is $0.36. This calculation for all 30 OIGs with IGs appointed by the President and confirmed by the Senate averages $9.49, or 26 times that of the NASA OIG. The OIG's relative lack of monetary accomplishments from audits is due, at least in part, to the OIG's strategic and annual audit plans, which do not provide assurance that NASA's economy and efficiency will be addressed or that measurable monetary accomplishments will be achieved. 
We believe that during the planning process, the OIG should consult with an objective third party with experience in providing economy and efficiency audits with potential monetary savings. The OIG's budgets and staffing kept pace with or did slightly better than NASA's as a whole for these same resources during fiscal years 2003 through 2007. When comparing the fiscal year 2007 budgets of all 30 IGs appointed by the President and confirmed by the Senate with their respective agencies' budgets, the NASA OIG ranked 11th. Nevertheless, GAO noted that the OIG's ability to retain experienced audit personnel was adversely affected by a staff attrition rate that has increased from 12 percent to almost 20 percent over fiscal years 2003 through 2007. Due to the relatively high attrition rates, GAO believes that the OIG should use the assistance of an objective expert to identify the causes of staff turnover. The NASA OIG's most recent peer reviews for both audits and investigations have resulted in unqualified opinions. A recent investigation by the Integrity Committee of the President's Council on Integrity and Efficiency and the Executive Council on Integrity and Efficiency reported that the NASA IG had an appearance of a lack of independence. The investigation was closed, but corrective actions did not address this finding, and the Integrity Committee considers the issue unresolved. This issue has been raised by members of the Congress as a limitation in obtaining independent oversight of NASA.
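The return-per-budget-dollar comparison cited in this summary is simple arithmetic that can be reproduced directly. The sketch below uses only figures stated in the report (the $34 million fiscal year 2007 budget, the $0.36 NASA OIG rate, and the $9.49 all-OIG average); the implied dollar total of NASA OIG accomplishments is back-calculated and therefore approximate.

```python
# Reproduces the return-per-budget-dollar comparison from this summary.
# All rates and the $34 million budget come from the report; the implied
# accomplishments total is back-calculated and therefore approximate.

def return_per_dollar(accomplishments, budget):
    """Reported monetary accomplishments per dollar of OIG budget."""
    return accomplishments / budget

nasa_budget = 34_000_000   # NASA OIG budgetary resources, fiscal year 2007
nasa_rate = 0.36           # NASA OIG return per budget dollar
avg_rate = 9.49            # average for the 30 presidentially appointed,
                           # Senate-confirmed OIGs

implied_total = nasa_rate * nasa_budget
print(f"Implied NASA OIG accomplishments: ${implied_total:,.0f}")
print(f"All-OIG average is {avg_rate / nasa_rate:.0f} times NASA's rate")  # 26
```

The roughly $12 million implied total is consistent with the report's breakdown, in which investigations account for nearly all monetary accomplishments and audits for about 1 percent over the 5-year period.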
The United States has experienced dramatic changes in mobile phone use since nationwide cellular service became available in the mid-1980s. For example, the number of estimated mobile phone subscribers has grown from about 3.5 million in 1989 to approximately 286 million by the end of 2009, according to the most recent data reported by FCC. Further, the number of Americans who rely exclusively on mobile phones for voice service has increased in recent years. For example, by the end of 2009 over 50 percent of young adults aged 25 to 29 relied exclusively on mobile phones, according to the most recent FCC data. The way individuals use mobile phones has also changed. For instance, while average minutes of use per mobile phone subscriber per month has declined in recent years, mobile text messaging traffic has increased. About 88 percent of teenage mobile phone users now send and receive text messages, which is a rise from the 51 percent of teenagers who texted in 2006. Mobile phones are low-powered radio transceivers—a combination transmitter and receiver—that use radio waves to communicate with fixed installations, called base stations or cell towers. The radio waves used by mobile phones are a form of electromagnetic radiation—energy moving through space as a series of electric and magnetic waves. The spectrum of electromagnetic radiation comprises a range of frequencies from very low, such as electrical power from power lines, through visible light, to extremely high, such as gamma rays, as shown in figure 1. The portion of the electromagnetic spectrum used by mobile phones—as well as other telecommunications services, such as radio and television broadcasting— is referred to as the RF spectrum. The electromagnetic spectrum includes ionizing and non-ionizing radiation. Ionizing radiation, such as gamma rays, has energy levels high enough to strip electrons from atoms and molecules, which can lead to serious biological damage, including the production of cancers. 
RF energy, on the other hand, is in the non-ionizing portion of the electromagnetic spectrum, which lacks the energy needed to cause ionization. However, RF energy can produce other types of biological effects. For example, it has been known for many years that exposure to high levels of RF energy, particularly at microwave frequencies, can rapidly heat biological tissue. This thermal effect can cause harm by increasing body temperature, disrupting behavior, and damaging biological tissue. The thermal effect has been successfully harnessed for household and industrial applications, such as cooking food and molding plastics. Since mobile phones are required to operate at power levels well below the threshold for known thermal effects, the mobile phone health issue has generally focused on whether there are any adverse health effects from long-term or frequent exposure to low-power RF energy emissions that are not caused by heating. Scientific research to date has not demonstrated adverse human health effects from RF energy exposure from mobile phone use, but additional research may increase understanding of possible effects. In 2001, we reported that FDA and others had concluded that research had not shown RF energy emissions from mobile phones to have adverse health effects, but that insufficient information was available to conclude mobile phones posed no risk. Following another decade of scientific research and hundreds of studies examining health effects of RF energy exposure from mobile phone use, FDA maintains this conclusion. FDA stated that while the overall body of research has not demonstrated adverse health effects, some individual studies suggest possible effects. Officials from NIH, experts we interviewed, and a working group commissioned by IARC—the World Health Organization’s agency that promotes international collaboration in cancer research—have reached similar conclusions. 
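The ionizing versus non-ionizing distinction discussed above can be made concrete with a photon-energy calculation. This is an illustrative sketch, not a figure from the report: the 1.9 GHz cellular frequency and the roughly 10 eV ionization threshold are assumed order-of-magnitude values.

```python
# Photon energy E = h * f shows why RF emissions cannot ionize atoms.
# The 1.9 GHz cellular frequency and ~10 eV ionization threshold are
# illustrative assumptions, not figures from the report.

PLANCK_H = 6.626e-34       # Planck constant, joule-seconds
JOULES_PER_EV = 1.602e-19  # one electron volt, in joules

def photon_energy_ev(frequency_hz):
    """Energy of a single photon, in electron volts."""
    return PLANCK_H * frequency_hz / JOULES_PER_EV

rf_photon = photon_energy_ev(1.9e9)        # illustrative cellular band
visible_photon = photon_energy_ev(5.5e14)  # mid-visible light
IONIZATION_EV = 10.0                       # rough energy to strip an electron

print(f"RF photon:      {rf_photon:.1e} eV")
print(f"Visible photon: {visible_photon:.1f} eV")
print(f"RF photon is about {IONIZATION_EV / rf_photon:.0e} times too weak to ionize")
```

At around 8 microelectronvolts per photon, RF energy at cellular frequencies falls roughly a million-fold short of the energy needed to strip electrons from atoms, consistent with its classification as non-ionizing; any biological effects must therefore arise through other mechanisms, such as heating.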
For example, in May 2011 IARC classified RF energy as “possibly carcinogenic to humans.” IARC determined that the evidence from the scientific research for gliomas, a type of cancerous brain tumor, was limited—meaning that an association has been observed between RF energy exposure and cancer for which a causal relationship is considered to be credible, but chance, bias, or confounding factors could not be ruled out with reasonable confidence. With respect to other types of cancers, IARC determined that the evidence was inadequate—meaning that the available studies are of insufficient quality, consistency, or statistical power to permit a conclusion about the causal association. Additionally, in April 2012 an advisory group to the Health Protection Agency—an independent organization established by the United Kingdom government to protect the public from environmental hazards and infectious diseases—concluded that although there is substantial research on this topic, there is no convincing evidence that RF energy below guideline levels causes health effects in adults or children. A broad body of research is important for understanding the health effects of RF energy exposure from mobile phone use, because no single study can establish a cause-and-effect relationship and limitations associated with studies can make it difficult to draw conclusions. Two types of studies, epidemiological and laboratory, are used in combination to examine effects from mobile phones. Epidemiological studies investigate the association, if any, between health effects and the characteristics of people and their environment. Laboratory studies conducted on test subjects—including human volunteers, laboratory animals, biological tissue samples, or isolated cells—are used to determine a causal relationship between possible risk factors and human health, and the possible mechanisms through which that relationship occurs. 
Studies we reviewed suggested and experts we interviewed stated that epidemiological research has not demonstrated adverse health effects from RF energy exposure from mobile phone use, but the research is not conclusive because findings from some studies have suggested a possible association with certain types of tumors, including cancerous tumors. Findings from one such study, the INTERPHONE study, were published in 2010. This retrospective case-control study with more than 5,000 cases examined the association between mobile phone use and certain types of brain tumors, including cancerous tumors, in individuals aged 30-59 years in 13 countries. Overall study findings did not show an increased risk of brain tumors from mobile phone use, but at the highest level of exposure, findings suggested a possible increased risk of glioma. Other epidemiological studies have not found associations between mobile phone use and tumors, including cancerous tumors. For example, findings from a nationwide cohort study conducted in Denmark that originally followed 420,095 individuals did not show an association between increased risk for certain types of tumors, including cancerous tumors, and mobile phone use. Additionally, findings from a subset of the cohort—56,648 individuals with 10 or more years since their first mobile phone subscription—did not show an increased risk for brain and nervous system tumors. Further, these findings did not change for individuals in the cohort with 13 or more years since their first mobile phone subscription. Also, the CEFALO study—an international case-control study that compared children aged 7 to 19 diagnosed with certain types of brain tumors, including brain cancers, to similar children who were not diagnosed with brain tumors—found no relationship between mobile phone use and risk for brain tumors. 
Findings from another study, which was conducted by NIH and examined trends in brain cancer incidence rates in the United States using national cancer registry data collected from 1992 to 2006, did not find an increase in new cases of brain cancer, despite a dramatic increase in mobile phone use during this time period. Limitations associated with epidemiological studies can make it difficult to draw definitive conclusions about whether adverse health effects are linked to RF energy exposure from mobile phone use. One such limitation is that it is difficult to measure and control for all variables that may affect results. For example, it can be difficult to accurately measure RF energy exposure from mobile phone use because humans are exposed to RF energy from many sources within their environments and mobile phone technology and user patterns frequently change. Also, epidemiological studies to date have been limited in their ability to provide information about possible effects of long-term RF energy exposure because the prevalence of long-term mobile phone use is still relatively limited and some tumors, including some cancerous tumors, do not develop until many years after exposure. In addition, epidemiological studies, specifically cohort studies, are sometimes limited in their ability to provide information about increased risks for rare outcomes, such as certain types of brain tumors. To address challenges with assessing rare outcomes, case-control studies, which collect information about past mobile phone use among study participants, may be undertaken with large numbers of cases and controls. While these studies can potentially provide information on long-term use, and include enough cancer cases to examine whether this use is associated with rare diseases, collecting data in this way can introduce bias, such as recall bias, into study data and further limit findings. 
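The case-control design described above can be illustrated with a toy odds-ratio calculation, the basic measure of association such studies report. All counts below are hypothetical; they are not data from INTERPHONE, CEFALO, or any other study cited here.

```python
# Toy odds-ratio calculation illustrating how case-control studies like
# those described above quantify an exposure-outcome association.
# All counts are hypothetical, not data from any study cited in this report.

def odds_ratio(exp_cases, unexp_cases, exp_controls, unexp_controls):
    """Odds of exposure among cases divided by odds among controls."""
    return (exp_cases / unexp_cases) / (exp_controls / unexp_controls)

# Hypothetical 2x2 table: heavy mobile phone use vs. tumor diagnosis.
estimate = odds_ratio(exp_cases=60, unexp_cases=440,
                      exp_controls=50, unexp_controls=450)
print(f"Odds ratio: {estimate:.2f}")  # values above 1 suggest association

# Recall bias: if cases over-report past use after diagnosis, exposed-case
# counts are inflated and the odds ratio overstates any true association.
```

Because exposure is reconstructed retrospectively from participants' reports, a modestly elevated odds ratio like this one cannot by itself distinguish a real association from recall bias, which is the limitation the report describes.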
To mitigate this potential bias, some epidemiological studies, specifically cohort studies, follow large populations over time and collect data about mobile phone use before participants develop a certain outcome. In spite of these limitations, experts we spoke with told us that epidemiological studies are a key component of the body of research used for assessing the health effects of mobile phones. Studies we reviewed suggested and experts we interviewed stated that laboratory research has not demonstrated adverse human health effects from RF energy exposure from mobile phone use, but the research is not conclusive because findings from some studies have observed effects on test subjects. Some laboratory studies have examined whether RF energy has harmful effects by exposing samples of human and animal cells to RF energy over a range of dose rates, durations, and conditions to detect any changes in cellular structures and functions. For example, some studies have examined the effects of RF energy on deoxyribonucleic acid (DNA) in rodent and human cells. While some of these studies found that RF energy exposure damaged DNA, others failed to replicate such an effect using similar experimental conditions. Other studies have exposed laboratory animals to RF energy, examined the animals for changes, and compared outcomes with a control group. For example, some studies have measured the behavior or cognitive functioning of rats to assess the neurological effects of RF energy. According to some studies we reviewed, while some of these studies have observed changes in behavior and cognitive function, overall, these studies have not consistently found adverse effects from RF energy levels emitted from mobile phones. Laboratory studies also have exposed human volunteers to RF energy to investigate possible effects, such as effects on the neurological system or blood pressure. 
According to studies we reviewed, some studies on human volunteers have observed changes, such as changes in brain activity, but the implications of these physiological changes in relation to adverse effects on human health are unknown. Limitations associated with laboratory studies can make it difficult to draw conclusions about adverse human health effects from RF energy exposure from mobile phone use. For example, studies conducted on laboratory animals allow researchers to examine the effects of RF energy exposure on animal systems, but this type of research is limited because effects on laboratory animals may not be the same in humans. Additionally, studies on test subjects may observe biological or physiological changes, but in some circumstances it is unclear how or even if these changes affect human health. Further, to strengthen the evidence that changes observed in laboratory studies are the effect of RF energy exposure, studies must be replicated and confirmed by additional research that observes similar effects under different dose rates, durations, and conditions of RF energy. To date, according to FDA officials and some experts we interviewed, only a few laboratory studies that have shown effects from RF energy have been replicated, and some replicated studies have not confirmed earlier results. Studies we reviewed and experts we interviewed identified key areas for additional epidemiological and laboratory studies, and according to experts, additional research may increase understanding of any possible effects. For example, additional epidemiological studies, particularly large long-term prospective cohort studies and case-control studies on children, could increase knowledge on potential risks of cancer from mobile phone use. Also, studies and experts identified several areas for additional laboratory studies.
For example, additional studies on laboratory animals as well as human and animal cells examining the possible toxic or harmful effects of RF energy exposure could increase knowledge on potential biological and health effects of RF energy. Further, additional laboratory studies on human and animal cells to examine non-thermal effects of RF energy could increase knowledge of how, if at all, RF energy interacts with biological systems. However, some experts we spoke to noted that, absent clear evidence for adverse health effects, it is difficult to justify investing significant resources in research examining non-thermal effects of RF energy from mobile phone use. Another area identified for additional laboratory research is studies on human volunteers examining changes in the neurological system, which could help determine whether the possible changes in neurological functioning observed after RF energy exposure are adverse effects. In addition to conducting additional research, experts we interviewed reported that the broader body of evidence on RF energy should be re-evaluated when findings from key studies become available, to determine whether additional research in certain areas is still warranted. Current research activities of federal agencies, international organizations, and the mobile phone industry include funding and supporting ongoing research on the health effects of RF energy exposure from mobile phones. NIH is the only federal agency we interviewed that is directly funding ongoing studies on health effects of RF energy from mobile phone use. NIH officials reported that the agency has provided about $35 million for research in this area from 2001 to 2011. (See table 1 for more information on ongoing studies funded by NIH.) Although other federal agencies are not directly funding research in this area, some agencies are providing support for ongoing studies.
For example, FDA officials reported that FDA’s National Center for Toxicological Research, with funding provided by NIH as part of the National Toxicology Program, is conducting studies on rat and bovine brain cells to examine whether RF energy emitted from mobile phones is toxic. Also, CDC officials reported that the agency is collaborating with others to conduct ongoing studies in this area. For example, CDC officials reported that one of the agency’s staff is collaborating with researchers in seven countries to conduct additional analyses on data collected through the INTERPHONE study to determine whether occupational exposure to RF energy and chemicals was a risk factor for brain cancer. Federal agencies are also engaged in other activities to support research on the health effects of mobile phone use. For example, FDA collaborates with other organizations on research-related projects. According to FDA officials, the agency helped the World Health Organization develop its WHO Research Agenda for Radiofrequency Fields in 2001 and has provided comments to the World Health Organization on updates to this research agenda. Also, officials from federal agencies that have responsibility for different aspects of RF energy safety and work—CDC, EPA, FCC, FDA, NIH, the National Telecommunications and Information Administration, and OSHA—are members of the Radiofrequency Interagency Work Group, which works to share information on RF energy related projects at the staff level. According to FCC and FDA officials, this group periodically meets to discuss RF energy related issues, including recently published and ongoing research on the health effects of RF energy exposure. International organizations also support research on health effects of RF energy exposure from mobile phone use. Officials from IARC told us that the organization is currently supporting research activities for ongoing studies examining health effects of mobile phone use with respect to cancer.
For example, IARC is involved in the identification of research sites for and implementation of the COSMOS study—a large international, prospective, cohort study that will follow individuals for 25 or more years to examine possible long-term health effects of using mobile phones, such as brain tumors, including cancers, and other health outcomes. IARC is also coordinating additional data analyses on previously published studies examining mobile phone health effects. For example, IARC is coordinating additional analyses of data collected for the INTERPHONE study. Additionally, the European Commission—the European Union’s executive body that represents the interest of Europe as a whole—is supporting research in this field. Under its research program—the Seventh Framework Programme—the European Commission has provided funds for the MOBI-KIDS study, an international case-control study examining the possible association between communication technology, including mobile phones and other environmental exposures, and the risk of brain tumors in people aged 10 to 24 years. The mobile phone industry supports research by providing funding for studies. According to representatives from mobile phone manufacturers, service providers, and industry associations, most industry funding for scientific research is provided by the Mobile Manufacturers Forum—an international not-for-profit association that is largely comprised of wireless device manufacturers. According to representatives from the Mobile Manufacturers Forum, the association has provided about $46 million for RF energy research since 2000 and is currently providing support for epidemiological and laboratory studies. 
Although representatives from all four mobile phone manufacturers that we interviewed reported that their companies support research through their industry associations, representatives from one manufacturer reported that it is also funding two studies examining the effects of RF energy emitted from mobile phones on human hands and the head. In 1996, FCC adopted the RF energy exposure limit for mobile phones of 1.6 watts per kilogram, averaged over one gram of tissue, a measurement of the amount of RF energy absorbed into the body. FCC developed its limit based on input from federal health and safety agencies as well as the 1991 recommendation by the Institute of Electrical and Electronics Engineers (IEEE) that was subsequently approved and issued in 1992 by the American National Standards Institute (ANSI). This recommended limit was based on evidence related to the thermal effects of RF energy exposure—the only proven health effects of RF energy exposure from mobile phone use—and was set at a level well below the threshold for such effects. (Actual exposure depends on a number of factors, including the operating power of the phone, how the phone is held during use, and where it is used in proximity to a mobile phone base station.) FCC noted that the limit provided a proper balance between protecting the public from exposure to potentially harmful RF energy and allowing industry to provide telecommunications services to the public in the most efficient and practical manner possible. In 2006, IEEE published its updated recommendation for an RF energy exposure limit of 2.0 watts per kilogram, averaged over 10 grams of tissue. According to IEEE, improved RF energy research and a better understanding of the thermal effects of RF energy exposure on animals and humans, as well as a review of the available scientific research, led to the change in recommended RF energy exposure limit.
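A SAR value like the ones above is, at a point in tissue, the tissue's electrical conductivity times the squared electric field strength divided by tissue density; the certified figures are then spatially averaged over 1 gram (FCC) or 10 grams (IEEE 2006) of tissue. The following minimal sketch applies only the point relation, with illustrative, assumed tissue parameters rather than measurements from any device, and ignores the spatial averaging step entirely.

```python
# Minimal sketch of a local SAR calculation checked against the two limits
# discussed above. Conductivity, field strength, and density values are
# illustrative assumptions, not data from any certified mobile phone.

FCC_LIMIT_W_PER_KG = 1.6        # FCC 1996 limit, averaged over 1 gram of tissue
IEEE_2006_LIMIT_W_PER_KG = 2.0  # IEEE 2006 recommendation, averaged over 10 grams

def point_sar(conductivity_s_per_m: float,
              e_field_v_per_m: float,
              density_kg_per_m3: float) -> float:
    """Local SAR in W/kg: sigma * |E|^2 / rho (standard dosimetry relation)."""
    return conductivity_s_per_m * e_field_v_per_m ** 2 / density_kg_per_m3

# Illustrative tissue-like parameters (assumed for this sketch).
sar = point_sar(conductivity_s_per_m=0.9, e_field_v_per_m=40.0,
                density_kg_per_m3=1000.0)

print(f"local SAR: {sar:.2f} W/kg")                        # → local SAR: 1.44 W/kg
print("within 1.6 W/kg limit:", sar <= FCC_LIMIT_W_PER_KG)
print("within 2.0 W/kg limit:", sar <= IEEE_2006_LIMIT_W_PER_KG)
```

Because the averaging mass differs between the two limits, a device's 1-gram and 10-gram averaged values are generally not comparable number for number, which is part of why harmonization of the limits is discussed later in this report.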
IEEE’s new recommended limit was harmonized with a 1998 recommendation of the International Commission on Non-Ionizing Radiation Protection, which has been adopted by more than 40 countries, including the European Union countries. Both of these recommendations call for an exposure limit of 2.0 watts per kilogram averaged over 10 grams of tissue, which according to IEEE represents a scientific consensus on RF energy exposure limits. (See IEEE Std. C95.1-2005.) According to senior FCC officials, the agency has not adopted any newer limit because federal health and safety agencies have not advised them to do so. FCC officials told us that they rely heavily on the guidance and recommendations of federal health and safety agencies when determining the appropriate RF energy exposure limit and that, to date, none of these agencies have advised FCC that its current RF energy limit needs to be revised. Officials from FDA and EPA told us that FCC has not formally asked either agency for an opinion on the RF energy limit. FDA officials noted, though, that if they had a concern with the current RF energy exposure limit, then they would bring it to the attention of FCC. Although federal guidance states that agencies should generally use consensus standards, FCC officials provided reasons why they did not have current plans to change the RF energy exposure limit. Office of Management and Budget Circular A-119 concerning federal use of technical standards states that federal agencies must use “consensus standards in lieu of government-unique standards,” except where inconsistent with law or otherwise impractical. FCC officials noted that no determination has been made that the new recommended RF energy exposure limit is inconsistent with law or impractical. FCC has recognized that research on RF energy exposure is ongoing and pledged to monitor the science to ensure that its guidelines continue to be appropriate.
FCC officials noted that an assessment of the current limit and the new recommended limit could be accomplished through a formal rulemaking process, which would include a solicitation of information and opinions from federal health and safety agencies. FCC could alternatively release a Notice of Inquiry to gather information on this issue without formally initiating rulemaking. Stakeholders we spoke with varied on whether the current U.S. RF energy exposure limit should be changed to reflect the new recommended limit. For instance, a few experts and consumer groups we spoke with said FCC should not adopt the new recommended exposure limit because of the relative uncertainty of scientific research on adverse health effects from mobile phone use. An official from one consumer group told us that adopting the 2.0 watts per kilogram exposure limit would be a step back, since it could allow users to be exposed to higher radiation levels. Conversely, some experts we spoke with maintained that both the 1.6- and 2.0-watts-per-kilogram limits protect users from the thermal effects of RF energy exposure—which the experts maintained are the only conclusively demonstrated effects of exposure—since a safety factor of fifty was applied to obtain the limits, meaning that the maximum permitted exposure is a fiftieth of what was determined to be the exposure at which potentially deleterious thermal effects are likely to occur. Nevertheless, by not formally reassessing its current RF energy exposure limit, FCC cannot ensure that it is using a limit that reflects the latest evidence on thermal effects from RF energy exposure, and may impose additional costs on manufacturers and limitations on mobile phone design. FCC’s current limit was established based on recommendations made more than 20 years ago. 
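The safety factor of fifty described above can be made concrete with simple arithmetic: the implied thermal-effect thresholds below are back-computed from each limit and the stated factor, not independently sourced figures.

```python
# Back-compute the exposure levels that each limit is one-fiftieth of,
# per the safety factor of fifty described in the text. The implied
# thresholds follow arithmetically from the limits, not from a separate source.

SAFETY_FACTOR = 50

def implied_threshold(limit_w_per_kg: float, factor: int = SAFETY_FACTOR) -> float:
    """The exposure level of which the limit is one-fiftieth."""
    return limit_w_per_kg * factor

for label, limit in [("FCC 1996 (1 g avg)", 1.6), ("IEEE 2006 (10 g avg)", 2.0)]:
    print(f"{label}: limit {limit} W/kg -> "
          f"implied threshold {implied_threshold(limit):.0f} W/kg")
```

On this arithmetic, both limits sit far below the level at which potentially deleterious thermal effects were judged likely to occur, which is the basis for the experts' view that either limit protects against thermal effects.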
According to IEEE, the new recommended limit it developed is based on significantly improved RF research and therefore a better understanding of the thermal effects of RF energy exposure. Additionally, three of the four mobile phone manufacturers we spoke with favored harmonization of RF energy exposure limits, telling us that maintaining the separate standards can result in additional costs and may affect phone design in a way that could limit performance and functionality. According to some manufacturers we spoke with, many of their phones are sold in multiple countries. As a result, the manufacturers have to develop and test phones based on different exposure limits, which can require additional resources and slow the time it takes to get new phones into the market. Additionally, one manufacturer indicated that some features are not enabled on phones sold in the United States that are available in other countries to comply with FCC’s current limit. A reassessment by FCC would help it to determine if any changes to the limit are appropriate. FCC ensures compliance with its RF energy exposure limit by certifying all mobile phones sold in the United States. In their applications for certification, manufacturers must provide evidence that their mobile phones meet FCC’s RF energy exposure limit. FCC has authorized 23 TCBs in the United States and other countries to review applications that involve evaluation of RF exposure test data and issue certifications on behalf of the agency. TCBs are private organizations that have been accredited to perform these functions. TCBs now perform the majority of mobile phone certifications, with FCC generally only handling the more complex certifications, such as mobile phones with multiple transmitters using third generation and fourth generation technology. Figure 2 illustrates the mobile phone certification process.
Representatives from mobile phone manufacturers we spoke with were generally satisfied with how TCBs review and certify mobile phones, but noted that complex certifications handled by FCC can take a long time to process. For instance, since there are generally no established test procedures for new technologies, FCC must work with the manufacturer to develop appropriate procedures by which the agency can determine if the device meets the RF energy exposure limit. According to FCC, part of this review may result in changes to testing guidance. For example, representatives from one manufacturer told us that FCC may take many months to process an application for a newer product. FCC officials told us that over the last 10 years, the average time to review an application submitted directly to the agency has ranged from 45 to 60 days. Representatives from one TCB we spoke with noted that the TCB review can be as short as a week, though FCC does not collect data on how long it takes TCBs to process applications. To ensure that mobile phones comply with FCC’s RF energy exposure limit, manufacturers conduct tests at their own laboratories or have the testing conducted for them by private laboratories. Laboratories must follow standardized FCC testing procedures or work with FCC to develop acceptable alternatives in some complex cases. These procedures require that the SAR be measured to ensure the mobile phone’s compliance with the FCC exposure limit, which was designed to ensure that mobile phones do not expose the public to levels of RF energy that could be potentially harmful. FCC periodically updates the testing procedures as new mobile phone technology is introduced. A typical testing set-up is shown in figure 3. 
FCC has implemented standardized testing procedures requiring mobile phones to be tested for compliance with the RF energy exposure limit when in use against the ear and against the body while in body-worn accessories, such as holsters, but these requirements may not identify the maximum exposure under other conditions. The specific minimum separation distance from the body is determined by the manufacturer (never to exceed 2.5 centimeters), based on the way in which the mobile phone is designed to be used. The results of these testing requirements are two different values: a maximum SAR value for the head and a maximum SAR value for the body. However, these testing procedures may not identify the maximum SAR for the body, since some consumers use mobile phones with only a slight distance, or no distance, between the device and the body, such as placing the phone in a pocket while using an ear piece. Using a mobile phone in this manner could result in RF energy exposure above the maximum body-worn SAR determined during testing, although that may not necessarily be in excess of the FCC’s limit. In such a case, exposure in excess of FCC’s limit could occur if the device were to transmit continuously and at maximum power. FCC has not reassessed its testing requirements to ensure that testing identifies the maximum RF energy exposure for usage conditions other than against the head, such as when mobile phones are used against the body without body-worn accessories or not as advised by the manufacturer’s instructions. Although FCC officials said that they provide case-by-case guidance for many mobile phones operating with new technologies, they do not require testing of mobile phones when used without body-worn accessories unless such conditions are specifically identified by the manufacturer’s operating instructions. Representatives of some consumer groups we spoke with expressed concern about the exposure to RF energy that can come with such use.
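The two certified values and the separation-distance rule described above can be summarized in a small sketch. The function name and the numeric SAR values here are hypothetical illustrations, not an FCC procedure or data from any certified phone.

```python
# Sketch of the two values that certification testing produces: a maximum
# SAR for use against the head, and a maximum SAR for body-worn use at a
# manufacturer-chosen separation distance of at most 2.5 cm. All numbers
# below are hypothetical.

MAX_SEPARATION_CM = 2.5  # manufacturer-chosen distance may never exceed this

def certification_summary(head_sar: float, body_sar: float,
                          separation_cm: float) -> dict:
    if not 0 <= separation_cm <= MAX_SEPARATION_CM:
        raise ValueError("separation distance must be between 0 and 2.5 cm")
    return {
        "max_head_sar_w_per_kg": head_sar,
        "max_body_sar_w_per_kg": body_sar,
        "body_test_separation_cm": separation_cm,
        # Use with no separation (e.g., phone in a pocket during a call)
        # falls outside the tested configurations unless the body test was
        # run at zero distance, which is the gap the report describes.
        "covers_zero_separation_use": separation_cm == 0,
    }

summary = certification_summary(head_sar=1.12, body_sar=1.45, separation_cm=1.5)
print(summary["covers_zero_separation_use"])  # → False
```

The point of the sketch is that a phone tested only at a nonzero separation distance has no certified value for the zero-distance case, even though its actual exposure in that case may still fall under the limit.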
Officials from IEEE, though, told us that the average power and resultant radiation level of mobile phones while in use is very low, such that even when a mobile phone is used against the body it is unlikely that the RF energy exposure would exceed the FCC limit. Nevertheless, FCC has not reassessed its testing requirements to ensure that mobile phones do not exceed the RF energy exposure limit in all possible usage conditions. Beyond the testing required for certification, FCC also ensures that mobile phones meet its RF energy exposure limit by reviewing information collected as part of routine surveillance of mobile phones on the market. FCC requires TCBs to carry out this post-market surveillance program, through which each TCB tests one percent of the mobile phones they have certified for RF energy exposure, to ensure that the phones continue to meet FCC’s RF energy exposure limit. According to FCC, no mobile phone tested under this surveillance program has been found in violation of the RF energy exposure limit. Federal agencies provide information to the public on the health effects of mobile phone use and related issues primarily through their websites. This information includes summaries of research, and agencies’ conclusions about the health effects of mobile phone use, as well as suggestions for how mobile phone users can reduce their exposure to RF energy. Table 2 summarizes selected information on mobile phones and health provided by six federal agencies on their websites. The types of information that federal agencies’ websites provide on mobile phone health effects and related issues vary, in part because of the agencies’ different missions, though the websites provide a broadly consistent message. For instance, NIH primarily provides information about the research on health effects of RF energy exposure from mobile phone use, while FCC provides information on how mobile phones are tested and certified. 
Nevertheless, the concluding statements about whether RF energy exposure from mobile phone use poses a risk to human health are generally consistent across selected federal agencies’ websites that we reviewed, though the specific wording of these concluding statements varies. Representatives from some consumer groups and experts we spoke with raised concerns that the information on federal agency websites about mobile phone health effects is not precautionary enough, among other things. In particular, these representatives and experts said that federal agencies should include stronger precautionary information about mobile phones because of the uncertain state of scientific research on mobile phone health effects as well as the fact that current testing requirements may not identify the maximum possible RF energy exposure. Representatives from one consumer group also said that federal agency websites should provide more consumer information, such as the impact of different mobile phone technologies on RF energy exposure. Officials from FCC and NIH maintained that the information on their websites reflects the latest scientific evidence and provides sufficient information for consumers concerned about potential health effects related to mobile phones. Some consumer groups noted that they would like FCC to mention IARC’s recent classification of RF energy exposure as “possibly carcinogenic” on FCC’s website. FCC noted that it generally defers to the health and safety agencies for reporting on new research, though FCC’s website did include information on the recent INTERPHONE study when we reviewed the site in June 2012. FCC does provide links to CDC, EPA, FDA, and other websites, some of which have information about the IARC’s classification. FDA notes on its website that the IARC classification means there is limited evidence showing RF carcinogenicity in humans and insufficient evidence of carcinogenicity in experimental animals. 
Some local governments are taking steps to provide precautionary information to consumers. For example, the city of San Francisco has developed a Web page on mobile phone health issues, including steps to reduce RF energy exposure from mobile phone use, and has passed an ordinance requiring local mobile phone retailers to distribute a flyer on ways that consumers can reduce their exposure. The mobile phone industry provides information to consumers on the health effects of mobile phone use and related issues through user manuals and websites. The information provided in user manuals by manufacturers is voluntary, as there are no federal requirements that manufacturers provide any specific information to consumers about the health effects of mobile phone use. Most manuals we reviewed provide information about how the device was tested and certified, as well as the highest energy exposure measurement associated with the device. Some manufacturers also provide suggestions, often based on information from FDA, to consumers about how to minimize their exposure, among other things. All manuals we reviewed, except one, include a statement that, when used on the body, as opposed to against the ear, a minimum distance between the body and the mobile phone should be maintained. These distances ranged from 1.5 to 2.5 centimeters. Since all mobile phones are tested for RF energy exposure compliance at a distance from the body, as discussed previously in this report, these instructions are consistent with how the devices were tested and certified by FCC. Some consumer groups and experts we spoke with noted that consumers could be unaware of these instructions if they do not read the entire user manual. FCC’s current RF energy exposure limit for mobile phones, established in 1996, may not reflect the latest evidence on the thermal effects of RF energy exposure and may impose additional costs on manufacturers and limitations on mobile phone design.
FCC regulates RF energy emitted from mobile phones and relies on federal health and safety agencies to help determine the appropriate RF energy exposure limit. However, FCC has not formally asked FDA or EPA for their assessment of the limit since 1996, during which time there have been significant improvements in RF energy research and therefore a better understanding of the thermal effects of RF energy exposure. This evidence has led to a new RF energy exposure limit recommendation from international organizations. Additionally, maintaining the current U.S. limit may result in additional costs for manufacturers and impact phone design in a way that could limit performance and functionality. Reassessing its current RF energy exposure limit would ensure that FCC’s limit protects the public from exposure to RF energy while allowing industry to provide telecommunications services in the most efficient and practical manner possible. The current testing requirements for mobile phones may not identify the maximum RF energy exposure when tested against the body. FCC testing requirements state that mobile phone tests should be conducted with belt-clips and holsters attached to the phone or at a predetermined distance from the body. These requirements were developed by FCC to identify the maximum RF energy exposure a user could experience when using a mobile phone, to ensure that the mobile phone meets the agency’s RF energy exposure limit. This limit was designed to ensure that mobile phones do not expose the public to levels of RF energy that could be potentially harmful. By testing mobile phones only when at a distance from the body, FCC may not be identifying the maximum exposure, since some users may hold a mobile phone directly against the body while in use. Using a mobile phone in this manner could result in RF energy exposure above the maximum body-worn SAR determined during testing, although that may not necessarily be in excess of FCC’s limit. 
Reassessing its testing requirements would allow FCC to ensure that phones used by consumers in the United States do not result in RF energy exposure in excess of FCC’s limit. We recommend that the Chairman of the FCC take the following two actions: Formally reassess the current RF energy exposure limit, including its effects on human health, the costs and benefits associated with keeping the current limit, and the opinions of relevant health and safety agencies, and change the limit if determined appropriate. Reassess whether mobile phone testing requirements result in the identification of maximum RF energy exposure in likely usage configurations, particularly when mobile phones are held against the body, and update testing requirements as appropriate. We provided a draft of this report to the Department of Commerce, Department of Defense, Department of Health and Human Services, Department of Labor, EPA, and FCC for review and comment. FCC provided comments in a letter from the Chief, Office of Engineering and Technology. (See app. III.) In this letter, FCC noted that FCC's staff has independently arrived at the same conclusions about the RF exposure guidelines as GAO. FCC also noted that a draft Order and Further Notice of Proposed Rulemaking, along with a new Notice of Inquiry, which has been submitted by FCC staff to the Commission for their consideration, has the potential to address the recommendations made in this report. We agree that FCC’s planned actions may address our recommendations. However, since FCC has not yet initiated a review of the RF energy exposure limit or mobile phone testing requirements, our recommendations are still relevant. FCC and the Departments of Commerce, Defense, and Health and Human Services also provided technical comments, which were incorporated as appropriate. The Department of Labor and EPA did not provide comments on the draft. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Chairman of the FCC, the Administrator of the EPA, as well as the Secretaries of the Departments of Commerce, Defense, Health and Human Services, and Labor. The report will also be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions or would like to discuss this work, please contact Mark Goldstein at (202) 512-2834 or goldsteinm@gao.gov or Marcia Crosse at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report are listed in appendix IV. To determine what is known about the human health effects of radio-frequency (RF) energy exposure from mobile phone use, we reviewed selected studies including studies and reports that review and assess the scientific research, such as meta-analyses and government reports, as well as key individual epidemiological and laboratory studies. We identified 384 studies that examine the health effects of RF energy emitted from mobile phone use through literature searches and interviews. We conducted literature searches in six online databases with health and engineering content—Embase, Inspec, Medline, National Technical Information Service Bibliographic, SciSearch, and SocialSciSearch—containing peer-reviewed publications and government reports to identify studies published from January 2006 through September 2011 using health-, mobile phone-, and RF energy-related search terms. Additionally, we interviewed officials from federal agencies and representatives of academic institutions, consumer groups, and industry associations to identify studies published through December 2011.
To select studies for our review, we conducted a preliminary review of the 384 studies and included those that met the following criteria: (1) reviewed and assessed the scientific research in a systematic way, such as meta-analyses, and discussed their methods for identifying, selecting, and assessing the scientific research that were used to draw conclusions or (2) were key reports that identify areas for additional research in these fields, such as the 2008 National Research Council’s Identification of Research Needs Relating to Potential Biological or Adverse Health Effects of Wireless Communication. We selected 38 studies that met these criteria. (See app. II for a list of the 38 studies we reviewed.) To collect information on the 38 selected studies, we developed a data collection instrument that contained 16 open- and closed-ended questions about the entity or entities that published and funded the study; the study methods, key findings, and limitations; and additional research needs. To apply this data collection instrument, one analyst reviewed each study and recorded information in the data collection instrument. A second analyst then reviewed each completed data collection instrument to verify the accuracy of the information recorded. We summarized the findings and limitations of studies based on the completed data collection instruments, as well as areas for additional research identified in the studies. Additionally, we used this analysis to identify key, individual, epidemiological and laboratory studies. We also interviewed subject matter experts to determine what is known about the human health effects of RF energy exposure from mobile phone use. 
First, we identified 123 potential subject matter experts to interview through the following sources: (1) interviews with officials from federal agencies and representatives of academic institutions, consumer groups, and industry associations and (2) participant lists of recent expert panels and workgroups on this topic. These panels and workgroups included the National Research Council’s Committee on Identification of Research Needs Relating to Potential Biological or Adverse Health Effects of Wireless Communications Devices; the International Agency for Research on Cancer’s (IARC) Monograph Working Group on RF electromagnetic fields (see Baan, R., et al., “Carcinogenicity of Radiofrequency Electromagnetic Fields,” Lancet Oncology, 2011, 12(7): 624-626); the INTERPHONE Study Group; and the European Commission’s Scientific Committee on Emerging and Newly Identified Health Risks (see European Commission, Health Effects of Exposure to EMF, 2009). The INTERPHONE study is a retrospective case-control study that examined effects of mobile phone use on certain types of brain cancers or tumors in more than 5,000 cases aged 30-59 years in 13 countries; see Cardis, E., et al., “Brain Tumor Risk in Relation to Mobile Telephone Use: Results of the INTERPHONE International Case-Control Study,” International Journal of Epidemiology, 2010, 39: 675-694.

We then contacted experts who (1) were identified through at least one source and we had information on their general area of expertise or (2) were identified through at least two sources regardless of whether we had information on their general area of expertise. We received responses from 42 experts agreeing to help us with our study. Based on these responses, we selected a judgmental sample of 11 experts who represented a range of expertise and professional backgrounds, including public health and policy; biology and medicine; biostatistics; epidemiology; engineering, including bioelectrical engineering; and RF energy standards. (See table 3 for the list of individuals interviewed.)
These experts were interviewed as individuals, not as representatives of any institution. Further, all of the experts completed a form stating that they had no conflicts of interest that would affect their ability to provide us with their perspectives on what is known about the human health effects of RF energy exposure from mobile phone use and related issues.

To determine the current research activities of federal agencies and other organizations related to mobile phone use and health, we interviewed representatives from various agencies and organizations, which we identified by reviewing information on their websites on RF energy and by interviewing officials from federal agencies and representatives of organizations familiar with research on the health effects of mobile phone use. For federal agencies, we interviewed officials from the Department of Defense; the Department of Health and Human Services’ Centers for Disease Control and Prevention (CDC), Food and Drug Administration (FDA), and National Institutes of Health (NIH); the Department of Labor’s Occupational Safety and Health Administration (OSHA); the Environmental Protection Agency (EPA); and the Federal Communications Commission (FCC). For other organizations, we interviewed representatives from IARC, academic institutions, consumer groups, mobile phone industry associations, mobile phone manufacturers, and mobile phone providers.

To determine how FCC set the RF energy exposure limit and ensures compliance with it, we reviewed and summarized FCC regulations and guidance as well as reports from international organizations that recommend RF energy exposure limits. We also reviewed and summarized FCC testing and certification regulations and guidance for mobile phones.
We conducted interviews with officials from FCC and representatives from selected Telecommunication Certification Bodies (TCBs). We selected the four TCBs that approved the most mobile phone certification applications for fiscal years 2000-2011, according to FCC: PCTEST Engineering Laboratory, Inc.; ACB, Inc.; CETECOM ICT Services GmbH; and Timco Engineering, Inc. These four TCBs have approved 69 percent of all U.S. mobile phone applications since 2000. We interviewed representatives from the National Institute of Standards and Technology, the American National Standards Institute, and the American Association for Laboratory Accreditation to discuss their roles in accrediting entities as TCBs and monitoring the activities of current TCBs. We also conducted interviews with representatives of the mobile phone industry and consumer groups for their perspectives on RF energy exposure limits as well as the testing and certification of mobile phones. Representatives of the mobile phone industry we spoke with included industry associations (CTIA-The Wireless Association and the Mobile Manufacturers Forum) as well as the top four mobile phone service providers (AT&T, Sprint, T-Mobile, and Verizon), which represent about 90 percent of U.S. mobile phone service subscribers. We also spoke with representatives from four mobile phone manufacturers that represent over 70 percent of the U.S. market (LG, Motorola, Nokia, and Samsung).

To determine the actions federal agencies and the industry take to inform the public about issues related to mobile phone health effects, we reviewed the information on federal agency websites. We identified six federal agencies that have information about mobile phones and health-related issues on their websites: CDC, EPA, FCC, FDA, NIH, and OSHA. We conducted interviews with officials from those federal agencies to learn how they developed and update their websites.
We spoke with representatives of the mobile phone industry noted above and consumer groups to obtain perspectives on the strengths and limitations of federal agency public-information-sharing efforts. We also spoke with the representatives of the mobile phone industry about how and why manufacturers include warnings or specific usage guidelines in their user manuals. Finally, we reviewed the user manuals of selected mobile phones (see table 4) to identify the usage and health information being provided to consumers, including any instructions to hold the mobile phone away from the body during use. The specific mobile phone models were identified by the manufacturers we spoke with as their top-selling models in 2011.

Ahlbom, Anders, Maria Feychting, Adele Green, Leeka Kheifets, David A. Savitz, and Anthony J. Swerdlow. “Epidemiological Evidence on Mobile Phones and Tumor Risk: A Review.” Epidemiology, vol. 20, no. 5 (2009): 639-652.

Balbani, Aracy Pereira Silveira, and Jair Cortez Montovani. “Mobile Phones: Influence on Auditory and Vestibular Systems.” Brazilian Journal of Otorhinolaryngology, vol. 74, no. 1 (2008): 125-131.

Clapp, Richard W., Molly M. Jacobs, and Edward L. Loechler. “Environmental and Occupational Causes of Cancer: New Evidence 2005-2007.” Reviews on Environmental Health, vol. 23, no. 1 (2008): 1-37.

Committee on Man and Radiation. “COMAR Technical Information Statement: Expert Reviews on Potential Health Effects of Radiofrequency Electromagnetic Fields and Comments on the Bioinitiative Report.” Health Physics, vol. 97, no. 4 (2009): 348-356.

Edumed Institute for Medicine and Health. Non-Ionizing Electromagnetic Radiation in the Radiofrequency Spectrum and its Effects on Human Health. Latin American Experts Committee on High Frequency Electromagnetic Fields and Human Health. June 2010.

European Health Risk Assessment Network on Electromagnetic Fields Exposure. “Risk Analysis of Human Exposure to Electromagnetic Fields.” Deliverable Report D2, Executive Agency for Health and Consumers Framework of the Programme of Community Action in the Field of Health 2008-2013. July 2010.

European Health Risk Assessment Network on Electromagnetic Fields Exposure. “D3 – Report on the analysis of risks associated to exposure to EMF: in vitro and in vivo (animals) studies.” July 2010.

French Environmental Health and Safety Agency. “AFSSE Statement on Mobile Phones and Health.” AFSSE. April 16, 2003.

German Mobile Telecommunication Research Programme. “Health Risk Assessment of Mobile Communications.” Department Radiation Protection and Health. Germany: 2008.

Habash, Riadh W.Y., J. Mark Elwood, Daniel Krewski, W. Gregory Lotz, James P. McNamee, and Frank S. Prato. “Recent Advances in Research on Radiofrequency Fields and Health: 2004-2007.” Journal of Toxicology and Environmental Health, Part B, vol. 12 (2009): 250-288.

Han, Yueh-Ying, Hideyuki Kano, Devra L. Davis, Ajay Niranjan, and L. Dade Lunsford. “Cell Phone Use and Acoustic Neuroma: The Need for Standardized Questionnaires and Access to Industry Data.” Surgical Neurology, vol. 72 (2009): 216-222.

Health Council of the Netherlands. “Electromagnetic Fields: Annual Update 2008.” The Hague: Health Council of the Netherlands, 2008; publication no. 2009/02.

HERMO. Health Risk Assessment of Mobile Communications. A Finnish Research Programme. Finland: 2007.

Institution of Engineering and Technology. “The Possible Harmful Biological Effects of Low-Level Electromagnetic Fields of Frequencies up to 300 GHz.” 2010 Position Statement, Institution of Engineering and Technology. United Kingdom: 2010.

International Commission on Non-Ionizing Radiation Protection. Exposure to High Frequency Electromagnetic Fields, Biological Effects and Health Consequences (100 kHz-300 GHz). Germany: 2009.

Juutilainen, Jukka, Anne Höytö, Timo Kumlin, and Jonne Naarala. “Review of Possible Modulation-Dependent Biological Effects of Radiofrequency Fields.” Bioelectromagnetics, vol. 35 (2011): 511-534.

Khurana, Vini G., Charles Teo, Michael Kundi, Lennart Hardell, and Michael Carlberg. “Cell Phones and Brain Tumors: A Review Including the Long-Term Epidemiological Data.” Surgical Neurology, vol. 72 (2009): 205-215.

Kohli, D., A. Sachdev, and H. Vats. “Cell Phones and Tumor: Still in No Man’s Land.” Indian Journal of Cancer, vol. 46, no. 1 (2009): 5-12.

Kundi, Michael. “The Controversy About a Possible Relationship Between Mobile Phone Use and Cancer.” Ciencia & Saude Coletiva, vol. 15, no. 5 (2010): 2415-2430.

Levis, Angelo G., Nadia Minicuci, Paolo Ricci, Valerio Gennaro, and Spiridione Garbisa. “Mobile Phones and Head Tumours. The Discrepancies in Cause-Effect Relationships in the Epidemiological Studies – How Do They Arise?” Environmental Health, vol. 10, no. 59 (2011): 1-15.

Marino, Andrew A., and Simona Carrubba. “The Effects of Mobile-Phone Electromagnetic Fields on Brain Electrical Activity: A Critical Analysis of the Literature.” Electromagnetic Biology and Medicine, vol. 28 (2009): 250-274.

McKinlay, A.F., S.G. Allen, R. Cox, P.J. Dimbylow, S.M. Mann, C.R. Muirhead, R.D. Saunders, Z.J. Sienkiewicz, J.W. Stather, and P.R. Wainwright. “Advice on Limiting Exposure to Electromagnetic Fields (0-300 GHz).” Documents of the NRPB, vol. 15, no. 2. National Radiological Protection Board, 2004.

Mobile Telecommunications and Health Research Programme. “Report 2007.” United Kingdom: October 2007.

Myung, Seung-Kwon, Woong Ju, Diana D. McDonnell, Yeon Ji Lee, Gene Kazinets, Chih-Tao Cheng, and Joel M. Moskowitz. “Mobile Phone Use and Risk of Tumors: A Meta-Analysis.” Journal of Clinical Oncology, vol. 27, no. 33 (2009): 5565-5572.

National Council on Radiation Protection and Measurements. “Biological Effects of Modulated Radiofrequency Fields.” NCRP Commentary, no. 18. Bethesda, MD: 2003.

National Research Council. Identification of Research Needs Relating to Potential Biological or Adverse Health Effects of Wireless Communication. Washington, D.C.: 2008.

Nieden, Anja zur, Corrina Dietz, Thomas Eikmann, Jürgen Kiefer, and Caroline E.W. Herr. “Physicians’ Appeals on the Dangers of Mobile Communication – What Is the Evidence? Assessment of Public Health Data.” International Journal of Hygiene and Environmental Health, vol. 212 (2009): 576-587.

Pourlis, Aris F. “Reproductive and Developmental Effects of EMF in Vertebrate Animal Models.” Pathophysiology, vol. 16 (2009): 179-189.

Regel, Sabine J., and Peter Achermann. “Cognitive Performance Measures in Bioelectromagnetic Research – Critical Evaluation and Recommendations.” Environmental Health, vol. 10, no. 10 (2011).

Röösli, Martin, and Kerstin Hug. “Wireless Communication Fields and Non-Specific Symptoms of Ill Health: A Literature Review.” Wiener Medizinische Wochenschrift, vol. 161, no. 9-10 (2011): 240-250.

Scientific Committee on Emerging and Newly Identified Health Risks. Health Effects of Exposure to EMF. European Commission. January 19, 2009.

Sixth Framework Programme. “EMF-NET: Effects of the Exposure to Electromagnetic Fields: From Science to Public Health and Safer Workplace.” WP2.2 Deliverable Report D4bis: Effects on Reproduction and Development. Italy: 2007.

Swedish Radiation Protection Authority. Recent Research on EMF and Health Risks: Fifth Annual Report from SSI’s Independent Expert Group on Electromagnetic Fields, 2007. Sweden: 2008.

Swedish Radiation Safety Authority. Recent Research on EMF and Health Risk: Seventh Annual Report from SSM’s Independent Expert Group on Electromagnetic Fields, 2010. Sweden: 2010.

Valentini, E., G. Curcio, F. Moroni, M. Ferrara, L. De Gennaro, and M. Bertini. “Neurophysiological Effects of Mobile Phone Electromagnetic Fields on Humans: A Comprehensive Review.” Bioelectromagnetics, vol. 28 (2007): 415-432.

Vanderstraeten, Jacques, and Luc Verschaeve. “Gene and Protein Expression Following Exposure to Radiofrequency Fields from Mobile Phones.” Environmental Health Perspectives, vol. 116, no. 9 (2008): 1131-1135.

Vijayalaxmi, and Thomas J. Prihoda. “Genetic Damage in Mammalian Somatic Cells Exposed to Radiofrequency Radiation: A Meta-Analysis of Data from 63 Publications (1990-2005).” Radiation Research, vol. 169 (2008): 561-574.

World Health Organization. WHO Research Agenda for Radiofrequency Fields. Switzerland: 2010.

In addition to the contacts named above, Janina Austin and Teresa Spisak, Assistant Directors, as well as Kyle Browning, Owen Bruce, Marquita Campbell, Leia Dickerson, Kristin Ekelund, Lorraine Ettaro, Colin Fallon, David Hooper, Rosa Leung, and Maria Stattel made key contributions to this report.
The rapid adoption of mobile phones has occurred amidst controversy over whether the technology poses a risk to human health as a result of long-term exposure to RF energy from mobile phone use. FCC and FDA share regulatory responsibilities for mobile phones. GAO was asked to examine several issues related to mobile phone health effects and regulation. Specifically, this report addresses (1) what is known about the health effects of RF energy from mobile phones and what the current research activities are, (2) how FCC set the RF energy exposure limit for mobile phones, and (3) federal agency and industry actions to inform the public about health issues related to mobile phones, among other things. GAO reviewed scientific research; interviewed experts in fields such as public health and engineering, officials from federal agencies, and representatives of academic institutions, consumer groups, and the mobile phone industry; reviewed mobile phone testing and certification regulations and guidance; and reviewed relevant federal agency websites and mobile phone user manuals.

Scientific research to date has not demonstrated adverse human health effects of exposure to radio-frequency (RF) energy from mobile phone use, but ongoing research may increase understanding of any possible effects. Officials from the Food and Drug Administration (FDA) and the National Institutes of Health (NIH), as well as experts GAO interviewed, have reached similar conclusions about the scientific research. Ongoing research examining the health effects of RF energy exposure is funded and supported by federal agencies, international organizations, and the mobile phone industry. NIH is the only federal agency GAO interviewed that directly funds studies in this area, but other agencies support research under way by collaborating with NIH or other organizations to conduct studies and identify areas for additional research.
The Federal Communications Commission’s (FCC) RF energy exposure limit may not reflect the latest research, and testing requirements may not identify maximum exposure in all possible usage conditions. FCC set an RF energy exposure limit for mobile phones in 1996, based on recommendations from federal health and safety agencies and international organizations. These international organizations have updated their exposure limit recommendation in recent years based on new research, and the new limit has been widely adopted by other countries, including countries in the European Union. This new recommended limit could allow for more RF energy exposure, but actual exposure depends on a number of factors, including how the phone is held during use. FCC has not adopted the new recommended limit. The Office of Management and Budget’s instructions to federal agencies require the adoption of consensus standards when possible. FCC told GAO that it relies on the guidance of federal health and safety agencies when determining the RF energy exposure limit, and to date, none of these agencies have advised FCC to change the limit. However, FCC has not formally asked these agencies for a reassessment. By not formally reassessing its current limit, FCC cannot ensure it is using a limit that reflects the latest research on RF energy exposure. FCC has also not reassessed its testing requirements to ensure that they identify the maximum RF energy exposure a user could experience. Some consumers may use mobile phones against the body, a configuration FCC does not currently test for and one that could result in RF energy exposure higher than the FCC limit.

Federal agencies and the mobile phone industry provide information on the health effects of mobile phone use and related issues to the public through their websites and mobile phone manuals.
The types of information provided via federal agencies’ websites on mobile phone health effects and related issues vary, in part because of the agencies’ different missions, although the agencies provide a broadly consistent message. Members of the mobile phone industry voluntarily provide information on their websites and in mobile phone user manuals. There are no federal requirements that manufacturers provide information to consumers about the health effects of mobile phone use.

FCC should formally reassess and, if appropriate, change its current RF energy exposure limit and mobile phone testing requirements related to likely usage configurations, particularly when phones are held against the body. FCC noted that a draft document currently under consideration by FCC has the potential to address GAO’s recommendations.
In 1964, Congress passed the Urban Mass Transportation Act to provide financial assistance to states and local governments to extend and improve urban mass transportation systems beleaguered by rising costs and declining ridership. The provisions commonly called Section 13(c) were included to protect employees who might be adversely affected by industry changes resulting from financial assistance under the act. One specific concern was that if municipalities and other public entities used federal assistance to purchase failing private transportation providers, the employees could lose their jobs, collective bargaining rights, or other rights they had gained through collective bargaining. For example, prior to the passage of the act, transit employees in Dade County, Florida, lost their collective bargaining rights, and subsequent decisions regarding wages, hours, and working conditions were made unilaterally after their employer was acquired by a public transit authority. Another concern leading to Section 13(c) was that technological advances made with federal assistance would reduce the need for transit labor.

Section 13(c) is unusual in that two federal agencies administer it: DOT and DOL. Section 13(c) requires that DOL certify that fair and equitable labor protection arrangements are in place before DOT makes grants to transit applicants. Such labor protection arrangements are to provide for (1) the preservation of rights, privileges, and benefits under existing collective bargaining agreements; (2) continuation of collective bargaining rights; (3) protection of employees against a worsening of their positions with respect to their employment; (4) assurances of employment to employees of acquired mass transportation systems and priority of reemployment for employees terminated or laid off; and (5) paid training or retraining programs.
In carrying out its responsibilities, DOL ensures that the protective terms are in place through Section 13(c) arrangements, which are incorporated within a transit agency’s grant agreement with DOT. The DOL certification process begins when FTA forwards a grant application to DOL. DOL refers the grant application and recommended terms and conditions to the unions representing transit employees in the service area of the project and the transit agency applying for the grant. No referral is made when (1) employees in the service area are not represented by a union or (2) the grant is for routine replacement items. After DOL referral, the parties review the proposed terms and conditions and submit objections, if any. If no objection is submitted, DOL issues a final certification based on the terms and conditions recommended to the parties. If an objection is submitted, DOL considers its validity. If DOL determines that the objection is not valid, it issues a certification that is based on the recommended terms and conditions. If it determines that the objection is valid, and it cannot be resolved through a technical correction, the parties are provided an opportunity to resolve disputed matters through negotiations. If they are unable to do so, DOL makes a determination after considering the objections of the parties. If the terms and conditions applied by DOL are not acceptable to the grant applicant, it may choose not to accept federal transit assistance.

After the certification process, employees who believe they have been adversely affected as a result of federal transit assistance may file claims under the procedures set forth in the Section 13(c) arrangements certified by DOL.
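The certification sequence described above amounts to a series of decision points applied in a fixed order, which can be sketched as follows (an illustrative model only; the function name, parameters, and return strings are invented for this sketch and are not DOL terminology):

```python
def certify(grant, objections, dol_finds_valid, technical_fix_works,
            negotiation_resolves):
    """Illustrative model of the Section 13(c) certification flow.

    All parameters are hypothetical stand-ins for judgments made by
    DOL and the parties; this is a sketch of the order of decisions,
    not DOL's actual procedure.
    """
    # No referral is made for routine replacement items or when
    # service-area employees are not represented by a union.
    if grant["routine_replacement"] or not grant["employees_unionized"]:
        return "certified on recommended terms (no referral)"

    # Parties review the proposed terms; no objection means DOL issues
    # a final certification on the recommended terms and conditions.
    if not objections:
        return "certified on recommended terms"

    # DOL weighs the validity of any objection submitted.
    if not dol_finds_valid:
        return "certified on recommended terms (objection rejected)"

    # A valid objection may be resolvable through a technical correction...
    if technical_fix_works:
        return "certified with technical correction"

    # ...otherwise the parties negotiate; failing that, DOL determines
    # the terms after considering the parties' objections.
    if negotiation_resolves:
        return "certified on negotiated terms"
    return "certified on terms determined by DOL after considering objections"
```

In every branch the outcome is still a certification of some form; the sketch reflects that the applicant's remaining choice, if it finds the final terms unacceptable, is simply to decline federal transit assistance.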
The procedures for filing and resolving Section 13(c) claims vary according to each agreement, but they typically set forth (1) a time period for filing claims; (2) an informal process under which the parties can resolve disputes over claims; and (3) a formal dispute resolution process, such as binding arbitration, in the event that an informal settlement is not reached.

The agencies we surveyed generally reported that Section 13(c) had a minimal impact on their labor costs. Critics have expressed concern that Section 13(c) hinders transit agencies’ ability to lower labor costs, which, according to the American Public Transportation Association, account for over 80 percent of the operating expenses in the public transportation industry. In addition, some critics have stated that Section 13(c) has caused inflated wages and benefits in the transit industry. However, 68 percent of the transit agencies that responded to our survey reported that, in general, Section 13(c) had no effect on their labor costs. Twenty-seven percent reported that Section 13(c) had somewhat increased their labor costs, and 4 percent reported that Section 13(c) had greatly increased labor costs. Moreover, a study by Rutgers University found that hourly transit wages for operators and mechanics rose very little in real terms and substantially less than average earnings per employee in other sectors of the economy from 1982 to 1997. When compared within metropolitan areas, transit wages (1) rose less than average earnings per employee in the manufacturing and government sectors, (2) were about the same as average earnings per employee in the transportation and public utilities sectors, and (3) rose much less than average earnings per employee in all other sectors of the economy. In addition, using data from 130 transit agencies, the Rutgers study found that mean top wages for transit bus operators hovered at roughly the same level over the 1982 to 1997 period. (See table 1.)
Section 13(c) could also affect a transit agency’s costs through Section 13(c) claims. Section 13(c) arrangements typically establish a process whereby employees adversely affected by federal assistance can file claims against transit agencies, for example, for a dismissal allowance when employees lose their jobs. Claims may be filed for an individual or for a group of employees, and claims filed in the last 5 years covered an average of 37 employees per claim. Eighty-seven percent of the transit agencies we surveyed reported that they have not had any Section 13(c) claims in the last 5 years. The remaining 12 agencies had an average of 3 claims filed during this period. Only eight of these agencies had Section 13(c) claims reach settlement, arbitration, or DOL decision. For those agencies, the average total amount paid per agency was $188,067.

Critics of Section 13(c) have asserted that it creates disincentives for transit agencies to examine and adopt innovative technologies. However, 85 percent of the transit agencies surveyed reported that, in general, Section 13(c) did not affect their decisions on whether to adopt new technologies. In addition, we asked the transit agencies whether Section 13(c) had influenced their decisions whether to adopt specific technologies, including automatic passenger counters and electronic fare payment systems. For each technology we identified, few or none of the transit agencies surveyed indicated that Section 13(c) had influenced their decisions on whether to adopt the technology. For example, of the 65 agencies that had considered adopting onboard electronic security monitors, 4 indicated that Section 13(c) influenced their decisions whether to adopt that technology. Only 1 of the 30 agencies that had considered using articulated buses indicated that Section 13(c) influenced their decisions whether to adopt that technology.
Of the 71 agencies that considered using a global positioning system, 2 indicated that Section 13(c) influenced their decision whether to adopt that technology; however, officials who identified Section 13(c) as influencing their decisions did not offer any explanation for why Section 13(c) proved problematic in these cases. (See fig. 1.) In addition, some officials suggested that the impact of Section 13(c) on transit agencies’ decisions concerning technology may be limited because many transit systems are experiencing growth. According to the American Public Transportation Association (APTA), over the past 5 years, transit ridership has grown 21 percent. Consequently, transit agencies may be able to adopt labor-saving technologies without dismissing or displacing employees. Of the transit agencies we surveyed, 10 percent reported that they have fewer employees now than 5 years ago, and 84 percent reported that they have more employees now than 5 years ago. The remaining transit agencies reported no change in the number of employees.

The transit agencies we surveyed generally reported that Section 13(c) had a minimal impact on some selected areas of their operations, including their decisions to modify transit operations and their relations with their unions. However, the agencies reported a greater impact due to Section 13(c) on their ability to contract for transit services. Critics of Section 13(c) have stated that it creates disincentives for transit agencies to modify their operations. However, transit agencies generally reported that Section 13(c) did not influence their decisions in this area. We asked transit agencies to identify which of nine operational areas they had considered modifying and whether Section 13(c) had influenced their decisions on whether to implement changes. When agencies indicated that they had considered changes in their transit operations, on average 81 percent of those decisions were not influenced by Section 13(c). (See fig. 2.)
The majority of transit agencies we surveyed also reported that Section 13(c) generally did not affect their relations with the unions that represented their employees. For example, when asked whether Section 13(c) had caused their relations with the unions to become more amicable or more contentious, 63 percent of the agencies reported that Section 13(c) had not had any effect. Thirty-four percent of the agencies reported that Section 13(c) had made relations with the unions more contentious, and the remaining 3 percent reported that Section 13(c) had made their labor relations more amicable. In addition, 85 percent of the transit agencies we surveyed reported that they would be required to engage in collective bargaining independent of Section 13(c) and its requirements to continue collective bargaining rights. Some collective bargaining agreements contain provisions similar to those found in Section 13(c) arrangements and thus make isolating the impact of Section 13(c) difficult.

Although Section 13(c) had a minimal impact on most areas of transit operations we identified, many transit agencies we surveyed indicated that it had affected their ability to contract for fixed-route transit services. For example, 46 percent indicated that Section 13(c) made it somewhat or much more difficult to contract out for fixed-route services. In contrast, 17 percent of the transit agencies indicated that Section 13(c) made it somewhat or much more difficult to contract out for paratransit services. A transit official we interviewed explained this difference by noting that employees represented by labor unions have historically operated fixed-route service, but paratransit services have historically been contracted out; thus, the continuation of contracting out paratransit services does not pose a problem.
In addition, some transit industry officials reported that although provisions of Section 13(c) arrangements may directly limit contracting out for services, more often agencies are discouraged from contracting out because of their perception that such action will cause problems, such as Section 13(c) claims or delays in the receipt of grants.

Transit agencies have argued that Section 13(c) causes delays in application processing and the award of federal grants. As noted in our August 2000 report, when transit applications are not processed in a timely manner, transit benefits are delayed. In addition, a lack of predictability and consistency in processing times can make planning and project execution difficult for transit agencies. As we detailed previously, 93 percent of DOL’s applications processed from January 1996 through April 2000 met DOL’s internal 60-day processing goal. As we noted in the August 2000 report, because of inconsistencies in DOL and FTA databases, we were unable to determine whether Section 13(c) labor certification requirements delayed the award of transit grants. However, 57 percent of the transit agencies we surveyed indicated that Section 13(c) had caused such delays.

Although a majority of the transit agencies indicated that Section 13(c) had caused delays, the transit agencies were generally satisfied with the processing of federal transit grant applications. Forty-eight percent of the transit agencies were either somewhat or very satisfied with the timeliness of FTA’s grant processing, while 24 percent were either somewhat or very dissatisfied. The remaining agencies were neither satisfied nor dissatisfied. Forty-two percent were either somewhat or very satisfied with the timeliness of DOL’s grant processing, and 29 percent were either somewhat or very dissatisfied. The remaining agencies were neither satisfied nor dissatisfied.
Some of the transit agencies we surveyed indicated that Section 13(c) requirements for receiving financial assistance were a burden in terms of time, effort, and resources. However, more agencies identified other requirements as burdensome. All of the transit agencies indicated that FTA and DOL could undertake some actions to ease the burden of fulfilling Section 13(c) requirements, such as providing information on best practices in transit agencies and providing information about Section 13(c) on FTA and DOL Web sites. DOL has advised us that compliance information on the Section 13(c) program is included on its Web site, as well as on the FTA Web site. The transit agencies were presented with a list of 10 different federal requirements for receiving federal transit assistance and asked to indicate how easy or difficult it was to fulfill those requirements in terms of time, effort, and resources. Thirty percent of the transit agencies we surveyed indicated that fulfilling Section 13(c) requirements was either somewhat or very difficult. Fifty-six percent indicated that fulfilling Section 13(c) requirements was neither easy nor difficult, and 14 percent indicated that fulfilling the requirements was somewhat or very easy. More transit agencies we surveyed indicated that other federal requirements were burdensome. For example, 79 percent of the transit agencies indicated that complying with Americans with Disabilities Act requirements was somewhat or very difficult. Seventy-four percent of the transit agencies indicated that fulfilling Disadvantaged Business Enterprise program requirements was somewhat or very difficult. Figure 3 shows the percentage of agencies that indicated that the requirements were difficult to fulfill. 
In the returned questionnaires, we observed that transit agencies’ responses concerning difficulties with contracting, delays in the receipt of federal grants, or fulfilling Section 13(c) requirements did not show any pattern regarding agency size, structure, or location. Our survey respondents were provided a list of actions that FTA and DOL could undertake to help transit agencies with Section 13(c), and agencies were asked to indicate whether the actions would be useful or not useful. More than 50 percent of the transit agencies indicated that each of the nine actions listed would be definitely or probably useful. For example, 86 percent of the transit agencies we surveyed indicated that it would definitely or probably be useful if FTA were to provide information on delays in application processing. Similarly, 85 percent of the transit agencies we surveyed indicated that it would be definitely or probably useful if DOL were to provide reasons for delays in processing an application. Eighty percent indicated that it would be definitely or probably useful if DOL and FTA were to provide information about Section 13(c) on their Web sites. Figure 4 shows the percentage of agencies that indicated that the actions would be useful. The transit agencies we surveyed generally reported that Section 13(c) has had a minimal impact on labor costs, adoption of technologies, and operations. However, a notable number of transit agencies reported that Section 13(c) has discouraged them from contracting for fixed-route transit services and has delayed their receipt of federal grants. In addition, although 30 percent of the transit agencies indicated that Section 13(c) is a burden on their time, efforts, and resources, more transit agencies indicated that certain other federal transit requirements were burdensome. 
Two factors are relevant to understanding the impact of Section 13(c) on transit agencies. First, 85 percent of the transit agencies we surveyed reported that they would be required to engage in collective bargaining independent of Section 13(c) and its requirements to continue collective bargaining rights. Some collective bargaining agreements contain provisions similar to those found in Section 13(c) arrangements and thus make isolating the impact of Section 13(c) difficult. Second, 84 percent of the transit agencies reported that they have more employees now than 5 years ago. Officials we interviewed suggested that the growth of many transit agencies has reduced or eliminated the need to dismiss or displace employees when making technological or operational changes, thus potentially reducing the concern over the implications of such changes under Section 13(c). Finally, the transit agencies indicated that some actions FTA and DOL could take, such as providing information about Section 13(c) on their Web sites and providing additional information about processing delays, would be helpful in fulfilling Section 13(c) requirements. We provided a draft of this report to the Secretary of Transportation and the Secretary of Labor. Neither agency had substantive comments; however, both provided technical comments that we incorporated into this report as appropriate. To determine the impact of Section 13(c), we reviewed relevant studies, interviewed federal agency and union officials, and surveyed the 105 largest transit agencies. 
To obtain background information on Section 13(c), we reviewed the legislative history of the Urban Mass Transportation Act of 1964 and interviewed officials at the APTA, FTA, DOL’s Employment Standards Administration, the Amalgamated Transit Union, and the Transportation Workers Union. These officials shared their views on the costs and benefits of Section 13(c) as well as key information on the Section 13(c) certification process, the characteristics of transit agencies most likely to be affected by Section 13(c), and the history of Section 13(c). To obtain the list of transit agencies to survey, we analyzed data from FTA’s National Transit Database (NTD). From our interviews, we determined that larger transit authorities were more likely to have had relevant and reportable experiences with Section 13(c). First, larger transit agencies generally receive more federal financial assistance than smaller agencies. Second, the officials we interviewed reported that larger agencies were more likely to have employees represented by unions. Finally, DOL has simplified certification requirements for transit authorities not located in urbanized areas. Consequently, we requested that FTA officials provide us with a list of all transit providers that serve populations greater than 200,000 and that annually operate 100 or more revenue vehicles in maximum service. In commenting on a draft of this report, FTA noted that smaller and nontraditional grantees that were not included in our list may also experience some difficulties in complying with Section 13(c). We used the NTD, Internet searches, and telephone calls to exclude the following from our initial list: (1) transit agencies operating as subsidiaries of other transit agencies on our list, (2) transit agencies not receiving federal financial assistance, and (3) private organizations that provide purchased transit services to transit agencies already on our list. 
We also added to our list transit agencies that met our criteria but were not included in the list provided by FTA because they had not filed with the NTD. Our final mailing list contained 105 transit agencies. After we developed the list of 105 transit agencies to survey and developed a preliminary questionnaire, we pretested the survey with officials from nine transit agencies. The pretest participants were selected from transit agencies of different sizes operating in a variety of geographic areas. During the pretesting, we simulated the actual survey experience by asking the transit agency officials to complete the survey. We then interviewed the officials after they had completed the survey to ensure that (1) the questions were understandable and clear, (2) the terms used were precise, (3) the survey did not place an undue burden on agency officials, and (4) the survey was unbiased. On the basis of the pretesting, we incorporated appropriate changes into the final questionnaire. After mailing the questionnaire in April 2001, we sent three additional reminders in order to increase our response rate. First, we sent a postcard 1 week after the survey. Second, we sent a follow-up letter and a replacement questionnaire to nonrespondents 1 month after the initial mailing. Finally, we sent E-mail messages and placed telephone calls to nonrespondents during June and July 2001. We received questionnaires from 92 transit agencies, for a response rate of 88 percent. We performed our review from December 2000 through October 2001 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this report. At that time, we will send copies to the Secretary of Transportation, the Secretary of Labor, and interested congressional committees. Copies will also be made available to others on request. 
If you have any questions about this report, please call me at (202) 512-2834 or contact me at heckerj@gao.gov. Major contributors to this report are listed in appendix II. In addition to those named above, the following staff members made key contributions to this report: Casey Brown, Helen Desaulniers, Curtis Groves, Lynn Musser, and Yvonne Pufahl.
Concerns have arisen about the 37-year-old statutory provision commonly known as Section 13(c). Before the Federal Transit Administration (FTA) may make grants to transit applicants, the Department of Labor (DOL) must certify that fair and equitable arrangements are in place to protect mass transit employees affected by the assistance. Section 13(c) requires that the arrangements provide for the continuation of collective bargaining rights and the protection of employees against a worsening of their positions. Once certified, the arrangements are incorporated into the grant agreement between FTA and the grantee. Critics claim that Section 13(c) greatly increases the cost of transit operations, hinders transit agencies' efforts to adopt new technology, and constrains the efficient operation of transit systems. Supporters counter that Section 13(c) has enhanced labor-management stability and has improved communication and working relationships between management and labor. The transit agencies GAO surveyed reported that Section 13(c) had a minimal impact on their (1) labor costs, (2) ability to adopt new technologies, and (3) ability to modify transit operations. Transit agencies reported that Section 13(c) has delayed the award of federal grants and has presented a burden in terms of time, effort, and resources. Transit officials said that growth in the transit industry may mitigate the effects of Section 13(c).
The FCS concept is designed to be part of the Army’s Future Force, which is intended to transform the Army into a more rapidly deployable and responsive force that differs substantially from the large division-centric structure of the past. The Army is reorganizing its current forces into modular brigade combat teams, each of which is expected to be highly survivable and the most lethal brigade-sized unit the Army has ever fielded. The Army expects FCS-equipped brigade combat teams to provide significant warfighting capabilities to the Department of Defense’s (DOD) overall joint military operations. Since being approved for development in 2003, the program has gone through several restructures and modifications. In 2004, the program reintroduced four systems that had been deferred, lengthened the development and production schedules, and instituted plans to spin out selected FCS technologies and systems to current Army forces throughout the program’s development phase. In 2006, the Army again deferred four systems, among other changes. In 2008, the Army altered its efforts to spin out capabilities to current forces from heavy brigade combat teams to infantry brigade combat teams. The FCS program began in May 2003 before the Army defined what the systems were going to be required to do and how they would interact. The Army moved ahead without determining whether the concept could be successfully developed with existing resources—without proven technologies, a stable design, and available funding and time. The Army projects the FCS program will cost $159 billion, not including all the costs to the Army, such as complementary programs. The Army is also using a unique partner-like arrangement with a lead system integrator (LSI), Boeing, to manage and produce the FCS. For these and other reasons, the FCS program is recognized as being high risk and requiring special oversight. 
Accordingly, in 2006, Congress mandated that DOD hold a milestone review following the FCS preliminary design review. Congress directed that the review include an assessment of whether (1) the needs are valid and can best be met with the FCS concept, (2) the FCS program can be developed within existing resources, and (3) the program should continue as currently structured, be restructured, or be terminated. Congress required the Secretary of Defense to assess the program against specific criteria, including the maturity of critical technologies, program risks, demonstrations of the FCS concept and software, and a cost estimate and affordability assessment, and to report on findings by the time of the milestone review. This statement is based on work we conducted between March 2008 and March 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Assessed against the criteria to be used for the milestone review, the FCS program has significant knowledge gaps. Specifically, the program has yet to show that critical technologies are mature, design issues have been resolved, requirements and resources are matched, performance has been demonstrated versus simulated, and costs are affordable. The Army will be challenged to convincingly demonstrate the knowledge necessary to warrant an unqualified commitment to FCS at the 2009 milestone review. 
While best practices and DOD policy preference are for each of a program’s critical technologies to achieve a technology readiness level (TRL) of 7 prior to entering development, the Army is struggling to achieve a TRL 6, the level required for the milestone review, after almost 6 years of development. Although the Army projects that TRL 6 will be achieved by the time of the review, the Army will be challenged to do so. Dates for several key demonstrations have slipped, and several ratings have yet to be validated by independent reviewers. Furthermore, the Army’s experience with maturing FCS technologies does not inspire confidence that it will be able to execute the fast-paced integration plans involved with advancing technologies to TRL 7 before the production decision in 2013. Design knowledge expected to be available at the time of the milestone review may not provide the necessary confidence that FCS design risks are at acceptable levels. The Army continues to set and refine requirements in order to establish system designs, particularly at the system level. Although the Army plans to have completed all system-level preliminary design reviews before the milestone review, the schedule to close out all the reviews may take more time than anticipated, key risk items will have to be addressed, and design trade-offs will be necessary. For example, the projected weight of the FCS manned ground vehicles has increased, which could have a number of effects on vehicle performance. In the coming months, the Army will have to address these and other design and requirements conflicts. It is important to note that DOD’s updated acquisition policy calls for holding preliminary design review at or near the time of the decision to begin development, which in the case of FCS was in 2003. The Army will be challenged to meet the congressional direction to demonstrate, rather than simulate, that the FCS warfighting concept will work by the time of the milestone review. 
At this time, limited demonstrations of select capabilities, including manned ground vehicles and software, have been conducted, but no meaningful demonstration that the FCS concept as a whole will work has been attempted. A thorough demonstration of the FCS network, the linchpin of the FCS concept, will not be attempted until 2012. There have been some demonstrations of early versions of the lightweight armor and an active protection system, but the feasibility of the FCS survivability concept remains uncertain. The Army is expected to update its cost estimate, currently $159 billion, for the milestone review. Last year, the Army indicated its notional plans to increase estimates by about $19 billion, but has not said whether it would have to trade off capabilities to accommodate the higher costs. The Army has also indicated its willingness to reduce funding to current force systems in favor of FCS. While the updated program cost estimate will be a better representation of actual costs than previous estimates, the program still has many risks and unprecedented challenges to meet, and thus, the estimate will likely change again as more knowledge is acquired. At the milestone review, DOD will have to evaluate at least three programmatic options to shape investments in combat systems for the Army, each of which presents challenges. The first involves the FCS program, which, as currently structured, has significant risks for execution. Second, the decision to produce spin out systems to current forces is expected to occur before full testing of production-representative prototypes. Third, the Army is considering altering the FCS strategy to follow an incremental approach, which is preferable to the current approach, but presents other challenges. The FCS acquisition strategy is unlikely to be executable within current cost and schedule projections, given the significant amount of development and demonstration yet to be completed. 
The timing of upcoming commitments to production funding puts decision makers in the difficult position of making production commitments without knowing if FCS will work as intended. Under the current acquisition strategy, FCS decisions are not knowledge-based, nor do they facilitate oversight. For example, the Army has scheduled only 2 years between the critical design review and the production decision in 2013, leaving little time to gain knowledge between the two events. As a result, FCS will rely on immature prototypes for making the decision to proceed into production. Also, if the program receives approval to proceed at the milestone review this year, the Army will have only 40 percent of its financial and schedule resources left to complete what is typically the most challenging and expensive development work ahead, as depicted in figure 1 below. Historical experience and recent independent cost estimates on FCS suggest that costs will grow beyond the Army’s estimates. Our previous work has shown the development costs for programs with mature technologies increased by a modest average of 4.8 percent over the first full estimate, whereas the development costs for programs with immature technologies increased by a much higher average of 34.9 percent. Similarly, program acquisition unit costs for the programs with the most mature technologies increased by less than 1 percent, whereas the programs that started development with immature technologies experienced an average program acquisition unit cost increase of nearly 27 percent over the first full estimate. Our work also showed that most development cost growth occurred after the critical design review. Specifically, of the 28.3 percent cost growth that weapon systems average in development, 19.7 percent occurs after the critical design review. Under the current strategy, the Army’s plans for funding core production efforts put congressional decision makers in a difficult position in a number of ways. 
Facilitization costs begin in fiscal year 2011, the budget for which will be presented to Congress in February 2010, several months after the milestone review and prior to the critical design review. In fact, there could still be action items from the preliminary design review to complete at that time. Further, when Congress is asked to approve funding for initial low-rate production of core FCS systems, the Army will not yet have proven that the FCS network and the program concept will work, a demonstration that is expected as part of Limited User Test 3 in 2012. This situation is illustrated further in figure 2 below. Significant production funds will also be spent on the Non-Line-of-Sight Cannon and spin out systems between now and the FCS core production decision in 2013. To meet congressionally required fielding dates, the Army began building Non-Line-of-Sight Cannon prototypes last year, but has encountered some setbacks due to development issues and delays. The vehicles are planned to be used as training assets and will not be fieldable systems. The Army is planning for a seamless transition between these prototypes and production of the core FCS systems, but given the financial investment from the Army and consequently, the energized industrial base, this could create pressure to proceed into core production prior to achieving a solid level of knowledge on which to move forward. Currently, the Army’s efforts to field spin out systems rely on a rushed schedule that calls for making production decisions before production-representative prototypes have clearly demonstrated a useful military capability. A shift in focus on the Army’s efforts to spin out capabilities to current forces from heavy brigade combat teams to infantry brigade combat teams resulted in moving the production decision from January 2009 to December 2009. 
However, only one key test has been conducted under the new structure, and this event was a shortened version of an event that was originally planned to focus on the heavy brigade combat team. Additionally, testing completed to date has involved surrogate or non-production representative forms of systems, and the three tests scheduled for this year will follow the same practice. Army officials have said that they are considering an incremental or block acquisition approach to FCS in order to mitigate risks in four major areas: (1) immaturity of requirements for system survivability, network capability, and information assurance; (2) limited availability of performance trade space to maintain program cost and schedule given current program risks; (3) program not funded to Cost Analysis Improvement Group estimates and effect of congressional budget cuts; and (4) continuing challenges in aligning schedules and expectations for multiple concurrent acquisitions. Restructuring the FCS program around an incremental approach has the potential to alleviate the risks inherent in the current strategy and is an opportunity to apply recent DOD policy updates, such as the creation of configuration steering boards, and provide decision-makers with more information before program commitments are made. On the other hand, an incremental approach entails its own oversight challenges. First, it presents decision makers with another FCS strategy to consider, possibly after the fiscal year 2010 budget is submitted. Second, the approach must ensure that each increment stands on its own and is not dependent on future increments. As DOD considers the current strategy, an incremental strategy, and its production commitments, it will also have to continue to pay close attention to the role being played by the FCS lead system integrator. We have previously reported that the role of the integrator posed oversight challenges. 
Since then, the Army has committed to using the integrator for initial production, potentially a larger role than initially envisioned. The 2009 milestone review is the most important decision on the Future Combat System since the program began in 2003. If the preliminary design reviews are successfully completed and critical technologies mature as planned in 2009, the FCS program will essentially be at a stage that statute and DOD policy would consider as being ready to start development. In this sense, the 2009 review will complete the evaluative process that began with the original 2003 milestone decision. Furthermore, when considering that the current estimate for FCS ranges from $159 billion to $200 billion when the potential increases to core program costs and estimated costs of spin outs are included, 90 percent or more of the investment in the program lies ahead. Even if a new, incremental approach to FCS is approved, a full milestone review that carries the responsibility of a go/no-go decision is still in order, along with attendant reports and analyses that are required inputs. In the meantime, a configuration steering board, as required by DOD policy, may help bridge the gaps between requirements and system designs and help in the timely completion of the FCS preliminary design reviews. There is no question that the Army needs to ensure its forces are well equipped. The Army has vigorously pursued FCS as the solution, a concept and an approach that is unconventional, yet with many good features. The difficulties and redirections experienced by the program should be seen as revealing its immaturity, rather than as the basis for criticism. However, at this point, enough time and money have been expended that the program should be evaluated at the 2009 milestone review based on what it has shown, not on what it could show. The Army should not pursue FCS at any cost, nor should it settle for whatever the FCS program produces under fixed resources. 
Rather, the program direction taken after the milestone review must strike a balance between near-term and long-term needs, realistic funding expectations, and a sound plan for execution. Regarding execution, the review represents an opportunity to ensure that the emerging investment program be put on the soundest possible footing by applying the best standards available, like those contained in DOD’s 2008 acquisition policy, and requiring clear demonstrations of the FCS concept and network before any commitment to production of core FCS systems. Any decision the Army makes to change the FCS program is likely to lag behind the congressional schedule for authorizing and appropriating fiscal year 2010 funds. Therefore, Congress needs to preserve its options for ensuring it has adequate knowledge on which to base funding decisions. Specifically, it does not seem reasonable to expect Congress to provide full fiscal year 2010 funding for the program before the milestone review is held nor production funding before system designs are stable and validated in testing. In our report released March 12, 2009, we raised several matters for congressional consideration. We suggested Congress consider restricting budget authority for fiscal year 2010 until DOD fully complies with the milestone review requirements and provides a complete budget justification package for any program that emerges. In addition, Congress could consider not approving production or long lead item funds for core FCS until after the critical design review is satisfactorily completed and demonstrations have provided confidence that the FCS system-of-systems operating with the communications network will be able to meet its requirements. 
We also made several recommendations to the Secretary of Defense including ensuring that the FCS program that emerges from the milestone review conform with current DOD acquisition policy and directing the Secretary of the Army to convene an FCS configuration steering board. We recommended that if an incremental approach is selected for FCS, the first increments should be justifiable on their own as worthwhile military capabilities that are not dependent on future capabilities for their value. We further recommended that spin out items be fully tested in production-representative form before they are approved for initial production. Finally, we recommended that the Secretary reassess the role of the lead system integrator, particularly with respect to any future role in production efforts. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you or members of the subcommittee may have. For future questions about this statement, please contact me at (202) 512-4841 or francisp@gao.gov. Individuals making key contributions to this statement include William R. Graveline, Assistant Director; William C. Allbritton; Noah B. Bleicher; Tana M. Davis; Marcus C. Ferguson; Carrie W. Rogers; and Robert S. Swierczek. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Future Combat System (FCS) program--which comprises 14 integrated weapon systems and an advanced information network--is the centerpiece of the Army's effort to transition to a lighter, more agile, and more capable combat force. The substantial technical challenges, the cost of the program, and the Army's acquisition strategy are among the reasons why the program is recognized as needing special oversight and review. This testimony is based on GAO's March 12, 2009 report and addresses knowledge gaps that will persist in the FCS program as Congress is asked to make significant funding commitments for development and production over the next several years. The Army will be challenged to demonstrate the knowledge needed to warrant an unqualified commitment to the FCS program at the 2009 milestone review. While the Army has made progress, knowledge deficiencies remain in key areas. Specifically, all critical technologies are not currently at a minimum acceptable level of maturity. Neither has it been demonstrated that emerging FCS system designs can meet specific requirements or mitigate associated technical risks. Actual demonstrations--versus modeling and simulation results--have been limited, with only small scale warfighting concepts and limited prototypes demonstrated. Network performance is also largely unproven. These deficiencies do not necessarily represent problems that could have been avoided; rather, they reflect the actual maturity of the program. Finally, there is an existing tension between program costs and available funds that will likely worsen, as FCS costs are likely to increase at the same time as competition for funds intensifies between near- and far-term needs in DOD and between DOD and other federal agencies. DOD could have at least three programmatic directions to consider for shaping investments in future capabilities, each of which presents challenges. 
First, the current FCS acquisition strategy is unlikely to be executable with remaining resources and calls for significant production commitments before designs are demonstrated. To date, FCS has spent about 60 percent of its development funds, even though the most expensive activities remain to be completed before the production decision. In February 2010, Congress will be asked to consider approving procurement funding for FCS core systems before most prototype deliveries, the critical design review, and key system tests have taken place. Second, the program to spin out early FCS capabilities to current forces operates on an aggressive schedule centered on a 2009 demonstration that will employ some surrogate systems and preliminary designs instead of fully developed items, with little time for evaluation of results. Third, the Army is currently considering an incremental FCS strategy--that is, to develop and field capabilities in stages versus in one step. Such an approach is generally preferable but would present decision makers with a third major change in FCS strategy to consider anew. While details are not yet available, it is important that each increment be justifiable by itself and not dependent on future increments.
The SSI program provides financial assistance to disabled people whose income and resources are below specified amounts. As of January 1996, about 2.6 million adults and 1.1 million children were receiving SSI disability benefits and an additional 1.1 million adults were receiving both SSI and DI benefits. Most SSI recipients qualify for Medicaid coverage, and 48 states and the District of Columbia also supplement federal SSI payments with state SSI benefits. In 1995, SSI disability recipients received a total of about $21 billion in federal SSI benefits and $2.6 billion in SSI state supplements. Reviewing recipients’ disability status, especially those most likely to improve, is an important component of good program management. Even though the SSI program was created to provide benefits to people who are severely disabled or terminally ill, some people do improve through treatment, surgery, or the passing of time. Amidst concerns about fraud, waste, and abuse in the SSI program, the Congress passed the Social Security Independence and Program Improvements Act of 1994, which required SSA to conduct CDRs on one-third of SSI recipients attaining age 18 and another 100,000 recipients in each of fiscal years 1996 through 1998. The 1996 amendments to the Social Security Act required that SSA conduct CDRs on all low-birth-weight babies within their first year of life and at least once every 3 years on all children under age 18 whose conditions are likely to improve. The amendments single out low-birth-weight babies because historically a relatively high percentage of these babies, about 40 percent, have had their benefits terminated after CDRs. 
The 1996 amendments replaced the requirement that SSA conduct CDRs on one-third of recipients attaining age 18 with the requirement that SSA redetermine disability eligibility using adult criteria for all recipients attaining age 18. These redeterminations differ from CDRs in that SSA bases decisions for disability eligibility redeterminations on whether recipients meet eligibility requirements; for CDRs, SSA bases eligibility decisions on whether recipients’ impairments have improved since the last determination. Since these disability eligibility redeterminations can be counted as CDRs on SSI recipients, this report examines SSA’s plans to conduct both of these types of reviews. In addition, this report focuses on CDRs of SSI-only recipients because provisions in the 1994 and 1996 laws apply only to recipients who are receiving disability benefits solely under the SSI program, and not under both the SSI and DI programs. Legislation has required CDRs of DI beneficiaries, including those also receiving SSI benefits, since 1980. The number of SSI CDRs required by the 1994 and 1996 legislation represents a large increase over the number conducted in previous years. In fact, the number of CDRs required in fiscal year 1996 alone exceeds the total of all SSI CDRs conducted in fiscal years 1991 through 1995 (see table II.1). According to SSA, it conducted few CDRs of SSI recipients in those years because the agency had limited resources and no legal requirement existed. However, because SSA had the authority to conduct SSI CDRs, SSA continued to schedule SSI recipients for CDRs and, as a result, about 1.9 million SSI adults and children are now due or overdue for CDRs. Tables II.2 through II.4 present selected characteristics, including age, impairment, and length of time receiving benefits, for the SSI population who were due or overdue for CDRs in fiscal year 1996. 
SSA administers the SSI program with the help of state agencies, called disability determination services (DDS). DDSs make disability determinations for SSA, process initial applications, assess recipients’ potential for medical improvement, and set due dates for and conduct CDRs. DDSs determine when recipients will be due for CDRs on the basis of their potential for medical improvement. On the basis of recipients’ impairments and ages, DDS officials classify individuals into one of three categories: medical improvement expected (MIE), medical improvement possible (MIP), or medical improvement not expected (MINE). Individuals are then scheduled for CDRs at 6- to 18-month intervals if classified as MIE, at least once every 3 years if classified as MIP, and once every 5 to 7 years if classified as MINE. In recent years, given limited resources for conducting CDRs and the large backlog of 2.4 million DI CDRs due or overdue, SSA developed new processes in an effort to conduct CDRs in a more cost-effective manner. SSA developed a mailer CDR process to obtain self-reported information on current medical conditions, treatments received, and work activities as a low-cost alternative to full medical CDRs. The full medical CDR process is labor-intensive and generally involves (1) 1 of 1,300 SSA field offices that determines whether disabled recipients continue to meet the financial eligibility requirement regarding income and resources and (2) 1 of 54 state DDSs that determines whether recipients continue to be disabled, which frequently involves medical exams by at least one doctor. The average cost of a full medical CDR is about $1,000, while the average cost of the mailer CDR is between about $25 and $50. 
In addition, on the basis of the outcomes of previously conducted CDRs on DI beneficiaries, SSA developed statistical formulas to estimate the likelihood of benefit termination as a result of CDRs using recipient characteristics, such as age, impairment, length of time on disability rolls, and previous CDR activity. SSA sends mailer CDRs to a portion of individuals who it estimates, on the basis of its formulas, have the lowest likelihood of benefit termination. Cases selected for mailer CDRs are later sent to DDSs for full medical CDRs only if responses to the mail questionnaire and information used in the formulas to estimate the likelihood of benefit termination warrant a more comprehensive review. For fiscal year 1996, SSA planned to conduct full medical CDRs for the legally required SSI CDRs and to test the mailer CDR process on over 100,000 additional SSI recipients. Table 1 presents, for fiscal year 1996, the number of CDRs SSA specified in planning documents and the number SSA had initiated and DDSs had completed as of June 1996. As of June, DDSs had completed about 60 percent of the required reviews in each category. SSA is currently modifying its CDR plans for fiscal years 1997 through 2002, which were developed before the enactment of new SSI CDR requirements under the Personal Responsibility and Work Opportunity Reconciliation Act of 1996. Under the new requirements, SSA must conduct at least 150,000 SSI CDRs in each of fiscal years 1997 and 1998, which includes disability eligibility redeterminations on 18-year-olds. Prior to the new requirements, SSA’s CDR plan specified more SSI CDRs in fiscal year 1997, but fewer than 150,000 SSI CDRs in fiscal year 1998. Estimates of the minimum number of SSI CDRs that will be required in later years were not available at the time our fieldwork was completed. However, SSA’s plan called for dramatic increases in the number of SSI CDRs in fiscal years 1999 through 2002, ranging from 367,000 to 625,000 per year. 
SSA plans to use CDR funds to conduct the legally required SSI CDRs and disability eligibility redeterminations in fiscal years 1996 through 2002. In fiscal year 1996, SSA for the first time set aside regular administrative funds for CDRs, and the Congress took steps to increase funding for CDRs. SSA set aside $200 million for both SSI and DI CDRs and plans to continue that level of funding at least through fiscal year 2002. In addition, the Contract With America Advancement Act of 1996 established a new funding mechanism for CDRs and authorized up to an additional $2.7 billion for SSI and DI CDRs through fiscal year 2002. This was about $1 billion less than the amount SSA had requested from the Congress and believed would be sufficient, along with regular administrative funds, to conduct CDRs on all SSI recipients who were due or overdue for review and all required DI CDRs through fiscal year 2002. The 1996 amendments to the Social Security Act subsequently authorized an additional $250 million for SSI CDRs and disability eligibility redeterminations in fiscal years 1997 and 1998. Combined, regular administrative funding for CDRs and the new budget authority could total over $4 billion in fiscal years 1996 through 2002. Competing priorities, such as conducting legally required DI CDRs, which include an enormous backlog of reviews, may pose challenges to conducting all required SSI CDRs in fiscal years 1997 through 2002. Furthermore, the same DDS staff who conduct SSI CDRs also conduct DI CDRs and process initial applications and other reviews of disability eligibility required by law. Although SSA has estimated that funds are sufficient to conduct all required SSI and DI CDRs in fiscal years 1996 through 2002, in our companion reports we question whether CDR funds are sufficient to meet those CDR goals. 
To the extent that CDR funds are not sufficient to conduct all required SSI and DI reviews, SSI CDRs may be scaled back, since SSA generally considers them to be less cost-effective than DI CDRs. Only SSI CDRs were scaled back when SSA received less CDR funding for fiscal years 1996 through 2002 than it had requested from the Congress. Also, according to SSA, the ability to conduct CDRs is always vulnerable to unexpected increases in initial applications and disability eligibility redeterminations. In fact, both of the 1996 laws require work in these areas that may compete with CDRs for DDS staff. First, SSA currently estimates that, because the Contract With America Advancement Act eliminated drug and alcohol abuse as a basis for receiving disability benefits, benefits will be terminated for some of the 196,000 SSI recipients and DI beneficiaries whose primary impairments were drug abuse and/or alcoholism. SSA expects many of those terminated to reapply on the basis of other impairments. Second, in fiscal years 1997 and 1998, the 1996 amendments to the Social Security Act require SSA to redetermine the disability eligibility of between 300,000 and 400,000 children currently receiving SSI benefits. Although these disability eligibility redeterminations can count toward the required 100,000 SSI CDRs in those years, the law gives them precedence over required CDRs on other children. SSA is currently evaluating the impact of this and other required work on its ability to conduct CDRs. In fiscal year 1996, SSA conducted SSI CDRs on only the portion of recipients it considered cost-effective to review. In general, these included MIEs or MIPs, who make up about one-half of all SSI recipients due or overdue for CDRs. For SSI adult recipients, SSA selected from among those who were classified as MIE or MIP and under the age of 59 in that year. 
SSA’s fiscal year 1996 plan called for conducting about 100,000 full medical CDRs and 107,900 mailer CDRs of adult MIE and MIP SSI recipients. For children, SSA limited its selection to low-birth-weight babies, who totaled about 7,200. According to SSA, it did not select other children in anticipation of the requirement to conduct disability eligibility redeterminations on between 300,000 and 400,000 children receiving SSI. Among 18-year-old SSI recipients, SSA selected 18,000 MIE and MIP recipients, which is about one-third of SSI recipients attaining age 18. For adult and 18-year-old recipients, SSA used different approaches to select recipients for CDRs. To select adult recipients, SSA used formulas developed for DI beneficiaries to estimate the likelihood of benefits being terminated as a result of a CDR. As is currently done under the DI program, SSA then selected a portion of those with the highest and lowest estimated likelihood of benefit termination for full medical and mailer CDRs, respectively. SSA did not select recipients in the middle range, which contains the majority of recipients included in the estimation process, because in this range, the formulas are less helpful in identifying recipients who are more likely to have their benefits terminated as a result of a CDR and, therefore, to warrant a full medical CDR. SSA has only developed formulas for use in selecting adult recipients. Among 18-year-old recipients, SSA selected a judgmental sample of those it believed would be most likely to have their benefits terminated as a result of a CDR based on some of the characteristics used to select adult cases for CDRs, such as impairment type and length of time receiving SSI. SSA tested the validity of using the DI formulas to estimate the likelihood of benefit termination for adult SSI recipients in its 1995 study of SSI CDRs and its analysis of SSI and DI population characteristics. 
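The selection approach described above, in which recipients with the lowest estimated likelihood of benefit termination receive low-cost mailer CDRs, those with the highest receive full medical CDRs, and the middle range is not selected, can be sketched in a few lines. This is an illustrative reconstruction only: the scoring function, its weights, and the cutoffs below are hypothetical stand-ins, not SSA's actual formulas.

```python
# Illustrative sketch of the CDR routing logic the report describes:
# lowest-scoring recipients get a mailer CDR, highest-scoring get a full
# medical CDR, and the middle range is not selected. All weights and
# cutoffs here are hypothetical, not SSA's actual statistical formulas.

def termination_score(age, years_on_rolls, impairment_weight):
    """Hypothetical stand-in for SSA's formulas, which used characteristics
    such as age, impairment, and length of time on the disability rolls."""
    score = impairment_weight
    score -= 0.01 * age             # older recipients less likely to improve
    score -= 0.02 * years_on_rolls  # long-term recipients less likely to improve
    return max(0.0, min(1.0, score))

def route_cdr(score, low_cutoff=0.15, high_cutoff=0.60):
    """Route a recipient by estimated likelihood of benefit termination."""
    if score >= high_cutoff:
        return "full medical CDR"   # most likely to warrant termination
    if score <= low_cutoff:
        return "mailer CDR"         # low-cost check of self-reported status
    return "not selected"           # middle range: formulas less helpful here

print(route_cdr(termination_score(age=25, years_on_rolls=2, impairment_weight=0.9)))
print(route_cdr(termination_score(age=58, years_on_rolls=20, impairment_weight=0.5)))
```

In SSA's actual process the score would come from statistical formulas fit to the outcomes of previously conducted DI CDRs; the sketch only shows how such a score routes a case to one of the three dispositions.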
The SSI and DI programs are subject to the same eligibility requirements, and SSA’s 1995 study of 5,000 adult SSI CDRs found that the formulas differentiated between cases most and least likely to result in benefit terminations about equally well for both disability populations. Since relatively few SSI recipients have ever undergone a CDR, SSA did not use length of time since the last CDR and the number of previous CDRs, which are variables normally included in the DI formulas, when estimating the likelihood of benefit termination for SSI recipients. In fiscal years 1997 and 1998, required disability eligibility redeterminations on children will count toward the requirement to conduct at least 100,000 SSI CDRs in each year. In addition, starting on the date of enactment of the 1996 amendments, SSA will be required to conduct CDRs (1) annually on low-birth-weight babies and (2) at least once every 3 years on other children under age 18. In fiscal years 1997 and 1998, therefore, CDRs on children could dominate SSI CDRs. SSA plans to continue to develop and modify the formulas and SSI selection process as it learns more about conducting CDRs on this population. SSA’s current plans for broad CDR process improvements include expanding the use of the formulas to children and certain recipients classified as MINEs in order to select individuals for full medical and mailer CDRs from these recipient categories on the basis of their estimated likelihood of benefit termination. SSA also plans to (1) develop a new type of mailer CDR for gathering information on recipients’ medical conditions directly from their physicians and other treating sources and (2) obtain Medicaid data and integrate the data into the statistical formulas to increase the validity of the estimated likelihood of benefit termination. 
These latter improvements would allow SSA to better predict which recipients in the middle range of estimated likelihood of benefit termination are more likely to have their benefits terminated as a result of a CDR and, therefore, to warrant full medical CDRs. (We discuss these plans further in app. III.) Reviewing recipients’ disability status, especially those most likely to improve, is one component of a well-managed program. Few recipients voluntarily report medical improvement and leave the rolls. SSA currently estimates that CDRs will remove only about 5 percent of SSI recipients from the rolls in the long run. However, if the CDR process were not in place, recipients’ continuing disability eligibility would be uncertain and the number of ineligible recipients would likely increase over time. On the basis of SSA’s estimate that 5 percent of SSI recipients would have their benefits terminated as a result of CDRs, we estimate that about 95,000 of the approximately 1.9 million SSI recipients currently due or overdue for CDRs are no longer medically eligible for benefits. In fiscal year 1996 alone, these recipients would have received about $481 million in federal SSI benefits and about $418 million in federal and state Medicaid benefits. SSI CDRs on some categories of recipients appear to be cost-effective. Benefit terminations result in SSI and Medicaid savings at both the federal and state levels (see app. IV for information on savings). SSA calculates cost-effectiveness for various recipient categories by comparing (1) the estimated present value of benefit savings due to benefit terminations resulting from CDRs on a category with (2) the estimated total costs of conducting CDRs for that category. Because SSA has little experience conducting SSI CDRs, SSA cautions that estimates of savings resulting from SSI terminations are somewhat tentative. 
SSA estimates that CDRs on adults SSA has classified as MIE or MIP save about $3 in federal SSI and Medicaid benefits for every $1 spent conducting CDRs on those categories. State savings increase this ratio to $4 saved for every $1 spent. SSI CDRs on low-birth-weight babies are more cost-effective than CDRs on adults—saving about $14 in program benefits for every $1 spent conducting CDRs; however, these children constitute less than 1 percent of the SSI disabled population. SSA estimates that, in general, CDRs on recipients classified as MINE are not cost-effective, and, at best, break even. Increased SSI CDR activity comes at a time when both the Congress and SSA have sought a CDR strategy that is more cost-effective. In the Contract With America Advancement Act, the Congress emphasized maximizing the combined savings from CDRs under the SSI and DI programs. SSA has been working to improve its ability to identify recipients for whom conducting CDRs is cost-effective. Options exist for making SSI CDRs more cost-effective and helping SSA meet the challenge of conducting all required CDRs. In companion reports, we identified two options for improving the CDR process—one that could make CDRs more cost-effective and one that would strengthen return-to-work efforts. In addition to these options, to increase service to the public and more efficiently use resources, SSA is exploring coordinating CDRs with redeterminations of recipients’ financial eligibility. In our companion reports, one option we proposed for improving the CDR process was for SSA to adopt less rigid requirements for scheduling CDRs in order to shift the emphasis from periodic reviews to a system that is more cost-effective. 
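SSA's cost-effectiveness comparison reduces to simple arithmetic: the present value of benefit savings from terminations divided by the total cost of conducting the reviews. The sketch below reproduces the report's 5-percent ineligibility estimate directly; the dollar inputs to the ratio are hypothetical, chosen only to show the shape of the calculation, not SSA's actual figures.

```python
# Arithmetic from the report: about 5 percent of the ~1.9 million recipients
# due or overdue for CDRs are estimated to be no longer medically eligible.
due_or_overdue = 1_900_000
ineligible = round(0.05 * due_or_overdue)  # ~95,000 recipients

# SSA's cost-effectiveness test compares the present value of benefit savings
# from terminations with the total cost of conducting CDRs for a category.
def cost_effectiveness(reviews, termination_rate,
                       pv_savings_per_termination, cost_per_review):
    """Dollars saved in benefits per dollar spent conducting CDRs."""
    savings = reviews * termination_rate * pv_savings_per_termination
    cost = reviews * cost_per_review
    return savings / cost

# Hypothetical adult MIE/MIP category: full medical CDRs at about $1,000
# each; the termination rate and per-case savings here are illustrative only.
ratio = cost_effectiveness(reviews=100_000, termination_rate=0.10,
                           pv_savings_per_termination=30_000,
                           cost_per_review=1_000)
print(f"{ineligible:,} ineligible recipients; ${ratio:.0f} saved per $1 spent")
```

With these illustrative inputs the ratio comes out near the $3-per-$1 figure SSA reports for adult MIE and MIP recipients, but the inputs are not SSA's; the point is only that the ratio rises with the termination rate and per-case savings and falls with the cost per review.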
The current system, in which periodic CDRs are scheduled for all SSI recipients, including those with virtually no potential for medical improvement, is a costly approach to identifying the approximately 5 percent of recipients who are likely to have improved to the point of being found ineligible for benefits. Furthermore, the frequency of CDRs is currently based on medical improvement classifications that do little to identify those most likely to have their benefits terminated as a result of a CDR. We found that the estimated likelihood of benefit termination was very similar for recipients classified as MIE and MIP. In addition, although millions of dollars are spent annually to conduct periodic CDRs, some individuals, especially DI beneficiaries for whom SSA is not conducting CDRs, have received benefits for years without having any contact with SSA regarding their disability or their ability to return to work despite continuing disability. We recommended in these reports a three-pronged approach to increasing the cost-effectiveness of CDRs while maintaining program integrity. Specifically, we recommended that SSA replace the routine scheduling of CDRs with a new process that, if extended by the Congress to all recipients, would (1) be cost-effective by selecting for review individuals with the greatest potential for medical improvement and subsequent benefit termination, (2) correct a weakness in SSA’s current CDR process by reviewing a random sample of all other recipients, and (3) improve program integrity by instituting contact with those not selected for CDRs or financial eligibility redeterminations. As part of this effort, we also recommended that the Commissioner of Social Security develop a legislative package to obtain the authority the agency needs to enact this new process for those portions of the SSI and DI populations that are subject to routinely scheduled CDRs. 
Less rigid requirements regarding the frequency of CDRs are necessary if CDRs are to be conducted primarily on those recipients whose cases are most cost-effective to review—that is, those recipients with the greatest potential for medical improvement. But to maintain program integrity, SSA must keep abreast of the potential for medical improvement of all recipients. Currently, SSA excludes MINE recipients and those aged 59 and older from the selection process altogether. We believe this weakness in the current process could be addressed by conducting CDRs on a random sample of recipients in these or other categories that SSA decides in the future are less cost-effective to review. Instituting periodic contact with recipients who are not chosen for CDRs or financial eligibility redeterminations can help protect program integrity by reminding recipients that their medical conditions are being monitored and serving as a deterrent to abuse by those no longer medically eligible for benefits. More specifically, we believe that a new type of brief mailed contact would, at a minimum, allow SSA to contact a majority of recipients with overdue CDRs in the year it is implemented to remind them of their responsibility to report medical improvements. SSA could also use such contact to gather information to support ongoing or planned initiatives, such as SSA’s return-to-work initiatives or planned improvements to the CDR process. Some SSA officials expressed concern about the cost of this new type of mailed contact with recipients. Although the contact would use some administrative funds that might otherwise have gone to CDRs or other activities, it should result in significant program savings because a considerable number of recipients can, on the basis of SSA’s experience, be expected to refuse repeatedly to provide the requested information and, as a result, have their benefits terminated after a prescribed due-process procedure is followed. 
On the basis of SSA’s experience with CDRs and financial eligibility redeterminations, we assume that about 1 percent of the SSI recipients who were contacted would have their benefits terminated for noncooperation. This benefit termination rate represents a onetime net federal savings of about $230 million from contacting SSI recipients due or overdue for CDRs in fiscal year 1996. (See app. II for a further discussion of estimated savings.) Another option we proposed for improving the use of CDR-related resources was to support return-to-work efforts by better using the CDR process to assess recipients’ work potential, even if there is no medical improvement, and encouraging recipients to obtain vocational rehabilitation (VR) services. With medical advances and new technologies creating more opportunities for disabled people to work, some recipients who do not medically improve may nonetheless be able to engage in substantial gainful activity. In an April 1996 report, we recommended that the Commissioner of Social Security take immediate action to place greater priority on return to work, including designing a more effective means to identify and expand recipients’ work capacities and better implementing existing return-to-work mechanisms. In our companion reports, we recommended that SSA use CDR contacts to identify recipients’ productive capacities, inform them about VR services, and encourage them to work. Currently, through contacts during the CDR process, SSA generally provides little support and assistance to help recipients become self-sufficient. When conducting full medical CDRs, SSA obtains information on VR services received since the initial application or last CDR. However, SSA and DDS staff are neither required nor instructed to assess recipients’ work potential, make recipients aware of rehabilitation services, or encourage them to seek VR services. 
SSA provides limited encouragement through mailer CDRs by asking respondents to indicate whether they are interested in rehabilitation or other services that could help them obtain work. Those respondents who indicate an interest and appear to be reasonable candidates for rehabilitation are to be referred to state VR agencies. However, on average, only about 8 percent of all SSI recipients and DI beneficiaries are referred for VR services. SSA is exploring the potential for better coordinating SSI CDRs with redeterminations of recipients’ financial eligibility. Each year, SSA reviews the income, resources, and living arrangements of about 2 million SSI recipients to ensure that they still meet SSI’s financial eligibility requirements. Because the staff who conduct CDRs and financial eligibility redeterminations either work in the same location or are the same individuals, SSA is hoping to expand coordination to conserve its resources and provide better service to the public. Currently, the only coordination that takes place is on the part of SSA’s field office staff, who are instructed when conducting a CDR to gather financial eligibility redetermination information if the recipient is also due for such a redetermination. In exploring opportunities for coordination, SSA will have to resolve procedural issues that, in the past, served as obstacles to pursuing greater coordination. Over the past 10 years, interest in coordinating the two activities has been thwarted by (1) different schedules for conducting CDRs and financial eligibility redeterminations throughout the year and (2) the lack of compatible databases for SSA field office staff to determine who is scheduled for both CDRs and financial eligibility redeterminations. SSA believes that increased numbers of SSI CDRs and large demands on staff resources will serve as added incentives to overcoming these and other potential obstacles. 
Congressional action in 1994 prompted an increase in SSI CDR activity that should help SSA identify and remove more ineligible recipients from the program. In 1996, the Congress further increased the number of required CDRs and disability eligibility redeterminations and also increased funding that SSA can use to conduct SSI CDRs in the future. However, SSA will likely face challenges from competing priorities for staff resources, including required DI CDRs. Because of increases in the required number of SSI CDRs; the large backlog of required DI CDRs; and the Contract With America Advancement Act, which emphasizes cost-effectiveness, we identified in companion reports two options that could make the CDR process more cost-effective. We recommended a more cost-effective approach for determining who receives CDRs: (1) review recipients with the greatest potential for medical improvement and subsequent benefit termination, (2) correct a weakness in SSA’s CDR process by reviewing a random sample of all other recipients, and (3) ensure program integrity by instituting contact with recipients not selected for CDRs or financial eligibility reviews. However, for this approach to be cost-effective, SSA needs to be able to accurately estimate the likelihood of benefit termination for all recipients, which it can now only do for portions of those recipients classified as MIE or MIP. Furthermore, using CDR contacts to assess recipients’ potential for and promote VR services and coordinating CDRs with financial eligibility redeterminations could increase the efficient use of CDR resources. In commenting on our draft report, SSA generally agreed with our conclusions regarding its progress in conducting SSI CDRs and stated that this report, along with the companion reports, provided valuable information that would be helpful to the agency in achieving its CDR goals in the future. 
The agency agreed that SSA should continually seek ways to maintain stewardship of the disability program in the most cost-effective manner and begin to consider which legislative changes, if any, will produce such a result. The agency also stated that it would (1) test using CDR contacts to assess recipients’ potential for and promote VR services and (2) continue to explore options for coordinating CDRs with financial eligibility redeterminations. We also received technical comments from SSA, which we incorporated where appropriate. SSA’s comments are reprinted in appendix V. As agreed with your office, we will send copies of this report to the Commissioner of Social Security. We will also make copies available to others on request. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Other GAO contacts and staff acknowledgments are listed in appendix VI. This appendix provides additional details concerning our methodology. This information includes the databases and sample used in analyzing characteristics of SSI recipients due or overdue for CDRs in fiscal year 1996. Also included is information on our estimates of (1) savings resulting from benefit terminations after CDRs and (2) onetime savings from our proposed new type of mailed contact. We used Supplemental Security Income Record Description (SSIRD) data as provided and did not evaluate the data’s accuracy. We did our work between September 1995 and August 1996 in accordance with generally accepted government auditing standards. To determine the number of SSI recipients currently due or overdue for CDRs, we used the SSA Office of Disability’s (OD) CDR database. This database contained records on all recipients SSA had determined were due or overdue for a CDR in fiscal year 1996. 
For purposes of our analyses, we took a random sample of 15 percent of those recipients, stratified by (1) whether the recipient was an adult or a child and (2) whether DDSs had classified the recipient as medical improvement expected (MIE), possible (MIP), or not expected (MINE). We eliminated from our sample recipients whose CDR due dates were after fiscal year 1996 or who were over age 65. On the basis of our sample data, we estimated the size of the population with these exclusions. Table I.1 contains initial population and sample sizes and final sizes after adjustments. For the final sample, we obtained information on characteristics from SSA’s SSIRD and OD’s CDR database. From the SSIRD, we obtained information on age, gender, race, impairment, length of time receiving benefits, and length of time overdue for a CDR. Because information obtained from OD did not always differentiate between adult MIE and MIP recipients, we used SSIRD data to classify adults into the two categories. From OD’s CDR database, we obtained information on (1) medical improvement classifications for all children and for adults classified as MINE and (2) estimates of the likelihood of benefit termination for adult MIE and MIP recipients, the only recipient categories for whom likelihood of benefit termination estimates were available. Because we used a sample to estimate characteristics of the universe of recipients due or overdue for CDRs in fiscal year 1996, the reported estimates in tables II.2 through II.4 have sampling errors associated with them. Sampling error is variation that occurs by chance because a sample was used rather than the entire population. The size of the sampling error reflects the precision of the estimate—the smaller the sampling error, the more precise the estimate. In appendix II, the tables in which we report recipients’ characteristics contain sampling errors for reported estimates calculated at the 95-percent confidence level. 
This means that the chances are about 95 out of 100 that the range defined by the estimate, plus or minus the sampling error, contains the true percentage. With few exceptions, the sampling errors were less than 1 percentage point. This means that for most percentages, there is a 95-percent chance that the actual percentage falls within plus or minus 1 of the estimated percentage. We obtained information from a variety of sources to estimate the present value of savings to federal and state governments resulting from benefits being terminated after CDRs. The present value of savings is the current value, in constant 1996 dollars, of benefits that would have been paid over a recipient’s lifetime had benefits not been terminated. Appendix IV contains our estimates of the present value of savings resulting from SSI CDRs. To calculate savings, we (1) obtained estimates of federal and state SSI and Medicaid savings and (2) calculated increased benefits that would be paid by other programs after SSI benefits had been terminated. From SSA, we obtained an estimate of the present value of federal SSI savings and a formula for estimating the present value of state SSI supplement savings. From the Health Care Financing Administration we obtained estimates of the present value of federal and state Medicaid savings. To calculate offsetting costs from benefits paid by other programs, we used assumptions provided by the Congressional Budget Office regarding increased benefits under the Food Stamp and Aid to Families With Dependent Children (AFDC) programs that former SSI recipients would receive once they no longer qualified for SSI benefits. 
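The sampling-error calculation described above can be sketched as follows. This is an illustrative sketch, not SSA's or GAO's actual computation; the stratum size and the proportion being estimated are assumed figures, and the 15-percent sampling fraction comes from the methodology described above.

```python
import math

def sampling_error(p, n, population, z=1.96):
    """95-percent-confidence sampling error for an estimated proportion p
    from a random sample of size n, with a finite population correction
    (relevant here because the sample was 15 percent of the population)."""
    fpc = math.sqrt((population - n) / (population - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

# Illustrative only: a 15-percent sample of an assumed 200,000-recipient stratum.
population = 200_000
n = int(0.15 * population)          # 30,000 sampled records
error = sampling_error(p=0.5, n=n, population=population)
print(round(100 * error, 2))        # sampling error in percentage points
```

With a sample this large, the error stays well under 1 percentage point, consistent with the report's statement that most sampling errors were less than 1 percentage point.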
In calculating increases in Food Stamp program benefits, we assumed that (1) about 50 percent of SSI recipients terminated as a result of a CDR would be receiving food stamps and (2) without SSI benefits, which count as income when determining Food Stamp benefit levels, Food Stamp benefits would increase by about one-third of recipients’ former SSI benefit levels. In calculating offsetting AFDC costs, we assumed that about 50 percent of children who were terminated from the SSI program would be eligible for AFDC and that, on average, families’ AFDC benefits would increase by $70 per month, the marginal per-child AFDC cost. The new type of brief mailed contact proposed in the companion reports would result in program savings because we expect a considerable number of recipients to repeatedly refuse to provide requested information and, as a result, have their benefits terminated. As a condition of receiving benefits, recipients are required to respond to reasonable requests for information. When recipients do not respond, SSA first attempts to contact recipients to determine their reasons for nonresponse. If a recipient refuses to cooperate, SSA then follows procedures to ensure due process in terminating benefits. In calculating savings, we estimated the (1) number of recipients who would be contacted and the percentage who would fail to cooperate, (2) savings per termination, and (3) cost per contact. Table I.2 presents these estimates and summarizes assumptions used in making the estimations. As the table shows, of the approximately 1.9 million recipients who are currently due or overdue for a CDR, we propose that SSA contact the approximately 1,121,000 recipients who we estimate would not be scheduled for either a CDR or a financial eligibility redetermination in that year. We estimated that the number of recipients scheduled for a CDR would be about 236,000 recipients, the number planned for fiscal year 1996. 
According to SSA, about one-third of SSI recipients receive financial eligibility redeterminations annually, and we estimated that about 552,200 disabled SSI recipients would be scheduled for such redeterminations in fiscal year 1996. [Table I.2 (estimated number of recipients terminated as a result of the mailed contact) presents: recipients due or overdue for a CDR in fiscal year 1996; recipients not receiving a CDR or financial eligibility redetermination who would receive the mailed contact; recipients receiving the mailed contact who would fail to cooperate (at a noncooperation rate of 0.01); the Food Stamp benefits increase that would offset savings; net savings per beneficiary terminated; estimated total savings to the federal government; and the total cost for the initial mailed contact (at $25 per contact).] On the basis of SSA's experience with mailer CDRs and financial eligibility redeterminations, we estimated that about 1 percent of the recipients who received the mailed contact would have their benefits terminated for continual noncooperation. We estimated that the mailed contact would be responsible for only the first 5 years of savings resulting from terminating SSI recipients' benefits because of failure to cooperate. We used 5 years for our period of savings because, given SSA's system for scheduling financial eligibility redeterminations, all SSI recipients would have been contacted at least once within 5 years of the mailed contact. To estimate savings, we used SSA and Health Care Financing Administration estimates of the present value of SSI and Medicaid savings, respectively, that would be realized each year after benefits were terminated as a result of a CDR (see app. IV). We assigned a cost of $25 for the initial mailed contact, the low end of SSA's estimate for the cost of the current mailer CDR. Because this figure includes some administrative and developmental costs and thus overestimates the cost of the scannable mailed contact, it makes our savings estimate conservative. 
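The savings arithmetic described above can be sketched as follows. The number of contacts, the 1-percent noncooperation rate, and the $25 cost per contact come from the estimates in the text; the five-year savings per termination is a hypothetical placeholder, since the report derives that figure from SSA and Health Care Financing Administration present-value estimates not reproduced here.

```python
contacts = 1_121_000            # recipients not due for a CDR or redetermination
noncooperation_rate = 0.01      # based on SSA's mailer CDR experience
cost_per_contact = 25           # low end of SSA's mailer CDR cost estimate

terminations = contacts * noncooperation_rate          # ~11,210 recipients
contact_cost = contacts * cost_per_contact             # ~$28.0 million

# Hypothetical per-termination figure covering only the first 5 years of
# SSI and Medicaid savings (the report limits savings to a 5-year period).
savings_per_termination = 15_000

net_savings = terminations * savings_per_termination - contact_cost
print(f"${net_savings / 1e6:.1f} million")   # → $140.1 million
```

Under these assumptions, even a 1-percent termination rate more than covers the mailing cost, which is the logic behind the proposed contact.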
[The appendix II tables report, for adult and child recipients respectively, the largest sampling error in each column at the 95-percent confidence level and the recipients' characteristics: average age (mean and median); impairment category (including endocrine, nutritional, and metabolic diseases; disorders of blood and blood-forming organs; mental disorders, excluding mental retardation; and skin and subcutaneous tissue disorders); estimated likelihood of benefit termination (mean and median); number of years receiving benefits (mean and median); and years overdue for a CDR, including those due over 10 years ago (mean and median). The child table includes 18-year-olds; SSA does not estimate the likelihood of benefit termination for children. Two additional tables report the largest sampling error in each row at the 95-percent confidence level.] Data in these tables include recipients 1 year older than the current cutoff SSA uses when selecting recipients for CDRs because SSA provided estimates of the likelihood of benefit termination for MIEs and MIPs aged 59 and under. SSA currently limits its CDR selection to recipients under 59. Furthermore, SSA does not estimate the likelihood of benefit termination for children or MINEs. SSA plans to expand and enhance its procedures for selecting SSI recipients and DI beneficiaries for CDRs and conducting the reviews. 
More specifically, SSA plans to (1) expand the use of formulas for estimating the likelihood of benefit termination to children and certain recipients classified as MINE and (2) obtain medical treatment information about recipients and integrate the data into the process for selecting recipients for CDRs. SSA plans to expand the use of statistical formulas for estimating the likelihood of benefit termination to children and a portion of both the SSI recipients and DI beneficiaries classified as MINE. To develop the formulas to estimate the likelihood of benefit terminations for child SSI recipients, SSA plans to conduct reviews of children by selecting cases from across the range of impairments. According to SSA, this process expansion is not expected to begin until about fiscal year 1998 because of new legislation eliminating the individualized functional assessment (IFA) component of disability eligibility criteria for child recipients. An SSA official explained that the agency is close to validating the use of the formulas for MINEs and plans to begin conducting CDRs on this group in fiscal year 1997. Included in this process expansion will be MINEs who are classified as such because they are older rather than because of their impairment. SSA believes that these age-classified MINEs may be cost-effective to review because some of them may have improved medically to the extent that they are no longer disabled. At this time, SSA does not have any plans to include the MINEs who are classified as such because they are believed to have permanent disabilities. SSA also plans to pursue two approaches for the collection of medical treatment information about recipients. First, SSA has plans to develop a new type of low-cost mailer CDR to be sent to recipients’ physicians and other treating sources. 
At this time, SSA only selects individuals for CDRs from among the groups with the highest and lowest estimated likelihood of benefit termination for full medical and mailer CDRs, respectively. SSA officials explained that they do not conduct CDRs on SSI recipients or DI beneficiaries with likelihood of benefit termination estimates in the middle range because they believe the formulas do not adequately distinguish between these individuals for purposes of determining who in this group should receive full medical CDRs. According to SSA, if it conducted mailer CDRs on the middle group, this would likely result in more beneficiaries being subsequently referred for full medical CDRs than would be cost-effective. Similarly, if it conducted full medical CDRs on the middle group, it would be using a higher-cost process than SSA believes is necessary for some in this group. SSA believes the new mailer CDR to physicians and other treating sources would provide information about medical conditions and treatments received that would help SSA to determine who in the middle group has a likelihood of benefit termination warranting a full medical CDR. Second, SSA plans to obtain Medicaid data and integrate the data into the statistical formulas to increase the validity of the estimated likelihood of benefit termination. SSA expects that the additional information will also allow it to better identify the appropriateness of a mailer or full medical CDR for recipients with estimates of the likelihood of benefit termination in the middle range. Given that the majority of SSI recipients and DI beneficiaries for whom likelihood of benefit termination is estimated fall into the middle range of estimates, these CDR process enhancements are particularly critical to SSA’s ability to meet its CDR goals over the next 7 years. 
Our calculations of present value savings are based on estimates provided by SSA and the Health Care Financing Administration and assumptions provided by the Congressional Budget Office on offsetting Food Stamp and AFDC costs. For SSI CDRs, savings result from SSI and Medicaid benefits being terminated for recipients who no longer meet the program's definition of disability. Table IV.1 contains the present value of federal savings per CDR termination. As the table indicates, the present value of savings to the federal government per CDR termination is $42,000 for adults and $33,000 for children. This means, for example, that for every adult for whom a CDR results in a termination, the federal government could expect to save, on average, $42,000 (in constant 1996 dollars) that it would have paid over the recipient's lifetime had benefits not been terminated. The savings for children are less than those for adults, primarily because, even after being terminated from the SSI program, a majority of children would continue to qualify for Medicaid benefits on the basis of their families' economic status or their participation in AFDC. As the table shows, these estimates also take into account offsetting costs resulting from increases in AFDC and Food Stamp benefits that some former SSI recipients would receive once they no longer qualified for SSI. Because many states pay SSI state supplements and all states share in Medicaid, SSI CDRs also result in states realizing SSI and Medicaid savings. Because benefit levels vary across states, the present value of savings also varies. Table IV.2 shows, for the five states with the largest total state supplement payments in fiscal year 1994, a range of over $10,000 in the present value of SSI supplement savings. States' savings also vary because the sizes of their recipient populations differ. 
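The present-value concept underlying these estimates can be illustrated with a simple sketch; the benefit level, time horizon, and discount rate below are assumptions chosen for illustration only, not SSA's or the Health Care Financing Administration's actuarial inputs.

```python
def present_value(annual_benefit, years, discount_rate):
    """Present value, in constant dollars, of an annual benefit stream
    that would have been paid had benefits not been terminated."""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Illustrative only: $5,000 per year over 15 years at a 3-percent real rate.
pv = present_value(5_000, 15, 0.03)
print(round(pv))
```

Discounting is why lifetime savings per termination are finite figures (such as the $42,000 per adult above) rather than a simple benefit-times-years product.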
Table IV.2 shows the wide variation in potential state savings based on (1) the number of disabled individuals currently receiving state SSI supplements and due and overdue for CDRs and (2) SSA's current estimate of a 5-percent benefit termination rate. The present value of state Medicaid savings would average about $11,100 for adults and $4,800 for children. [Table IV.2 also reports each state's fiscal year 1994 total supplements paid (in millions).] In addition to those named above, the following persons made important contributions to this report: Chris C. Crissman, Assistant Director; Kerry Gail Dunn, Senior Evaluator; Julian M. Fogle, Senior Evaluator; Ann Lee, Senior Evaluator; Elizabeth A. Olivarez, Evaluator; Susan K. Riggio, Evaluator; and Ann T. Walker, Evaluator (Database Manager). Social Security Disability: Alternatives Would Boost Cost-Effectiveness of Continuing Disability Reviews (GAO/HEHS-97-2, Oct. 16, 1996). Social Security Disability: Improvements Needed to Continuing Disability Review Process (GAO/HEHS-97-1, Oct. 16, 1996). Supplemental Security Income: Some Recipients Transfer Valuable Resources to Qualify for Benefits (GAO/HEHS-96-79, Apr. 30, 1996). SSA Disability: Program Redesign Necessary to Encourage Return to Work (GAO/HEHS-96-62, Apr. 24, 1996). PASS Program: SSA Work Incentives for Disabled Beneficiaries Poorly Managed (GAO/HEHS-96-51, Feb. 28, 1996). SSA Rehabilitation Programs (GAO/HEHS-95-253R, Sept. 7, 1995). Supplemental Security Income: Disability Program Vulnerable to Fraud When Middlemen Are Used (GAO/HEHS-95-116, Aug. 31, 1995). Social Security Disability: Management Action and Program Redesign Needed to Address Long-Standing Problems (GAO/HEHS-95-233, Aug. 3, 1995). Supplemental Security Income: Growth and Changes in Recipient Population Call for Reexamining Program (GAO/HEHS-95-137, July 7, 1995). Disability Insurance: Broader Management Focus Needed to Better Control Caseload (GAO/T-HEHS-95-164, May 23, 1995). 
Supplemental Security Income: Recipient Population Has Changed as Caseloads Have Burgeoned (GAO/T-HEHS-95-120, Mar. 27, 1995). Social Security: Federal Disability Programs Face Major Issues (GAO/T-HEHS-95-97, Mar. 2, 1995). Social Security: Rapid Rise in Children on SSI Disability Rolls Follows New Regulations (GAO/HEHS-94-225, Sept. 9, 1994). Social Security: New Continuing Disability Review Process Could Be Enhanced (GAO/HEHS-94-118, June 27, 1994).
Pursuant to a congressional request, GAO reviewed the Social Security Administration's (SSA) strategy for conducting legally required continuing disability reviews (CDR) on Supplemental Security Income (SSI) recipients, focusing on: (1) SSA plans to conduct legally required SSI CDR in fiscal years 1996 through 1998; (2) the resources committed to meeting this requirement; (3) how SSA selects recipients for SSI CDR; (4) the potential benefits of conducting CDR on the SSI population; and (5) potential options for improving the CDR process. GAO found that: (1) SSA planned to conduct required CDR on about 118,000 SSI recipients in fiscal year (FY) 1996; (2) SSA also planned to conduct an additional 100,000 CDR on SSI recipients that were not legally required; (3) as of June 1996, SSA had completed about 60 percent of the required CDR; (4) other competing priorities may make it difficult for SSA to conduct all required SSI CDR after FY 1996; (5) in FY 1996, SSA limited its selection for CDR to those recipients for whom medical improvement is either expected or possible; (6) SSA estimates that conducting CDR will result in removing only about 5 percent of SSI recipients from the rolls, but without CDR, the number of ineligible recipients will likely increase over time; (7) SSA estimates that conducting CDR on SSI adult recipients for whom medical improvement is expected or possible results in about $3 in federal program savings for every $1 spent conducting CDR; and (8) SSA needs to establish less rigid requirements for determining who should be scheduled for CDR, ensure that contact is made with all SSI recipients, and develop a legislative proposal to obtain the authority needed to extend this new process to all recipients.
The Department of Energy’s (DOE) PMAs sell power primarily to preference customers—cooperatives and public bodies, such as municipal utilities, irrigation districts, and military installations—that are located in the PMAs’ service territories. Many of these preference customers then resell the power to industrial, commercial, and/or residential end-users. To estimate any potential rate changes if market rates are charged (after a divestiture of the PMAs or otherwise), we calculated how much, in cents per kilowatthour (kWh), each preference customer paid, on average, for power purchased from (1) all sources, including the PMAs, and (2) sources other than the PMAs, including the wholesale market, in 1995. Then, we took the difference between these two, considering the latter to be the market rate. To map the areas that preference customers serve, we identified the counties and towns that the customers reported serving in Electrical World: Directory of Electric Power Producers. It is important to note that our analysis included only those customers that purchased power directly from the PMAs and that our analysis shows higher rate increases than would be likely if market rates decline. To develop information on the characteristics of the areas that preference customers reported serving, for each county and town we obtained data on 1989 household incomes and the extent to which the population is urban or rural, as reported in the 1990 census, the latest data available. Appendix III provides additional details on our scope and methodology. Overall, about 68 percent of Southeastern’s, Southwestern’s, and Western’s preference customers may experience relatively small rate increases. 
In our analysis, the increases that we considered relatively small (0.5 cent per kWh or less), moderate (from greater than 0.5 cent up to 1.5 cents), and relatively large (greater than 1.5 cents) represent amounts above the average rates that preference customers paid for power from all sources (both PMAs and others) in 1995. These base rates typically ranged in 1995 from 3.5 to 6.0 cents for Southeastern’s preference customers, from 1.5 to 3.5 cents for Southwestern’s preference customers, and from 1 to 4 cents for Western’s preference customers. The increases represent the difference between these average rates and what preference customers would have to pay if they purchased all of their power at market rates. For example, if a preference customer of Southeastern paid a combined 3.5 cents per kWh for power from the PMA and other sources in 1995 and paid 3.9 cents for power from non-PMA sources, we assumed the customer’s rates would rise from 3.5 to 3.9 cents—a relatively small increase of 0.4 cent—if it had to pay market rates for all its power. Our calculation of the increase in a residential end-user’s monthly electricity bill represents the amount of the preference customer’s increase times the average monthly consumption of electricity by residential end-users in the preference customer’s state. As shown in figure 1, 98 percent of Southeastern’s preference customers may see relatively small rate increases of 0.5 cent per kWh or less if they pay market rates for PMA power. For Western and Southwestern, about half of their preference customers would see relatively small rate increases and about 25 to 30 percent of the customers for each PMA would see relatively large increases. Figure 2 breaks out these potential increases by state and, of the total amount of power consumed in each state, indicates the percentage provided by the PMA. 
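The rate-change estimate and the small/moderate/large categories described above can be sketched as follows, using the Southeastern example from the text (3.5 cents per kWh combined versus 3.9 cents from non-PMA sources); the function names are illustrative, not from the report.

```python
def estimated_increase(avg_rate_all_sources, avg_rate_non_pma):
    """Estimated rate increase (cents/kWh) if a preference customer had to
    buy all of its power at the market rate, proxied by its non-PMA rate."""
    return avg_rate_non_pma - avg_rate_all_sources

def classify(increase_cents):
    """Categories used in the analysis above."""
    if increase_cents <= 0.5:
        return "relatively small"
    if increase_cents <= 1.5:
        return "moderate"
    return "relatively large"

# Southeastern example from the text: 3.5 cents combined vs. 3.9 cents non-PMA.
inc = estimated_increase(3.5, 3.9)
print(round(inc, 1), classify(inc))   # 0.4 relatively small
```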
[Figure 1 legend: estimated increase in preference customers' rates (cents/kWh): relatively small, 0.5 cent (or one-half cent) or less; moderate, greater than 0.5 cent to 1.5 cents; relatively large, greater than 1.5 cents (in 85 percent of the cases, this increase is between 1.5 cents and 3 cents).] As shown in figure 2, in virtually every state Southeastern serves, at least 85 percent of the preference customers may see relatively small rate increases. Slightly more than half of the PMA's preference customers may see increases of less than 0.1 cent per kWh. If these preference customers pass their rate increases through proportionally to the residential end-users they serve, the residential end-users would see their average monthly electricity bill increase by $1 or less. In most of Southeastern's states, the maximum increase that a preference customer would pass on to its residential end-users ranges between $1 and $8 per month, depending on the state. The only relatively large rate increase for a preference customer served by Southeastern may be in Illinois, which has one preference customer. In states served by Western, preference customers may see a variety of rate increases. For example, as shown in figure 2, over 75 percent of the preference customers in California, Colorado, and Nebraska may experience relatively small rate increases. In these three states, residential end-users served by most preference customers would see less than $2.50 increases in their average monthly electricity bills. However, a significant number of Western's preference customers may see moderate increases. As shown in figure 2, at least 25 percent of the preference customers in many Western states, such as Iowa, Minnesota, and South Dakota, may experience average rate increases from greater than 0.5 cent up to 1.5 cents per kWh. 
If these preference customers proportionally pass these costs along to their residential end-users, the end-users would pay from $3 to $14 more in their average monthly electric bills, depending on the state. Finally, in several states served by Western, a number of preference customers may see average rate increases that exceed 1.5 cents per kWh. For example, 60 percent of the preference customers in South Dakota and 33 percent of the customers in Utah may see rate increases exceeding 1.5 cents per kWh. In turn, residential end-users who receive power from these utilities would see larger increases in their electricity bills. For example, in states with larger rate increases, if a preference customer's rate increases by 1.5 cents per kWh, residential end-users would pay about $10 to $15 more per month for electricity, depending on the state. Preference customers who may see these larger increases typically paid relatively low rates, ranging from 1.5 to 3.0 cents per kWh, and bought most or all of their power from Western. Taken together, Southwestern's preference customers may experience higher rate increases than Southeastern's customers but lower increases than Western's. As shown in figure 2, in most of Southwestern's states, a majority of the preference customers may see relatively small increases of 0.5 cent per kWh or less on base rates that typically ranged from 1.5 to 3.5 cents. In turn, residential end-users that receive power from most of Southwestern's preference customers would see their electricity bills increase by less than $3 a month. However, in Oklahoma, 79 percent of the preference customers may see larger increases that exceed 1.5 cents per kWh. Most of these customers paid less than 1.5 cents per kWh—less than half the 1995 national average market rate—and purchased all of their power from Southwestern. Residential end-users of these preference customers typically would pay about $22 more in their average monthly electricity bills. 
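The bill-increase arithmetic used above (the preference customer's rate increase times the average monthly residential consumption in its state) can be sketched as follows; the 1,000 kWh monthly consumption figure is an assumption for illustration, not a figure from the report.

```python
def monthly_bill_increase(increase_cents_per_kwh, avg_monthly_kwh):
    """Dollar increase in a residential end-user's monthly electricity bill,
    assuming the preference customer passes its rate increase through
    proportionally to end-users."""
    return increase_cents_per_kwh / 100 * avg_monthly_kwh

# Illustrative: a 1.5 cent/kWh increase at an assumed 1,000 kWh per month,
# consistent with the $10-$15 range cited for the larger-increase states.
print(round(monthly_bill_increase(1.5, 1000), 2))   # 15.0
```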
As we discussed in our March 1998 report, it is important to remember that in many cases where rate increases may be relatively large (greater than 1.5 cents per kWh), the preference customers paid about 1 to 1.5 cents per kWh in 1995 for PMA power. These rates on average were about 2.5 to 3 cents per kWh lower than what utilities paid in the private market nationwide. Conversely, in many cases where rate increases may be relatively small, that is, 0.5 cent per kWh or less, preference customers generally paid rates close to the market rates. If market rates are charged (after a PMA divestiture or otherwise), preference customers would pay the same rates as utilities that lack access to PMA power. As we discussed in our March 1998 report, if the Congress chose to change the status quo regarding rates, it could mitigate the size of potential rate increases by using several approaches, such as establishing rate caps. A preference customer’s rate increase also depends on what portion of its total power comes from the PMA. Generally, the less a preference customer relies on a PMA’s power, the less the rate increase may be. Preference customers in states served by Southeastern may experience small increases because they purchase a small portion of their power from the PMA. In 1995, 99 percent of Southeastern’s preference customers purchased less than 25 percent of their power from the PMA. Overall, most preference customers purchase a majority of their power from sources other than the PMAs and, as a result, currently pay market rates for that power. In contrast, preference customers that purchase a large portion of their power from a PMA are more likely to experience larger increases. For example, among the 60 percent of the preference customers in South Dakota that may experience rate increases of at least 1.5 cents per kWh if market rates are charged, most bought over 70 percent of their power from the PMA in 1995. 
Overall, PMA power represented about 23 percent of South Dakota's total electricity consumption in 1995. Usually, preference customers that rely on a PMA for most or all of their power are smaller utilities that deliver 100,000 megawatthours or less to their end-users annually. It is also important to note that because our estimates of potential rate increases are based on market rates in 1995, our methodology is conservative. If prices for wholesale power decline in the future, as many industry analysts and DOE officials believe they will, customers' rate increases generally will be smaller than our estimates. Finally, the likely rate increases we discuss—from relatively small to relatively large, if the preference customers pay market rates for PMA power—would usually affect a relatively small portion of the power consumed in each state, as shown by the shading or patterns in the states in figure 2. We found that the portion of the total power consumed in a state that was provided by the three PMAs was generally relatively small. For example, the PMAs provided 5 percent or less of the total power consumption in 22 of the 29 states in our analysis. The average for the 29 states was 2 percent. As shown in figure 3, preference customers that are directly served by Southeastern, Southwestern, and Western reported serving varying portions of 29 states across the nation. We did not include customers that receive PMA power indirectly, that is, through the PMAs' direct preference customers—generation and transmission cooperatives and municipal joint action agencies, which buy PMA power and then resell it to other publicly owned utilities—because, with very few exceptions, the PMAs' 1995 annual reports do not list them. Our map therefore includes only the counties and towns that preference customers serve directly. 
The annual reports for Southeastern and Western do not include these utilities as customers, while Southwestern's does in two cases. As the figure shows, in most states, the areas the preference customers reported serving directly cover less than half the state. For example, very small portions of Arkansas, Louisiana, and Missouri are served by preference customers. In some cases, small areas are served in part because only a few preference customers directly serve the state. For example, Illinois and Wisconsin have only one preference customer to serve residential end-users, while Kentucky and Montana have three. Other states have more preference customers, but in some cases they serve counties and towns that are concentrated in portions of the state. For example, South Carolina's 26 preference customers reported serving areas almost exclusively in the northwestern corner of the state. The seven preference customers in Wyoming reported serving four counties, clustered in the southwestern and south central portions of the state, and two towns, but no other areas of the state. Similarly, the eight preference customers in Texas reported serving 17 counties in the south central part of the state and five towns in eastern Texas but did not serve the rest of the state. Additionally, as depicted in figure 3, large portions of several of Southeastern's and Western's states receive service directly from preference customers. For example, preference customers reported serving almost every county in Georgia and most of the counties in North Carolina and Virginia. In these states, many counties received service from two or more preference customers. In Nebraska, preference customers reported serving over 130 towns located around the state. Finally, regardless of their geographic coverage, we found, as shown in figure 2, that the preference customers generally provided a relatively small portion of the total power consumed in each state. 
Although preference customers serve areas with incomes lower than the national average, most of the households they serve have incomes that are similar to those in the entire state. As shown in figure 4, in 21 of 28 states, households in the counties and towns preference customers report serving had median incomes within 15 percent of the statewide median income, as reported in the 1990 census. In some states, the median incomes of the end-users and statewide are close because almost every county in the state receives power from preference customers. For example, in Georgia, preference customers reported serving 151 of the state’s 159 counties. Furthermore, the distribution of income for households receiving PMA power generally mirrors the distribution of household income in the entire state. For example, in Alabama, about 35 percent of the households in preference customers’ service areas had annual incomes of less than $15,000 in 1989, while 24 percent had incomes exceeding $40,000. Similarly, in the entire state, 33 percent of the households had annual incomes under $15,000, and 26 percent had annual incomes exceeding $40,000. This compares with the 1995 national average household income of $35,004. However, in a few cases, preference customers serve areas that are significantly poorer than the remainder of the state. For example, in 1989 in Texas, the median income of the households preference customers served was almost 40 percent lower than median household income of $27,016 in the entire state. In preference customers’ service areas, over 45 percent of the households had annual incomes smaller than $15,000, compared with 28 percent of the households in the entire state. Similarly, in Montana, preference customers served households with median incomes 20 percent, or about $4,500, below the state median of $22,988. 
In commenting on our draft report, DOE officials noted that the PMAs also provide a valuable service to Indian reservations, which are among the poorest areas of the nation. Our analysis shows, for example, that about 53 percent of the households in Shiprock, New Mexico, which is located on the Navajo Indian Reservation, had incomes of less than $15,000, compared with about 31 percent statewide. In contrast, preference customers in some states send PMA power to a number of counties and towns where a large portion of the households have relatively high incomes. For example, in California, preference customers reported serving areas in the southern part of the state such as Orange County, where about 45 percent of the households had incomes in 1989 exceeding $50,000—at least 40 percent higher than the state median income of $35,798. In the northern part of the state, preference customers reported serving Palo Alto, where 55 percent of the households had incomes exceeding $50,000. Throughout California, about 45 percent of the households in areas preference customers reported serving had annual incomes exceeding $40,000. Similarly, in Colorado, about 33 percent of the households in Aspen had 1989 incomes that exceeded $50,000, at least 65 percent greater than the state median income of $30,140. We estimate that, overall, about 53 percent of the towns that preference customers reported serving are urban. States where most of the towns are urban include California, Georgia, and North Carolina. In addition, about 47 percent of the towns preference customers reported serving are rural. States where large numbers of these towns are rural include Florida, Iowa, and Nebraska. Less than 1 percent of the towns are “mixed”—that is, they have populations that are neither predominantly urban nor predominantly rural. Most counties that preference customers reported serving, or about 52 percent, are mixed. Alabama and South Carolina, for example, have high percentages of mixed counties.
About 39 percent of the counties are rural, and about 9 percent are urban. North Dakota and South Dakota have large proportions of rural counties. Finally, although preference customers sell PMA power in many less densely populated areas, most of the households they serve are located in a small number of more urbanized places. This suggests that most PMA power is consumed by customers in more highly urbanized places. For example, although preference customers reported serving 150 counties in Georgia, 11 of those counties contain over half of the households in the areas preference customers reported serving. We provided copies of a draft of this report to DOE for its review and comment. We received comments from DOE’s Power Marketing Liaison Office, which is responsible for Southeastern, Southwestern, and Western, and have included its comments and our responses as appendix V. DOE commented that our data sources were flawed because we relied on incomplete and/or inaccurate data and that it was impossible to have confidence in conclusions drawn from analysis of the data. To address each of our objectives, our analyses used data reported by the PMAs and their preference customers—data that we believe to be the best available. DOE recognizes that obtaining complete data on the electric utility industry is not easy. We believe that we used the data appropriately to satisfy the objectives of our review and that our methodology is sound. However, we agree that the data we used have limitations, and we have pointed out the limitations in our report. Many of the concerns that DOE expressed do not deal with the data we used but with the definition of a preference customer of a PMA. DOE stated that our analyses omitted generation and transmission cooperatives, their members, and municipal joint action agencies. For our analysis, we included only the preference customers who purchased power directly from the PMAs—as listed in the PMAs’ 1995 annual reports.
We did not include generation and transmission cooperatives, their members, or municipal joint action agencies because the annual reports of two of the three PMAs—Southeastern and Western—do not include them in their lists of customers. Because Southeastern and Western together represent over 90 percent of the total preference customers of the three PMAs, we used their approach. If our rate analysis had included the utilities that indirectly buy PMA power through preference customers, the rate increases for these utilities would have been, at most, the same as the increases for the preference customers. For our analysis of urban/rural populations, we used the counties and towns that the preference customers that were included in our rate analysis reported to Electrical World: Directory of Electric Power Producers. We acknowledge that the data in Electrical World may not match the actual service territories. However, we used these data because they were reported by the preference customers and were the best available. DOE also stated that using 1995 data does not reflect today’s market situation. As the electricity market continues to evolve, many industry experts believe that market rates for wholesale power have declined since 1995 and will fall farther. If market rates fall more than the PMAs’ rates, our estimates of rate increases will prove to be overstated. We have seen no evidence that the PMAs’ overall rates have fallen more than rates in the wholesale market. DOE also commented that more balance was needed in our report because the report goes beyond reporting data and does not present all opposing points of view. We believe that our report is balanced and that, throughout the report, we present a neutral description of our objectives and findings. 
Nevertheless, we have added additional detail to our report, such as including all options for the PMAs’ future role in the changing electricity market and noting that a PMA provides power to Native American households with low incomes. We met with officials from the American Public Power Association and National Rural Electric Cooperative Association, which are national representatives of the PMAs’ preference customers, and discussed the methodology we used to perform our analysis and the results we obtained. On our rate analysis, the officials commented that certain preference customers may see larger rate increases than what we estimated because, to replace the power they buy from the PMAs, they would pay more than what they paid for non-PMA power in 1995. The officials also commented that, although we classified many of the rate increases as relatively small, these increases could nonetheless have significant economic impacts on preference customers or their end-users. However, these officials said that they could not provide more detailed comments until they and the members of their organizations had an opportunity to review the final report and its appendixes. We also met with representatives of the Edison Electric Institute and discussed the methodology we used to perform our analysis and the results we obtained. They commented that our analysis was credible, although they suspected that it could have overstated the rate increases that may occur because competition is increasing and market rates for electricity have been declining. They believe that the impact of the preference customers’ paying market prices for power would be quite modest. They also commented that, if the wholesale rate impacts were translated to the prices that the ultimate consumer would see, the impacts would be even less. We conducted our review from May through November 1998 in accordance with generally accepted government auditing standards. 
As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to appropriate House and Senate committees and subcommittees; interested Members of the Congress; the Administrators of Southeastern, Southwestern, and Western; and other interested parties. We will also make copies available to others upon request. If you have any questions or need additional information, please contact me at (202) 512-3841. Major contributors to this report are listed in appendix VI. The Tennessee Valley Authority (TVA) sells power to 159 municipal and cooperative distributors and to a number of directly served large industrial customers and federal agencies. As shown in figure I.1, TVA sells power to customers located in Tennessee and parts of six other states in the Southeast. According to the Southeastern Power Administration (Southeastern), TVA purchased nearly 1.9 billion kilowatthours (kWh) of electricity from the PMA in fiscal year 1995 and over 2.9 billion kWh of electricity in fiscal year 1996 for resale to TVA’s municipal and cooperative distributors. However, as described in appendix III, we did not include Southeastern’s sales to TVA in our analysis. As figure II.1 shows, according to the Department of Energy (DOE), its power marketing administrations (PMA) serve preference customers located in all or parts of 34 states. (The four PMAs shown in figure II.1 are the Bonneville Power Administration, the Southeastern Power Administration, the Southwestern Power Administration, and the Western Area Power Administration; both Western and Southwestern market power in Kansas.) Since the New Deal, the federal government has established about 130 water projects that—in addition to promoting agriculture, flood control, navigation, and other activities—produce electric power. To sell this power to large portions of rural America, the federal government created five PMAs and TVA.
Now that nearly all of America has electricity, some believe the PMAs have completed their mission and should be divested. Others suggest that the PMAs be required to charge market rates for power. However, since PMAs have historically served rural areas, concerns have been raised that a change in PMAs’ ownership or the means by which they establish rates could adversely affect the rural or poorer areas they serve. Yet few analyses to date have identified the places that ultimately consume PMA power or the characteristics of the households the preference customers serve. To aid in congressional deliberations on the future role of the PMAs, you requested that we provide a state-by-state analysis of the preference customers who buy power from Southeastern, Southwestern, and Western. More specifically, you asked that we identify (1) the extent to which preference customers’ rates may change by state if market rates are charged, (2) the areas the three PMAs’ preference customers report serving, and (3) the incomes in these areas and the extent to which they are urban or rural. To estimate how much preference customers’ rates may change if the customers paid market rates for the power they currently purchase from the PMA, we calculated the average rates that each PMA preference customer paid for wholesale power from (1) all sources, including the PMAs, and (2) sources other than the PMA, including the wholesale market, in 1995. Then, we took the difference between these two rates, considering the latter to be the market rate. Estimating the potential changes required several steps and assumptions. First, to calculate how much preference customers paid for the PMAs’ power, we obtained data from Southeastern’s, Southwestern’s, and Western’s fiscal year 1995 annual reports. Then, to learn how much each preference customer paid for the power it purchased from other sources, we used the “sales for resale” databases compiled by the Energy Information Administration (EIA). 
We found that for about one-third of the three PMAs’ preference customers, EIA’s data lacked the volumes of wholesale power the customers purchased from non-PMA sources, the amounts the customers paid for power, or both. In these cases, we assumed the customer paid a rate equal to the average market rate paid by customers of the same type (for example, municipal utilities and cooperatives) for wholesale power in the customer’s state. We then combined each preference customer’s purchases of PMA power and non-PMA power to estimate how much the customer paid for wholesale power from all sources in 1995. Second, to estimate how each preference customer’s rates would change if it paid market rates for PMA power, we assumed that the customer would pay a rate equal to the average rate it paid for wholesale power from sources other than the PMA(s) in 1995. We used this assumption because it is likely that in the period immediately after a divestiture, the new owners of the PMAs’ assets would charge the prevailing market rates for wholesale power in the area. We also took this approach because we were unable to obtain forecasts of future wholesale rates. Although EIA used its National Energy Modeling System to forecast future electricity rates, according to agency officials, its projections are only for retail rates. Others’ projections of future wholesale rates are proprietary. Finally, we compared the average rate each preference customer paid for all its power in 1995 with the rate the customer paid for the power it purchased from sources other than the PMA. The difference in these two rates represents our estimates in cents per kWh of each customer’s potential increase in average rates if it paid market rates for the power it currently purchases from the PMA. After estimating how much preference customers’ rates may change, we analyzed the rate changes by state. To do this, we had to determine the state in which each preference customer primarily sells power. 
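The two-rate comparison described above can be expressed as a short calculation. The Python sketch below is purely illustrative: the function name is ours, and the customer volumes and rates are invented for the example rather than drawn from our data.

```python
def estimate_rate_increase(pma_kwh, pma_cost, other_kwh, other_cost):
    """Estimated increase in a preference customer's average wholesale
    rate (cents/kWh) if PMA power were repriced at the average rate the
    customer paid for non-PMA power (costs given in cents)."""
    # Average rate paid for all wholesale power (blended PMA + non-PMA).
    blended_rate = (pma_cost + other_cost) / (pma_kwh + other_kwh)
    # Average rate paid for non-PMA power, used as the proxy market rate.
    market_rate = other_cost / other_kwh
    # The difference is the estimated increase in the average rate.
    return market_rate - blended_rate

# Illustrative customer: 20 million kWh of PMA power at 1.5 cents/kWh
# and 80 million kWh of non-PMA power at 3.5 cents/kWh.
increase = estimate_rate_increase(20e6, 20e6 * 1.5, 80e6, 80e6 * 3.5)
print(round(increase, 2))  # 0.4 cent per kWh
```

Because the proxy market rate is the customer’s own 1995 non-PMA rate, the estimate is conservative if wholesale prices fall, as discussed above.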
We obtained state designations for each preference customer from EIA’s Form 861 database of utilities for 1995. However, in cases where the preference customer did not sell retail power, EIA did not provide a state designation. In these instances, we consulted EIA’s PURCH and SALES databases of wholesale electricity transactions in 1995 and assigned the preference customer to the state where it sold most of its wholesale power. In the few cases where the preference customer did not sell a large majority of its power to a single state, we assigned the preference customer to the state where it is listed in the Electrical World: Directory of Electric Power Producers (1997 ed.). Because we assumed that, after a divestiture, each customer would pay a rate for power that equals what the preference customer paid for non-PMA power in 1995, our methodology is conservative. If prices for wholesale power decline in the future, as many industry analysts and DOE officials believe they will, customers’ rate increases would be smaller than our estimates. It is important to note that we estimated potential rate increases for the preference customers that the PMAs listed in their 1995 annual reports. These customers buy power directly from the PMA. We did not include utilities that indirectly buy PMA power through direct preference customers such as generation and transmission cooperatives and municipal joint action agencies. We did not include these indirect customers because, with very few exceptions, the PMAs did not count them as customers in their 1995 annual reports. To estimate how each preference customer’s rate change would affect the rates paid by its residential end-users, we assumed that (1) the preference customer would pass the rate change on proportionally to its end-users and (2) each state’s residential end-users would consume a quantity of electricity equal to the average residential consumption for that state in 1995, according to EIA.
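Under these assumptions, the end-user impact reduces to a one-line calculation: the rate increase times average annual consumption, divided by 12 months. The Python sketch below is illustrative only; the sample rate increase and consumption figures are assumptions, not results from our analysis.

```python
def monthly_bill_increase(rate_increase_cents_per_kwh, annual_kwh):
    """Monthly increase in a residential end-user's bill, in dollars:
    rate increase (cents/kWh) x average annual state consumption (kWh)
    / 12 months, converted from cents to dollars."""
    return rate_increase_cents_per_kwh * annual_kwh / 12 / 100

# Hypothetical example: a 0.5 cent/kWh increase passed through to an
# end-user consuming 10,000 kWh per year.
print(round(monthly_bill_increase(0.5, 10_000), 2))  # 4.17 dollars/month
```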
The monthly increase in a residential end-user’s electricity bill equals the preference customer’s rate increase after the PMA begins charging market rates (in cents per kWh) times the residential end-user’s average annual electricity consumption for the appropriate state (in kWh), divided by 12. To define the preference customers’ service areas, we identified the counties and/or towns in Electrical World: Directory of Electric Power Producers (1997 ed.) that each of the customers in our analysis reported serving. As was true with our rate analysis, we included only the preference customers that purchased power directly from the PMAs—that is, those customers the PMAs listed in their 1995 annual reports. If we had included utilities that indirectly purchase PMA power (through direct preference customers), such as generation and transmission cooperatives and municipal joint action agencies, more counties and towns would be shown on our state service territory maps. According to DOE officials, many additional counties would be shaded in, among other states, Montana, South Carolina, and Wyoming. To examine the incomes in areas that ultimately consume PMA power, we obtained 1990 census data (based on calendar year 1989) from the Census Bureau on household incomes in each county and town the preference customers reported serving in Electrical World. To determine the degree to which preference customers’ service areas were urban or rural, we obtained 1990 census data from the Census Bureau on the urban and rural populations in each county and town the preference customer reported serving. We classified a county or town as urban or rural if at least 80 percent of its population is urban or rural as defined by the Census Bureau. 
If the county’s or town’s population is less than 80 percent urban or rural, we classified it as “mixed.” Because the PMAs historically are believed to have served areas that had lower median incomes and were less urbanized, our use of census data from 1990 yields conservative results, as income and urban populations generally increase over time. We conducted our review from May through November 1998 in accordance with generally accepted government auditing standards. We provided a draft of this report to DOE’s Power Marketing Liaison Office, which represents the views of Southeastern, Southwestern, and Western. Its comments and our responses are included in appendix V. We also met with representatives of the American Public Power Association, the Edison Electric Institute, and the National Rural Electric Cooperative Association—national organizations representing groups concerned with the pricing of power provided by the PMAs, among other things—to discuss our methodology and the results of our review. This appendix provides, for each state that receives PMA power from Southeastern, Southwestern, and/or Western, (1) the counties and towns that the preference customers included in our analysis report serving and a map showing these areas and (2) the estimated rate changes if market rates are charged, by number and percentage of preference customers; household incomes in areas potentially receiving power; the extent to which these areas are urban or rural; and the extent to which the individual state’s total power consumption is provided by the PMA(s) through the preference customers included in our analysis. To define the preference customers’ service areas, we identified the counties and/or towns in Electrical World: Directory of Electric Power Producers (1997 ed.) that each of the customers considered in our analysis reported serving. 
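The 80-percent classification rule described above can be expressed as a small function. This Python sketch is illustrative only; the function name and population figures are ours, not Census Bureau data.

```python
def classify_area(urban_pop, rural_pop, threshold=0.80):
    """Classify a county or town as 'urban', 'rural', or 'mixed' using
    the 80-percent population threshold described in this appendix."""
    total = urban_pop + rural_pop
    if urban_pop / total >= threshold:
        return "urban"
    if rural_pop / total >= threshold:
        return "rural"
    # Less than 80 percent urban and less than 80 percent rural.
    return "mixed"

print(classify_area(9_000, 1_000))  # urban (90 percent urban)
print(classify_area(500, 9_500))    # rural (95 percent rural)
print(classify_area(6_000, 4_000))  # mixed (60/40 split)
```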
As was true with our rate analysis, we included only the preference customers that purchased power directly from the PMAs—that is, those customers that the PMAs listed in their 1995 annual reports. If we had included utilities that indirectly buy PMA power (through direct preference customers), such as generation and transmission cooperatives and municipal joint action agencies, more counties and towns would be shown on our state service territory maps. According to DOE officials, many additional counties would be shaded in, among other states, Montana, South Carolina, and Wyoming. [The state-by-state figures and tables of appendix IV (for example, figure IV.1, Potential Service Areas - Alabama, and figure IV.27, Potential Service Areas - Virginia) present, for each state, the counties and towns identified as potential service areas, the urban/rural classification of preference customers’ reported service areas, and the PMA-provided and state total power consumption (in kWh); they are not reproduced here. According to DOE, no preference power is sold to customers in eastern Minnesota, although some customers have their headquarters offices in that part of the state.] The following are GAO’s comments
on the Power Marketing Liaison Office’s letter dated October 30, 1998. 1. DOE comments that our data sources are flawed because we relied on incomplete and/or inaccurate data and that it is impossible to have confidence in conclusions drawn from the data’s analysis. To address each of our objectives, our analyses used data reported by the PMAs and their preference customers—data that we believe to be the best available. DOE recognizes that obtaining complete data on the electric utility industry is not easy. We believe that we used the data appropriately to satisfy the objectives of our review and that our methodology is sound. However, we agree that the data we used have some limitations, and we have noted them in our report. Many of the concerns that DOE expresses do not deal with the data we used but with the definition of a preference customer of a PMA. For our analysis, we included only the preference customers who purchased power directly from the PMAs—as listed in the PMAs’ 1995 annual reports. We did not include utilities that indirectly purchase PMA power because the 1995 annual reports of two of the three PMAs do not include them in their customer lists. Southwestern’s 1995 annual report states that two of its customers also serve a number of municipal utilities and includes these municipal utilities in the total number of customers. The annual reports of Southeastern and Western, however, list only the customers that buy power directly from those PMAs and do not include the municipal utilities that purchase power from generation and transmission cooperatives or municipal joint action agencies. Because Southeastern and Western together represent over 90 percent of the total preference customers of the three PMAs included in our analysis, we used their approach. However, to address DOE’s concerns, we added statements to the report in several places explaining that our analysis did not include utilities that indirectly purchase PMA power.
For our analysis of urban/rural populations, we used the counties and towns that the preference customers included in our rate analysis reported to Electrical World: Directory of Electric Power Producers. In connection with identifying the areas that preference customers report serving, we acknowledge that the data in Electrical World may not match the actual service territories because utilities report to Electrical World the counties and/or towns they serve without specifying the exact service boundaries within these counties and towns. However, we used these data because they (1) were reported by the preference customers and (2) were the best available. We believe this approach adequately addresses our objective of identifying the areas that the three PMAs’ preference customers report serving and does not affect our primary objective, to estimate potential rate impacts by state. 2. DOE states that we omitted from our analysis generation and transmission cooperatives and municipal joint action agencies that purchase power from the PMAs. We did not exclude them. We estimated a potential rate change for every generation and transmission cooperative and municipal joint action agency that purchased wholesale power from Southeastern, Southwestern, and Western in 1995. We also attempted to include them in our maps and urban/rural analysis. However, in many cases, the generation and transmission cooperatives and municipal joint action agencies sell only wholesale power to other utilities and do not provide retail service and, thus, do not report serving any counties or towns. As a result, we were unable to reflect such service territories on our maps. 
Similarly, since our urban/rural analysis relied on the Census Bureau’s data of populations in the counties and towns that the preference customers report serving, we did not include in our analysis the service territories of the utilities that purchase power from the generation and transmission cooperatives and municipal joint action agencies. As we noted in comment 1, we did not include the generation and transmission cooperatives or municipal joint action agencies in our analysis because the PMAs’ annual reports, with very few exceptions, do not include them either. However, it is important to note that if our rate analysis had included the municipal utilities that buy from preference customer generation and transmission cooperatives and municipal joint action agencies, we believe that the rate increases for many of these utilities would have been very small: If a municipal utility purchased all its power from a direct preference customer of the PMA, the municipal utility’s rate increase would equal the increase we estimated for the direct preference customer. If the utility purchased a portion of its power from sources other than the preference customer, its rate increase would be lower. For example, according to Southwestern’s fiscal year 1995 annual report, Kansas Municipal Energy Agency (Kansas MEA) purchased power from Southwestern and transmitted it to 24 municipal utilities. We estimate that if the Kansas MEA paid market rates for the power it purchased from the PMA, its average rate would rise by 0.22 cent per kWh, a relatively small increase. If a municipal utility purchased all its power from the Kansas MEA, its rate would also rise by 0.22 cents per kWh. If a municipal utility purchased half of its power from the Kansas MEA, its rate increase would be 0.11 cents per kWh. Municipal utilities’ increases would often be small because the direct preference customers who sell them power often purchase a small percentage of their total power from the PMA. 3. 
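The proportional pass-through reasoning above can be sketched in a few lines. The sketch uses the Kansas MEA increase of 0.22 cent per kWh cited in this response; the function name is ours, and the calculation is an illustration of the reasoning rather than part of our formal analysis.

```python
def passed_through_increase(direct_increase, share_from_preference_customer):
    """A municipal utility's rate increase (cents/kWh) equals the direct
    preference customer's increase scaled by the share of the utility's
    power bought from that customer."""
    return direct_increase * share_from_preference_customer

# Kansas MEA example: a utility buying all of its power from the MEA
# sees the full 0.22 cent/kWh increase; one buying half sees half.
print(passed_through_increase(0.22, 1.0))  # 0.22 cent per kWh
print(passed_through_increase(0.22, 0.5))  # 0.11 cent per kWh
```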
DOE states that our maps do not show the service areas of the customers of the generation and transmission cooperatives and municipal joint action agencies. We agree. However, to be consistent with our rate analysis, we included only the counties and towns that the preference customers (those that purchase power directly from the PMAs) report. If we had included the service territories of the utilities that purchase power from preference customers, as DOE suggests, our state maps would have had more shadings for counties and/or dots for towns. However, it is important to note that, in many cases, the additional counties and towns in our maps would receive relatively small portions of their power from the PMA. For example, Southwestern’s 1995 annual report states that the PMA sells power to the Louisiana Energy and Power Authority, which, in turn, serves nine municipal utilities. We estimate that the Louisiana Energy and Power Authority purchased 8.15 percent of its power from the PMA in 1995. This means that the nine municipal utilities received, at most, 8.15 percent of their power from Southwestern. If the municipal utilities purchased portions of their power from other sources, the counties and towns they serve would consume a smaller portion of PMA power. Our analysis shows that many of the preference customers that sell power to other utilities purchase less than 10 percent of their power from the PMA. Moreover, regardless of how many utilities buy PMA power indirectly through preference customers, the portion of a state’s electricity consumption that comes from the PMA remains the same—for example, 0.7 percent in Louisiana. 4. In its comments, DOE states that our analysis shows that Southwestern is serving 14 towns and one county in the State of Missouri, with 93 percent of PMA power going to urban areas of the state. We believe that DOE misinterpreted our analysis. Our analysis does not show that 93 percent of Southwestern’s power in Missouri goes to urban areas.
Our analysis does show that of the 14 towns that preference customers who buy directly from the PMA report serving, 13, or 93 percent, have populations that are at least 80 percent urban, as defined by the Census Bureau. DOE states that the Associated Electric Cooperative’s “PMA power allocation serves rural areas throughout the State of Missouri.” However, this power is distributed to these areas by the utilities that purchase power from Associated Electric, not Associated Electric itself. Associated Electric did not report serving any counties or towns to Electrical World, the source of our data. Moreover, Southwestern, in its 1995 annual report, does not include the utilities that purchase power from the Associated Electric Cooperative in its total count of customers. Therefore, neither did we. 5. DOE states that using 1995 data compromises our rate analysis because (1) PMAs’ rates have recently declined and (2) prices for power purchased during periods of peak use have recently increased. These two factors would increase potential rate increases, but only if market rates remain the same. However, according to officials of the Edison Electric Institute, market rates for wholesale power have also declined since 1995. As the market continues to evolve, many industry experts believe these rates will fall further. If market rates fall more than the PMAs’ rates, our estimates of rate increases will prove to be overstated. We have seen no evidence that the PMAs’ rates have fallen more than rates in the wholesale market. 6. DOE maintains that power from the Pick-Sloan project will be reallocated to 25 Native American tribes and 11 other new customers in the Upper Midwest in 2001 and that, as a result, our analysis will be “even further outdated.” However, we were asked to examine the three PMAs’ sales, based on the most recent data—1995, not their sales in the future.
In addition, although Western may be reallocating its power, this does not necessarily mean that the new allocation would appreciably change the profile of the service areas (in terms of the extent to which they are urban or rural and in terms of their household income). This profile would change only if the areas losing Western’s power are more urban, more rural, or different in income than the areas that would gain access to Western’s power. Moreover, although Pick-Sloan sold more power than Western’s other projects, it nonetheless represented only about one-third of Western’s total sales in 1995. Consequently, the reallocation would have to be very large to significantly change the overall profile of Western’s preference customers’ service territories. 7. DOE states that average rates are not a good proxy for specific power services from PMAs. We acknowledge that average revenue per kWh (total revenues/total electricity sales) is an imperfect indicator of electricity rates because it combines the costs of several types of services, such as capacity, peak service, and off-peak service. However, as we have stated in several past reports, we believe it is a strong, broad indicator of the relative power production costs of the PMAs compared to those of investor-owned utilities and publicly owned generators. We agree that preference customers would likely pay more than the average rate per kWh to replace the portion of PMA power that is used during periods of peak demand. 8. DOE states that in many parts of the Upper Midwest and Southeast, it is typical for towns in a county to be served by an investor-owned utility while the remaining parts of the county are served by a rural electric cooperative. Thus, DOE believes that our analysis is flawed if the data do not account for this difference. We agree this may be an issue.
However, as stated previously, we relied on the set of counties and towns that the preference customers reported serving to Electrical World. The preference customers did not specify which portions of a county they served when they reported serving a county. Also, in Midwestern states, such as Iowa, Missouri, and Nebraska, preference customers primarily reported their service areas as towns rather than counties. Thus, the problem concerning counties that DOE identified would not arise there. In addition, even if, within a particular county, an investor-owned utility serves a town, it does not necessarily follow that the area outside the town has lower household incomes. 9. DOE states that we omitted state/federal agencies. We excluded state and federal agencies because, with a few exceptions, they are not utilities and thus are not in EIA’s Form 861 or “sales for resale” databases. As a result, we could not perform calculations on potential rate impacts with the approach we used for preference customers who are utilities. We excluded state and federal agencies from other analyses because (1) they do not provide retail service to residential end-users and (2) we wanted to keep the group of customers consistent across the analyses. In addition, DOE provides no economic analysis showing that the PMAs’ sales to these agencies provide a “large benefit to the state they are located in.” In most cases, even if the sales to these agencies were included in the analysis, the PMAs’ portion of a state’s total electricity consumption would be relatively small. We agree that some indirect economic impact may be attributable to the lower price of the power—relative to other retail prices—consumed by the preference customers not included in our analysis, but its measurement is uncertain. 10. DOE states that TVA is omitted.
We agree that if we had been able to include the 160 distributors that received TVA power in 1995 in our analysis, the percentages of PMA power provided to the seven states served by TVA would have increased. However, because of data limitations, such as EIA’s designating TVA as an Alabama utility in its Form 861 database and TVA’s not reporting a service territory in Electrical World, we could not apply the methodology used in our analysis, nor could we develop an alternative methodology that would appropriately incorporate TVA. However, our draft explained that Southeastern sells power to TVA, provided information on the amount of power that Southeastern sold to TVA in 1995 and 1996, and provided a map of TVA’s service territory. In addition, we have added a more detailed explanation of our methodology concerning TVA in appendix III. 11. DOE believes that Wisconsin should not be included in our analysis because Wisconsin Public Power received only “nonfirm” (interruptible during peak periods) power from Western. However, Western listed Wisconsin Public Power as one of its customers for 1995, and we believe that it was appropriate to include this customer in our rate analysis because we did not differentiate between firm (always available) and nonfirm power sales. Also, because Wisconsin Public Power sells only wholesale power and did not report serving any counties or towns in Electrical World, we could not include it in our other analyses. 12. DOE states that our report does not maintain a neutral description of the findings because the report goes beyond data reporting and does not present all opposing points of view. It cites as an example our observation that in cases where potential rate increases may be relatively large, PMAs currently sell power at relatively low rates and rate caps could be used to mitigate these increases. We believe that our report is balanced and that, throughout our report, we present a neutral description of our objectives and findings.
We mention that PMAs’ rates are relatively low to provide context for the relatively large rate increases. It is easier to understand the significance of a rate increase that exceeds 1.5 cents per kWh if the reader understands the base rate upon which the increase is calculated. On the issue of rate caps, we did not intend for our discussion to be a recommendation. We included it because, as in previous reports, this issue has been an important consideration in other deregulatory initiatives. 13. DOE states that our classifications of rates are subjective. We agree. However, we devised the parameters of these classifications on the basis of our examination of all the rate changes in our analysis. Moreover, we explicitly describe the values attached to each of these classifications in our report. We used these categories to simplify the discussion, not as a definitive statement. 14. With regard to selling power to high-income areas, DOE misinterprets our analysis. In the examples cited, we refer to the percentage of households with higher incomes, not the median income. More generally, a county may have a median income that is relatively close to the statewide median, yet still have a large portion of households with higher incomes. We agree with DOE’s comment regarding Native Americans’ receiving PMA power and have added an example for balance. 15. With regard to our not reporting rate increases as percentages, we made a subjective judgment not to do so. As we stated in appendix III, we believe that reporting rate changes in cents per kWh more accurately portrays the true value of the changes. In addition, the base rates preference customers paid in 1995 differ greatly from customer to customer. As a result, if we expressed the rate changes as percentages, the same increase measured in cents per kWh would be reported as different increases for two customers with different base rates. 16. In response to DOE’s assertion that our methodology is not conservative, we disagree.
We believe our methodology is conservative because we assumed no changes in wholesale market prices; we could not incorporate forecasts of wholesale prices, and if those prices decline in the future, as many industry experts predict, our estimates of rate increases will prove to be overstated. 17. DOE states that our urban/rural terminology may be misleading. As suggested, we have included the Census Bureau’s definition of urban in the body of our report and appendix III.
Federal Power: Options for Selected Power Marketing Administrations’ Role in a Changing Electricity Industry (GAO/RCED-98-43, Mar. 6, 1998).
Federal Electricity Activities: The Federal Government’s Net Cost and Potential for Future Losses (GAO/AIMD-97-110 and 110A, Sept. 19, 1997).
Federal Power: Issues Related to the Divestiture of Federal Hydropower Resources (GAO/RCED-97-48, Mar. 31, 1997).
Power Marketing Administrations: Cost Recovery, Financing, and Comparison to Nonfederal Utilities (GAO/AIMD-96-145, Sept. 19, 1996).
Federal Electric Power: Operating and Financial Status of DOE’s Power Marketing Administrations (GAO/RCED/AIMD-96-9FS, Oct. 13, 1995).
Pursuant to a congressional request, GAO provided a state-by-state analysis of the preference customers who buy power from the Southeastern Power Administration, the Southwestern Power Administration, and the Western Area Power Administration, focusing on the: (1) extent to which preference customers' rates may change if market rates are charged; (2) areas the three power marketing administrations' (PMA) preference customers report serving; and (3) incomes in these areas and the extent to which they are rural or urban. GAO noted that: (1) overall, slightly more than two-thirds of the preference customers that purchase power directly from the Southeastern, Southwestern, and Western Area power administrations may see relatively small or no rate increases if these PMAs begin to charge market rates for the power they produce; (2) in particular, given GAO's assumptions, almost all of Southeastern's preference customers would see average rate increases of up to one-half cent per kilowatt hour (kWh) on rates that in 1995 typically ranged from 3.5 to 6.0 cents per kWh; (3) most of these preference customers would see increases of less than one-tenth cent per kWh; (4) if the preference customers served by Southeastern pass the higher rates on proportionally to their residential end users, most end users would see their monthly electricity bill increase by less than $1, while the maximum increase would range in most states between $1 and $8, depending on the state; (5) preference customers who receive power from Western may see a variety of rate increases if market rates are charged; (6) as a group, Southwestern's preference customers may see rate increases that lie between those for Southeastern's and Western's customers; (7) most of Southwestern's preference customers may see relatively low rate increases of up to one-half cent per kWh on rates that typically ranged between 1.5 and 3.5 cents per kWh; (8) however, almost all preference customers in Oklahoma may see larger 
rate increases that exceed 1.5 cents per kWh; (9) in general, a preference customer's rate increase depends primarily on what portion of its total power comes from the PMA and how close the PMA's rate is to the market rate; (10) preference customers included in GAO's analysis that purchased power directly from the PMAs serve varying portions of 29 states; (11) the populations in the areas preference customers serve generally have median incomes that are similar to the median income in the entire state; (12) in about two-thirds of the states GAO examined, the preference customers serve counties and towns whose median household incomes are within 15 percent of the statewide median income; (13) however, in some states, preference customers primarily serve poorer areas and households; (14) nationwide, about half of the towns that preference customers serve are urban and about half are rural; and (15) most of the counties are mixed, about 40 percent are rural, and the remainder are urban.
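The pass-through arithmetic behind findings (4) and (9) above reduces to two proportional calculations, sketched below as a minimal illustration; the 0.44-cent rate gap and the 1,000 kWh of monthly household usage are assumed figures for the example, not values from the report.

```python
def customer_rate_increase(share_from_pma, market_minus_pma_rate_cents):
    """A preference customer's average rate increase (cents/kWh): the gap
    between the market rate and the PMA rate, weighted by the share of the
    customer's total power that comes from the PMA."""
    return share_from_pma * market_minus_pma_rate_cents

def monthly_bill_increase(rate_increase_cents_per_kwh, monthly_kwh):
    """Dollar increase in a residential end user's monthly bill if the
    customer's rate increase is passed through proportionally."""
    return rate_increase_cents_per_kwh * monthly_kwh / 100.0  # cents -> dollars

# A customer buying half its power from a PMA whose rate is 0.44 cents/kWh
# below market sees a 0.22 cents/kWh average increase; at an assumed
# 1,000 kWh of monthly household use, that raises a monthly bill by $2.20.
increase = customer_rate_increase(0.5, 0.44)
print(increase, monthly_bill_increase(increase, 1000))
```

This is why, as the report notes, a customer's increase depends primarily on the portion of its power that comes from the PMA and on how far the PMA's rate sits below the market rate.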
DOD has voluntary education programs in place to facilitate educational opportunities for service members to pursue postsecondary education during off-duty time. Program oversight for voluntary education programs is the responsibility of the Undersecretary of Defense for Personnel and Readiness. In addition, the military services are responsible for establishing, maintaining, operating, and implementing the programs at 350 education centers on military installations worldwide. Education centers are managed by an education services officer (ESO) and staff, such as education guidance counselors. Service members must meet certain requirements in order to participate in the program. These requirements include consulting with a counselor in order to develop an education goal and degree plan, maintaining a 2.0 grade point average (GPA) for undergraduate-level courses, and maintaining a 3.0 GPA for graduate-level courses. In accordance with DOD policy, tuition assistance covers up to $250 per credit hour, with a maximum of $4,500 per year. In fiscal year 2009, the military services’ TA program expenditures were $517 million, as shown in figure 1. In order to receive TA funds, DOD requires postsecondary institutions to be accredited by an agency recognized by Education. Accreditation is a peer review evaluative process that compares a school against its accrediting agency’s established standards. The accrediting agency conducts institutional reviews to assess the school in its entirety, including its resources, admissions requirements, and services offered, and the quality of its degree programs. The schools’ accreditation is then periodically reevaluated every 3 to 10 years, depending on the accrediting agency. Schools may lose accreditation if their accrediting agency determines that they no longer meet the established standards. Since 1972, SOC has enhanced educational opportunities for service members. 
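The interaction of the two tuition assistance caps described above ($250 per credit hour, $4,500 per year) can be sketched as follows. This is a simplified illustration of how the caps combine, not DOD's actual payment procedure; the function and its parameters are hypothetical.

```python
def ta_coverage(cost_per_credit_hour, credit_hours, ta_used_this_year,
                per_credit_cap=250.0, annual_cap=4500.0):
    """Tuition assistance payable for one course under the DOD caps:
    up to $250 per credit hour, and no more than $4,500 per year."""
    per_course = min(cost_per_credit_hour, per_credit_cap) * credit_hours
    remaining_annual = max(annual_cap - ta_used_this_year, 0.0)
    return min(per_course, remaining_annual)

# A 3-credit course at $300 per credit hour, with $4,000 of TA already used:
# the per-credit cap limits coverage to $750, and the annual cap to $500.
print(ta_coverage(300.0, 3, 4000.0))  # 500.0
```

Any tuition above these caps would be the service member's responsibility under this reading of the rule.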
SOC, a consortium of approximately 1,900 colleges and universities, is funded by DOD through a contract with the American Association of State Colleges and Universities (AASCU). SOC functions in cooperation with 15 higher-education associations, DOD, and active and reserve components of the military services to expand and improve voluntary postsecondary education opportunities for service members worldwide. SOC criteria stipulate that school policies and practices be fair, equitable, and effective in recognizing the special conditions faced by military students, such as trouble completing college degrees because of their frequent moves. Colleges and universities within SOC must have policies that meet four SOC criteria relating to transfer of credit, academic residency requirement, credit for military training and experience, and credit for nationally recognized testing programs. In addition, they must also follow SOC’s three principles: (1) service members should share in the postsecondary educational opportunities available to other citizens; (2) educational programs for service members should rely primarily on programs, courses, and services provided by appropriately accredited institutions and organizations; and (3) institutions should maintain a necessary flexibility of programs and procedures, such as recognition of learning gained in the military and part-time student status. Since 1991, DOD’s Military Installation Voluntary Education Review (MIVER) process has provided an independent third-party assessment of the quality of postsecondary education programs offered to off-duty service members at military installations around the world. DOD contracted with the American Council on Education (ACE) to administer the MIVER. 
The MIVER had two purposes: (1) to assess the quality of selected on-installation voluntary education programs and (2) to assist in the improvement of such education through appropriate recommendations to institutions, installations, DOD, and the military services. To assess the quality of education programs offered by schools on installations and to ensure that these programs are comparable to those offered at a school’s other campuses, MIVER assessed schools’ missions, education programs, program administration, resources, and program evaluation. The MIVER also examined the installations’ mission statements and command support, program management and leadership, student services, resources, and the voluntary education program plans to determine the quality of their education programs and services. A visiting team composed of college and university professors selected by the contractor evaluated the quality of educational services and support provided by the installation’s education center and servicing institutions. The MIVER provided installations and schools with commendations for their areas of strength, and recommendations for areas needing improvement. It also provided the military services with observations on issues that require the military services’ attention. MIVERs were for the purpose of quality assessment and enhancement only; these reviews were not intended to replace institutional accreditation. The MIVER contract with ACE expired on December 31, 2010, and DOD elected not to renew the contract because it is expanding the scope of these reviews; DOD is currently in the process of obtaining a new contract for its reviews. According to DOD, a contractor will be selected in 2011 and the new third-party review process will commence on October 1, 2011. On August 6, 2010, DOD published a proposed rule for its voluntary education programs in the Federal Register for public comment.
Included in this rule, among other things, are guidelines for establishing, maintaining, and operating voluntary education programs, including instructor-led courses offered on and off installations, distance education courses, and the establishment of a DOD Voluntary Education Partnership Memorandum of Understanding (MOU) between DOD and all educational institutions receiving TA funds. DOD estimates that this new rule will become effective at the beginning of 2012. While Education does not have a role in overseeing DOD education programs, it is responsible for the administration of the federal student aid programs under Title IV and oversees over 6,000 postsecondary institutions receiving these funds. Education determines which institutions of higher education are eligible to participate in Title IV programs; eligible institutions include the following:
Public institutions—institutions operated and funded by state or local governments, which include state universities and community colleges.
Private nonprofit institutions—institutions owned and operated by nonprofit organizations whose net earnings do not benefit any shareholder or individual. These institutions are eligible for tax-deductible contributions in accordance with the Internal Revenue Code (26 U.S.C. § 501(c)(3)).
For-profit institutions—institutions that are privately owned or owned by a publicly traded company and whose net earnings can benefit a shareholder or individual.
Education also reviews audit reports and may impose penalties or other sanctions on schools found in violation of Title IV requirements. DOD policies and procedures to oversee schools receiving TA funds vary based on the school’s level of involvement in the program. While DOD monitors enrollment patterns and schools’ funding levels, and addresses complaints about postsecondary schools on a case-by-case basis, its oversight activities do not include a systematic approach that considers these factors when targeting schools for review.
At a minimum, all postsecondary schools receiving TA funds are required to be accredited by an agency recognized by the Department of Education to ensure the quality of the education programs being offered to service members. Schools that are members of the SOC consortium or offer classes on an installation are subject to additional DOD oversight, as shown in figure 2. Schools that elect to become members of the SOC consortium must comply with SOC principles and criteria, which promote institutional flexibility with regard to transfer of credits, the development of programs and procedures appropriate to the needs of service members, and safeguarding the quality of educational programs offered to service members. SOC also reviews member schools’ student loan default rates and verifies their accreditation status every 2 years, according to a SOC official. In addition, SOC considers recruitment practices such as high-pressure promotional activities and “limited time only” enrollment discounts inappropriate activities for its member institutions to engage in. According to a SOC official, SOC will submit a formal complaint to the school’s accreditor when it becomes aware of serious violations of prohibited marketing practices. Beyond accreditation and the mandatory SOC membership required of institutions that provide academic courses on military installations, schools offering classes on an installation are subject to additional oversight measures, including state licensure, MIVER quality reviews, and the terms and conditions of an individualized MOU with the installation commander. The MOU governs the school’s operations on an installation; for example, it can cover reporting requirements on course offerings and the maintenance of student data such as course grades and degrees completed.
Education center officials at two installations we visited reported that they stay in constant contact with on-installation schools and review relevant information such as school term schedules and class rosters to ensure that schools comply with their MOUs. If a school does not comply with the MOU requirements, the installation commander can require the school to leave the installation, according to education center officials at two of the installations we visited. In general, DOD and its military services’ oversight of schools is based on a school’s level of program participation rather than a risk-based approach. To address the varying levels of oversight and create a more uniform set of program oversight policies, DOD has developed a new standard MOU for all schools receiving TA funds. Under the new MOU, all schools will be required to, among other things, abide by SOC principles and criteria and provide an evaluated educational plan to service members. DOD estimates that this new rule will be implemented at the beginning of 2012. The MIVER was limited to institutions that offer face-to-face courses at military installations. While distance learning courses accounted for 71 percent of courses paid for with TA funds in fiscal year 2009, DOD did not have a review process in place to assess the quality of these institutions. In addition, quality reviews were not conducted at all installations. According to DOD officials, since the MIVER process was first initiated, in 1991, all Marine Corps installations were visited, while only a portion of installations of the other military services were reviewed (86 percent of Navy installations, 56 percent of Army installations, and 30 percent of Air Force installations). 
Under the expanded review process that is being developed, all institutions receiving TA funds will be subject to a new third-party review process—a Military Voluntary Education Review (MVER)—regardless of whether the school delivers courses face to face or by distance education. In addition, DOD officials said that schools will be selected for the MVER process based on the amount of TA funds they receive. DOD has relied on MIVER to evaluate the quality of the education services being provided to its service members at installations; however, three of the four services lacked a process to follow up on and respond to the findings of the MIVER process. During the MIVER process, reviewers developed a report listing their recommendations, commendations, and observations of the educational services provided by the installation under review and the institutions offering courses at that installation. MIVER final reports were distributed to the institutions and installations that were reviewed as well as DOD officials and its military services. The Army was the only military service that required installations that received a MIVER visit to submit a follow-up report indicating actions taken in response to the MIVER review. The Air Force recognized the importance of having such a process and was considering adopting a policy that would implement a formal process of tracking and following up on items mentioned in MIVER reports. The Navy and Marine Corps reported that they did not have a formal process requiring their installations to track the outcome of MIVER recommendations, commendations, and observations. These military services also reported that they review and maintain copies of all MIVER reports. One DOD official reported that MIVER reports were helpful in identifying the strengths, weaknesses, and areas for improvement in DOD educational programming.
Additionally, according to ESOs we interviewed, some MIVER recommendations were implemented with successful results. For example, an ESO told us that some of the Navy installations implemented a MIVER recommendation to strengthen their coordination with nearby schools. Given that there was no DOD-wide requirement to track the outcomes of MIVER recommendations and some of the military services did not require schools and installations to formally respond to MIVER findings, it is unclear to what extent recommendations that could improve the quality of education services offered at schools and installations were addressed. There is currently no such requirement in place for the new third-party review process, according to DOD officials. While DOD has several mechanisms for service members to report problems associated with their TA funding, it lacks a centralized system to track these complaints and how they are resolved. If service members have a complaint or issue regarding a school, they can speak with a counselor at their installation’s education center, contact a representative from SOC, use the call center service, or use the Interactive Customer Evaluation (ICE). According to DOD officials, DOD’s practice is to have ESOs and education center staff resolve complaints at the installation level and to only elevate issues that warrant greater attention at the military service level. However, DOD and its military services do not have a formal process or guidance in place for when ESOs should elevate a complaint to their military service chief or DOD. DOD reported that most of the complaints it receives are administrative in nature, but a few complaints involve improper or questionable marketing practices. ESOs we spoke with reported that the most frequent complaints they receive from all sources tend to be administrative, such as billing issues.
These complaints are often handled directly by counseling staff at the education offices and are generally resolved immediately at the installation level, according to DOD officials. ESOs told us that they also receive complaints about improper or questionable marketing practices by schools receiving TA funds. ESOs and their staff mentioned cases where school representatives have conducted marketing activities at installations without the installation commander’s or ESO’s permission. Although the ESOs do not maintain an official record of all complaints, ESOs we spoke with recalled that most of the instances of a school engaging in improper or questionable marketing practices have involved for-profit schools. They provided us with documentation of a few examples of these complaints. In one case, a for-profit school was found to be charging higher tuition rates to service members than civilians and offering service members $100 gas cards upon course completion. The ESO at the installation where this incident occurred told us that this issue was resolved by speaking with school officials and an accrediting agency. An official also told us that another for-profit school representative continually called and e-mailed a service member during day and evening hours after he elected not to attend that institution. SOC also helps DOD and its military services in resolving complaints. SOC produces and disseminates quarterly reports to the voluntary education service chiefs of each of the military services to inform them of the issues that SOC has addressed on behalf of DOD and its military services. SOC addresses various administrative matters such as answering questions from schools and service members about the TA program. A SOC official told us that SOC also resolves complaints involving aggressive marketing, claims of unfair grading, and issues relating to deployment and transfer of credit between institutions. 
For example, SOC intervened on a student’s behalf and successfully secured transfer of credits when a school failed to honor its agreement with the service member to do so upon course completion. Education center staff elevate issues that cannot be handled locally to the military service chief level, but DOD does not have specific guidance explaining when to do so. When a school distributed flyers and e-mails at an installation to advertise courses it planned to offer on-installation without an MOU and misrepresented the number of credits service members would receive from taking the school’s courses, DOD officials and SOC were notified of these activities. In response to these issues, DOD shared its concerns and copies of the school’s marketing materials with Education. Additionally, SOC filed a complaint with the school’s accrediting agency. Education planned to review the school’s marketing materials, and the accreditor planned to hold a meeting to determine the appropriate actions to address SOC’s complaint, according to DOD officials. DOD’s Interservice Voluntary Education Working Group serves as a forum for service officials to share information, including complaints they might be made aware of, with DOD headquarters officials. The group, with representation from each military service, meets quarterly to discuss various DOD voluntary education-related issues and share information among the four military services. Despite such examples of complaints being referred up the chain of command, one military service official said that it is difficult to establish policy on how to handle every complaint or issue that may arise. Without policies and a centralized system to track complaints and their outcomes, DOD may not have adequate information to assess trends across its military services or determine whether complaints have been adequately addressed.
Education center staff and school representatives outlined two areas that could improve the program: (1) requiring schools to offer distance learning tutorials, and (2) developing a uniform installation access policy for schools.

Require schools to offer a distance learning tutorial: Officials at the military education centers we visited suggested that the availability of a distance learning tutorial for all service members accessing online courses is important to ensure that service members successfully complete these courses. Because of the mobile nature of a service member's life, online education offerings provide an opportunity for service members to access and complete postsecondary courses. However, counseling staff and school representatives we interviewed at one installation reported that some service members have had difficulty using the course software to access discussion boards and/or submit assignments because they had not previously taken an Internet-based course. Officials from one of the institutions we spoke with told us that they offer online tutorials and technical support for their distance learning courses, and participation in the online tutorial is strongly encouraged.

Uniform installation access policy for schools: School representatives we met with suggested that DOD establish a uniform installation access policy for all schools participating in the TA program. Installation access policies are determined at the installation level by the ESO and installation commander, and these policies varied among the installations we visited. In addition to schools that offer courses on an installation and have a signed MOU, some schools are granted access to the installation by the ESO as visiting schools. These schools do not offer courses on an installation but instead offer periodic office hours and academic support for the students they serve at that installation.
At one installation we visited, the ESO grants access to only a few visiting schools and requires that they all sign an MOU outlining the terms of their operations on an installation. However, at another installation we visited, the ESO allows any school that currently serves students on an installation to hold office hours with the education center's approval. A few school representatives expressed concerns that their limited or nonexistent installation access hindered their ability to support their students.

While DOD coordinates with accrediting agencies, it does not use accrediting agencies' monitoring results or consider schools' unapproved substantive changes as it carries out its oversight. DOD officials told us they communicate with accrediting agencies through SOC to verify accreditation, and report complaints or problems with schools. SOC, on behalf of DOD, contacts accrediting agencies biannually to verify the accreditation status of its member institutions, according to a SOC official. Officials from DOD and its military services reported that they also contact accrediting agencies directly or through SOC when they cannot resolve complaints against schools. For example, one military service worked with SOC to file a complaint against a school when it found that the school was falsely marketing its courses to its service members. According to DOD, this complaint led to an investigation into the matter by the school's accrediting agency. DOD also reported that it holds annual meetings with accrediting agencies to discuss DOD policies and procedures and the delivery of educational programs to its military services. DOD's oversight process does not take into account accrediting agencies' monitoring results of schools that could negatively affect students and service members.
Schools can be sanctioned by accrediting agencies when they fail to meet established accrediting standards, such as providing sound institutional governance, providing accurate information to the public, and offering effective educational programs. For example, on the basis of an accrediting agency's monitoring results that were publicly available, a school was warned it could be at risk of losing its accreditation in part because it lacked evidence of a sustainable assessment process to evaluate student learning. The school was required to submit a report to the accrediting agency providing evidence of its process and that the results were being used to improve teaching, learning, and institutional effectiveness. According to accrediting agency officials, schools are given multiple opportunities to correct deficiencies before having accreditation revoked and can be sanctioned for up to 2 years.

DOD does not currently require schools to have their substantive changes approved by their accrediting agency in order to receive TA funds. Schools may introduce new courses or programs significantly different from current offerings, and such changes may be considered substantive and outside the scope of an institution's accreditation. Unlike DOD, Education requires a school to obtain its accrediting agency's approval on any substantive change and report this information to Education for approval before it can disburse Title IV funds to students enrolled in new courses or programs considered to be substantive changes. Education requires accrediting agencies to have substantive change policies in place to ensure that any substantive change to an institution's educational mission or programs does not adversely affect its capacity to continue to meet its accrediting agency's standards.
DOD recently proposed that tuition assistance funds should be available for service members participating in accredited undergraduate or graduate education programs and that approved courses are those that are part of an identified course of study leading to a postsecondary certificate or degree. According to Education, schools seeking Title IV funds generally wait for approval before enrolling students in such new courses and programs, but can collect other federal education assistance and out-of-pocket funds during that time. Students enrolled in unapproved courses or programs have less assurance that they are receiving a quality education, according to Education officials. On the basis of Education's fiscal year 2009 Program Compliance Annual Report, we determined that there were over 1,200 substantive changes processed in fiscal year 2009.

DOD coordinates with Education to some extent but does not utilize Education's compliance data to oversee schools receiving TA funds. The extent of DOD's coordination with Education has generally been limited to accreditation status. According to DOD officials, DOD regularly searches Education's Web site to verify schools' accreditation status, and utilizes Education's resources for counseling students on federal student aid. In addition, DOD reported that it invited Education officials to attend its Interservice Voluntary Education Working Group meeting in September 2010 to discuss future changes to the accreditation process. However, DOD does not utilize information from Education's monitoring reviews to inform its oversight efforts. This information can alert DOD to problems at schools that may affect the quality of education provided to students, including service members. Education determines schools' initial eligibility to participate in federal student aid programs through eligibility reviews and continuing eligibility through program reviews, compliance audits, and financial audits.
The results of these oversight measures provide additional insight into a school's financial stability, quality of education, and compliance with regulations that provide consumer protections for students and the federal investment. See table 1 for a summary of these oversight activities. These results can also give DOD and its military services additional insight into a school's ability to provide a quality education and services to students. Schools that are financially unstable or fail to comply with student loan default rate and 90/10 requirements may be unable to fulfill their promises to provide students with quality program offerings, according to Education. While DOD monitors default rates through SOC, it does not formally monitor 90/10 information. Military education center staff we spoke with at two military installations indicated that protecting service members from the sometimes deceptive recruiting practices of some schools can be a challenge. Education's monitoring results in these areas could provide relevant information to help DOD and its military services better target their oversight and provide additional consumer protection for service members.

Education has recently developed additional provisions to better address oversight in distance education. Education has developed a review process and guidance for its staff to assess the integrity of distance learning programs, such as whether schools have a process to verify student attendance. DOD has proposed that distance education schools be subject to MVER reviews, but currently does not generally evaluate these courses. DOD may be able to leverage information from Education's ongoing efforts in this area. In part because of inconsistencies in states' authorization requirements for schools, Education recently clarified what is required for institutions of higher education to be considered legally authorized by a state.
Under new regulations that will generally take effect in July 2011, states must, among other things, have a process to review and address complaints about institutions authorized by the state. In addition, the new regulations require that if an institution is offering postsecondary education through distance or correspondence education in a state in which it is not physically located, the institution must meet any state requirements for it to be legally offering distance or correspondence education in that state. Unlike Education, DOD does not verify that all schools receiving TA funds have state authorization; it only verifies state authorization for on-installation schools. Since DOD reported that it has not had the opportunity to fully review Education's new rule regarding state authorization, it is unclear whether it will follow those requirements. Education has partnerships with a number of other federal agencies, including the Department of Justice and the Federal Trade Commission. Education partners with these two agencies to share information on complaints and college scholarship and financial aid fraud. Additionally, Education has a Federal Agency Advisory Working Group to facilitate its coordination with other federal agencies and told us that it is willing to share information and provide guidance to DOD in real time.

In fiscal year 2009, nearly 377,000 service members relied on TA funds to help further their academic and professional goals. Schools that offer distance learning courses play an ever-increasing role in helping students achieve these goals. The amount of TA funding going toward distance learning programs creates new oversight challenges for DOD and its military services, especially since DOD oversight has primarily focused on schools offering traditional classroom instruction on military installations.
Increased oversight is needed to remedy gaps in the accountability of the quality review process and the process to address complaints against schools. Although DOD has plans to improve its oversight of schools receiving TA funds, without accountability measures for its quality review process, DOD cannot be certain its efforts to safeguard TA funds will be effective. In addition, while DOD is aware of some concerns regarding schools' improper recruiting practices, without a centralized process to track complaints against schools and their resolution, DOD lacks the ability to accurately determine trends in areas requiring oversight and whether concerns have been adequately addressed. DOD could further enhance its oversight efforts by leveraging resources and information that accrediting agencies and Education already collect. For example, the additional consumer protections provided by Education's regulations on schools' substantive changes could provide DOD with additional assurance that TA funds are going toward courses and programs that have been properly vetted by the schools' accreditors. Without leveraging these additional oversight tools, DOD and its military services may lack key information that could help strengthen and inform future program oversight. Targeted improvements in these areas may help DOD and its military services to better ensure that TA funds are being properly utilized and service members are receiving a quality education.

We recommend that the Secretary of Defense direct the Undersecretary of Defense for Personnel and Readiness to take the following five actions to improve its oversight of schools receiving TA funds: 1.
To improve the accountability of DOD, its military services, their installations, and participating postsecondary schools in developing its new third-party review process, require all schools, installations, and the military services to formally respond in writing to related recommendations pertaining to them, and develop a process to track and document the status of all recommendations for improvement.

2. Evaluate ways to develop a centralized process to record and track the status and outcomes of complaints. This should be done in a way that balances the need for a comprehensive tracking system with, to the extent possible, minimizing the reporting burden placed on education center staff at military installations.

3. Undertake a systematic review of its oversight of schools receiving TA program funds. In doing so, the Undersecretary of Defense for Personnel and Readiness should consider the following: developing a more systematic risk-based approach to oversight by utilizing information from accrediting agencies and Education to better target schools; modifying its proposed standard MOU to include an explicit prohibition against school conduct that may adversely affect service members, such as misrepresentation; and reviewing Education's recently promulgated requirements for state authorization of schools and coordinating with Education to determine the extent to which these requirements are useful for overseeing schools receiving TA funds.

4. Prohibit TA funds from being used to pay for courses and programs that are not included within the scope of an institution's accreditation. This could include leveraging Education's knowledge and expertise to determine the extent to which other substantive changes listed in Education's regulations are applicable to the military education programs.

5. Require and verify that all schools receiving TA funds are authorized by their state.

We provided a draft of this report to DOD and Education.
DOD’s written comments are reproduced in appendix III. DOD agreed with our recommendations and noted steps it would take to address them. Additionally, DOD and Education provided technical comments on the draft. We incorporated each agency’s comments as appropriate. We are sending copies of this report to relevant congressional committees, the Secretary of Defense, the Secretary of Education, and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or scottg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To address our objectives, we reviewed and analyzed relevant federal laws, regulations, and program documents and data, including program participation and expenditure data from the Department of Defense (DOD) and its military services. We also reviewed the Department of Education’s (Education) monitoring results to report on cases where schools were not in compliance with Title IV requirements. We interviewed officials from DOD, its military services, and contractors—Servicemembers Opportunity Colleges (SOC) and the American Council on Education. We conducted site visits to education centers located at military installations of the four military services to gain a better understanding of how the program is implemented. We selected these sites based on whether the sites had a mix of public, private nonprofit, and for-profit schools offering classes or held office hours at the installations. We visited one installation per military service—Joint Base Andrews, Fort Carson, Marine Corps Base Quantico, and Naval Station Norfolk. 
During our site visits, we toured the education facilities and interviewed education center staff and representatives from 16 schools across the four installations that we visited. (See app. II.) We interviewed Department of Education officials to determine the extent to which they coordinate with DOD as part of DOD's efforts to oversee its Military Tuition Assistance (TA) program. Finally, we interviewed representatives from an association of colleges and universities (Council for Higher Education Accreditation) and selected accrediting agencies (the Distance Education and Training Council and the Higher Learning Commission) in order to obtain information about the extent to which they coordinate and provide information to DOD and its military services for monitoring schools. Overall, we assessed the reliability of these data by reviewing existing information about the data and the system that produced them and interviewing agency officials knowledgeable about the data. We determined the data to be sufficiently reliable for the purposes of this report.

We conducted this performance audit from August 2010 to February 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the above contact, Tranchau (Kris) Nguyen (Assistant Director), Raun Lazier (Analyst-in-Charge), James Bennett, Jessica Botsford, Susannah Compton, Catherine Hurley, Edward (Ted) Leslie, Katya Melkote, and Luann Moy made significant contributions to this report.
In fiscal year 2009, the Department of Defense's (DOD) Military Tuition Assistance (TA) Program provided $517 million in tuition assistance to approximately 377,000 service members. GAO was asked to report on (1) DOD's oversight of schools receiving TA funds, and (2) the extent to which DOD coordinates with accrediting agencies and the U.S. Department of Education (Education) in its oversight activities. GAO conducted site visits to selected military education centers and interviewed officials from DOD, its contractors, Education, accrediting agencies and their association, and postsecondary institutions.

DOD is taking steps to enhance its oversight of schools receiving TA funds, but areas for improvement remain. Specifically, DOD could benefit from a systematic risk-based oversight approach, increased accountability in its education quality review process, and a centralized system to track complaints. DOD does not systematically target its oversight efforts based on factors that may indicate an increased risk for problems, such as complaints against schools or the number of service members enrolled at a school. Instead, DOD's oversight policies and procedures vary by a school's level of program participation, and schools that operate on base are subject to the highest level of oversight. DOD plans to implement more uniform oversight policies and procedures, but they are not expected to take effect until 2012. In addition, the process DOD used to review the academic courses and services provided by schools and military education centers was narrow in scope and lacked accountability. The review was limited to schools offering traditional classroom instruction at installations and did not include distance education courses, which accounted for 71 percent of courses taken in fiscal year 2009. The contract for these quality reviews expired on December 31, 2010, and DOD plans to resume its reviews on October 1, 2011, when a new contractor is selected.
DOD is developing an expanded quality review process and plans to select schools based, in part, on the amount of TA funds received. With regard to accountability, DOD's review process provided recommendations that could improve educational programming, but there is no DOD-wide process to ensure that these recommendations have been addressed. Furthermore, DOD lacks a system to track complaints about schools and their outcomes. As a result, it may be difficult for DOD and its services to accurately identify and address any servicewide problems and trends.

DOD's limited coordination with accreditors and Education may hinder its oversight efforts. DOD verifies whether a school is accredited; however, it does not gather some key information from accreditors when conducting its oversight activities, such as whether schools are in jeopardy of losing their accreditation. Accreditors can place schools on warning or probation status for issues such as providing inaccurate information to the public and poor institutional governance. Schools can experience various problems within the 3- to 10-year accreditation renewal period, and these problems can negatively affect students, including service members. Additionally, DOD does not require schools to have new programs and other changes approved by accrediting agencies in order to receive TA funds. Currently, students enrolled in unapproved programs or locations are ineligible to receive federal student aid from Education, but can receive TA funds. DOD's coordination with Education has generally been limited to accreditation issues and Education's online resources about schools and financial aid. DOD does not utilize information from Education's school-monitoring activities to inform its oversight efforts. Education's findings from program reviews and financial audits of schools provide insights about schools' financial condition, level of compliance, and governance.
Collectively, this information could be used to better target schools for review or inform other oversight decisions. GAO recommends that DOD (1) improve accountability for recommendations made by third-party quality reviews, (2) develop a centralized process to track complaints against schools, (3) conduct a systematic review of its oversight processes, (4) take actions to ensure TA funds are used only for accreditor-approved courses and programs, and (5) require and verify state authorization for all schools.
An estimated 5 million illegal aliens resided in the United States in 1996, according to INS. Official estimates, however, are not available on the number of children born to illegal aliens in the United States. Illegal alien parents may apply on behalf of their children for those federal welfare benefits to which their children are entitled as citizens. A household composed of an illegal alien parent and a citizen child gains access to federal welfare benefits by virtue of the child's eligibility. The AFDC, Food Stamp, and SSI programs generally do not provide direct payment of benefits to minors—children under 18—but instead require that their benefits be paid through an authorized representative payee, typically the custodial parent. In such cases, the citizenship status of the parent is not a consideration in deciding who the payee should be. The rationale is that the parent of an eligible child is in the best position to make decisions on how benefits should be spent on behalf of his or her child. For housing assistance, HUD provides funds to a public housing authority or owner of a housing unit to subsidize the rent for an eligible household. Under HUD rental programs, a household composed of an illegal alien and a citizen would be eligible for assistance if the citizen met eligibility criteria and assistance was available. Although illegal alien parents are not eligible for assistance, their income and assets are taken into account when determining the eligibility of and benefit amounts for their citizen children. Table 1 shows the average monthly benefit amounts under the various programs. Recipients often receive assistance from more than one program. In 1995, about 87 percent of AFDC households also received Food Stamp benefits and 31 percent received housing assistance. No individual may receive both AFDC and SSI benefits.
The 1996 welfare reform legislation made sweeping changes to welfare programs for needy families, but it did not directly affect the eligibility of illegal aliens' citizen children. Although TANF block grants, which replaced AFDC, will allow states more flexibility in structuring their programs, federal and state officials stated that U.S. citizen children of illegal aliens will remain eligible for assistance. The provision in the welfare reform law that requires reporting of illegal aliens to INS, however, may have an impact in the longer term. Prior to the legislation, the AFDC, SSI, and housing assistance programs generally were not required to report illegal aliens to INS. The new provision requires that states operating TANF programs, the Commissioner of SSA, and the Secretary of HUD periodically provide information to INS on any individual they know is unlawfully in the United States. Federal officials stated that an interagency workgroup is presently determining what level of evidence will be required to establish that someone is known to be unlawfully present in the United States, as well as reporting procedures. No time frame, however, was available for when agencies and states are to begin reporting known illegal aliens to INS. If the final regulations for this reporting affect illegal aliens acting as payees for their U.S. citizen children, some illegal aliens could be discouraged from seeking benefits for their eligible children. Also, the Congress is considering legislation that would deny citizenship to children born in the United States to a parent who is not a citizen or lawful permanent resident.

In fiscal year 1995, an estimated $1.13 billion—$700 million under the AFDC program and $430 million in Food Stamp benefits—was provided to households in which either the head of household or his or her spouse was an illegal alien. These benefits were provided to illegal alien parents for the well-being of their U.S. citizen children.
The payments represent about 3 percent of total AFDC benefit costs and about 2 percent of total Food Stamp benefit costs. Approximately 153,000 AFDC households—with 300,000 citizen children—and 224,000 Food Stamp households—with 428,000 citizen children—had an illegal alien as the head of household or spouse of the head of household. In many cases, these estimates reflect the same households and citizen children, since 94 percent of the AFDC households with an illegal alien parent also received Food Stamp benefits and 65 percent of the Food Stamp households with an illegal alien parent also received AFDC. A summary of estimated benefits provided to these households in fiscal year 1995, by program, is shown in table 2. About 77 percent of AFDC and 78 percent of Food Stamp households with an illegal alien parent had one or two citizen children; the remaining households had three or more citizen children receiving benefits. In addition, while most of the illegal alien parent households had only citizen children in the households, a significant portion—23 percent of AFDC and 29 percent of Food Stamp recipients—had both eligible citizen children and noneligible illegal alien children. SSA does not have any comprehensive data on the number of U.S. citizen children of illegal aliens receiving SSI benefits. Based on the limited data available, we estimated that as of December 1996, at least 3,450 disabled U.S. citizen children of illegal aliens received benefits at an annualized federal benefit cost of about $17.6 million. SSA officials explained that readily available data cannot be used to accurately estimate the total number of cases in which an illegal alien parent received benefits on behalf of citizen children because the citizenship status of payees is not uniformly identified in SSA’s automated systems. 
Similarly, HUD does not have any data that would allow for an estimate of the number of households in which illegal aliens are receiving rental housing assistance for the benefit of U.S. citizen children. Before June 1995, citizenship status was not considered when determining the eligibility of individuals for HUD's various rental assistance programs, and such information was not collected or maintained on participants. However, recently implemented regulations and provisions included in the immigration reform legislation prohibit HUD from providing rental assistance to persons other than U.S. citizens and certain qualified noncitizens. HUD has begun redesigning its automated databases and data collection instruments to capture information on participants' citizenship and alien status. However, this process is ongoing and the agency is not yet able to report the level of assistance being provided to households composed of both illegal aliens and eligible U.S. citizen children.

Most illegal aliens receiving AFDC or Food Stamp benefits on behalf of U.S. citizen children are located in only a few states. Over 85 percent of the households with children of an illegal alien parent receiving AFDC are located in California, Texas, New York, and Arizona. (See fig. 1.) The distribution of Food Stamp households with an illegal alien parent is only slightly different, with 54 percent of the cases in California, 23 percent in Texas, and 4 percent in Arizona. In addition, the majority of SSI cases of illegal alien payees for citizen children that records allowed us to identify were located in California and Texas. In California, households composed of an illegal alien parent and citizen children represented about 10 percent of the state's AFDC and Food Stamp caseloads in 1995 and accounted for $720 million in AFDC and Food Stamp benefits combined.
Other studies from the California counties of Los Angeles and Orange estimated that these households have constituted up to 20 percent of each county's AFDC caseload in recent years. In the other states for which we developed estimates, illegal alien payee cases ranged from 4 to 7 percent of each state's AFDC and Food Stamp caseloads. (See app. I for more details on the estimated number of households and benefits provided by state and the associated sampling errors.)

Although procedures are in place to prevent and detect fraud, comprehensive national statistics on fraud perpetrated by illegal aliens serving as payees on behalf of their citizen children are not available. However, studies of AFDC households in a few California counties with large populations of illegal aliens serving as payees indicate that there is little difference in the rate and type of misrepresentation or fraud detected between these households and other households receiving benefits. To prevent and detect misrepresentation or fraud, federal, state, and local agencies use various approaches in processing applications for benefits, ensuring the continued eligibility of recipients, and maintaining payment accuracy for the AFDC, Food Stamp, and SSI programs. While each of these programs has different goals, all require individuals or families to meet certain eligibility criteria. To establish program eligibility, proof of citizenship and a social security number typically must be presented for all applicants, including U.S. citizen children of illegal aliens. In addition, since these are means-tested programs, the income and resources of an applicant's household cannot exceed specific limits set by each program. Benefits, based on total household income, are then computed for the eligible family members. The amount of household income and other resources are verified at the time of application and, for successful applicants, periodically thereafter to ensure continued eligibility and payment accuracy.
Applicants must provide proof of income and resources such as pay stubs, vehicle registration forms, and rental agreements. For the AFDC, Food Stamp, and SSI programs, officials access the Income and Eligibility Verification System or use computer matching with other databases to corroborate information provided by applicants. In addition to the verification procedures used during the application process and periodic reviews, some states take further steps to aid in detecting and preventing misrepresentation or fraud. For instance, all AFDC applicants in New York City are required to participate in office interviews and home visits by investigative staff to validate application information. As a result of these investigations, approximately 35 percent of new applicants never received benefits, according to city officials. In California and Texas, cases are referred to investigators for additional reviews, including home visits, if fraud is suspected. Although the officials we spoke with generally agreed that intensive screening is effective, it is also resource-intensive and costly. Under the AFDC and Food Stamp programs, all states have been required by federal regulations to conduct quality control reviews of a sample of cases to ensure that benefit amounts are correct. These reviews include verification of eligibility and income data; if fraud is suspected, a referral for investigation is made. Although the quality control program is not a requirement under TANF, states may continue the program at their option. In addition to the application and review procedures, some federal agencies, states, and localities train staff to identify fraudulent documents and provide updates on the latest counterfeit documents. For example, SSA staff use black light equipment to determine whether documents submitted in support of SSI benefit claims are authentic. Staff are also trained to use interview techniques to better identify misrepresentation by applicants. 
National studies on the nature and extent of misrepresentation or fraud by illegal aliens obtaining benefits for their citizen children are not available. However, three California counties—Fresno, Los Angeles, and Orange—have experienced rapid growth in their AFDC child-only cases (those without an adult recipient) and, in recent years, began conducting studies to investigate fraud among child-only and other cases. Although these studies used a much broader definition of fraud and a different methodology than generally used in AFDC and Food Stamp quality control reviews, they provide some evidence that the types and frequency of misrepresentation or fraud in cases where illegal aliens receive AFDC benefits for their U.S. citizen children are similar to those of the general AFDC population. Based on a random sample of 450 AFDC cases, a 1997 Orange County study identified potential misrepresentation or fraud in 38 percent of the illegal alien payee cases and over 46 percent of all other cases. These findings of potential fraud were associated with overpaid benefit amounts totaling 9 percent of combined AFDC and Food Stamp benefits paid in a typical month to the 450 cases. Two additional studies based on random samples and conducted in Los Angeles County and Fresno County identified potential misrepresentation or fraud in 42 to 45 percent of the AFDC cases involving illegal alien payees. In these two studies, about one-half of the cases in which misrepresentation or fraud was identified resulted in an overpayment of benefits. In the other cases, the incorrectly reported information did not have an impact on benefit amounts. The most commonly cited types of misrepresentation or fraud identified in all three of the California studies were misreported or unreported income and misrepresented household composition, such as unreported members living in a household. 
The types found in cases involving illegal alien payees did not differ from those of the general AFDC population. Officials in New York and Texas also identified misreporting of income and household composition as the most common types of misreporting among AFDC child-only cases and the general AFDC population. According to one of the California studies, 81 percent of the misreported income cases involved cash obtained by applicants from sources that made verification virtually impossible because there are no records of the financial transactions. This study uses the term “underground economy” to refer to a source of income from which individuals are paid in cash and their earnings are not reported to the Internal Revenue Service or the state. In addition, officials in California, Texas, and New York cited the difficulties of verifying income that individuals—both illegal aliens and citizens—derived from the underground economy. Moreover, because illegal aliens may not legally obtain social security numbers—which serve as the basis for reporting through the Income and Eligibility Verification System—verification of income for this population is difficult. California officials also noted that it is more difficult to obtain evidence of fraud without a social security number. We received comments from the Department of Health and Human Services (HHS) and the Department of Agriculture (USDA). Their comments are included in appendixes II and III, respectively, and technical comments were incorporated as appropriate. HHS stated that our report identifies the difficult and complicated policy issue of providing food and cash assistance to families containing both citizens and illegal immigrants. Yet it also stated that we had not sufficiently emphasized that citizen children of illegal alien parents are legally eligible for benefits on the same basis as any other citizen in need. 
We believe our report clearly states that these citizen children are eligible for assistance and, while we acknowledge the difficult policy issues involved, this report focuses on describing the extent to which such children receive assistance. USDA commented that the report provides valuable information and emphasized that illegal aliens receive no benefits for themselves and that their income and resources are considered in determining the eligibility of any citizen children. In addition, USDA was concerned that the misrepresentation and fraud rates identified by the California counties’ studies may inadvertently be misinterpreted. It noted that the studies’ definition of misrepresentation and fraud is much broader than that used in Food Stamp quality control studies, which generally focus on the percentage of benefit dollars overpaid as a result of intentional misrepresentation. To address this concern, we have more clearly emphasized the amount of benefit overpayments identified in the studies. We also recognize that the studies use a much broader definition of misrepresentation and fraud than used in quality control reviews and clarified this in the report. We also provided a copy of the report to SSA, which did not have comments. In addition, we considered and incorporated, where appropriate, technical comments from the State of California and Orange County, California. HUD, Los Angeles County, New York, and Texas did not have technical comments. As required by the Illegal Immigration Reform and Immigrant Responsibility Act of 1996, we are sending copies of this report to the Inspector General of the Department of Justice. We are also sending copies to the Secretaries of USDA, HHS, and HUD and the Commissioners of SSA and INS. We will also make copies available to others upon request. Please contact me at (202) 512-7215 if you have any questions concerning this report or need additional information. Major contributors to this report are listed in appendix IV. 
To estimate the locations, number of households involved, and amount of AFDC and Food Stamp benefits provided to illegal aliens for the use of their U.S. citizen children, we used administrative databases composed of statistically valid samples of households nationwide receiving benefits under each of these programs. The source data were AFDC and Food Stamp households selected for quality control reviews from October 1994 through September 1995—the 1995 federal fiscal year. HHS’ Administration for Children and Families for AFDC and USDA’s Food and Consumer Service for Food Stamps use sample data that are maintained in the National Integrated Quality Control System to estimate state error rates related to eligibility and payment amount and for studies of populations receiving benefits. As part of the quality control reviews done for both the AFDC and Food Stamp programs, the citizenship or immigration status of household members, such as a parent of a U.S. citizen child receiving benefits, is obtained by program officials. To develop our estimates of households in which an illegal alien received benefits on behalf of citizen children under these programs, we selected only sample households identified as having (1) a person acting as the head of household whose citizenship status was listed as illegal alien due to expired visa or illegal entry into the country or (2) a head of household whose spouse had a citizenship status listed as illegal alien due to expired visa or illegal entry into the country. For some individuals, the data did not precisely capture their exact immigration status. For example, citizenship status was listed as “not a U.S. citizen, but exact alien/immigrant status unknown” or “unknown.” As a result, there may be additional households with an illegal alien parent that we were unable to identify and are not included in our estimate. 
Heads of households or their spouses whose citizenship status was listed as being accorded refugee status, granted a stay of deportation by the INS, or permanently residing in the U.S. under color of law were not included in our estimate. For each selected household in which the head of household or spouse was an illegal alien, we obtained from the sample case file information on the dollar amount of benefits received by the recipient household for the sample month, projected the yearly dollar amount of such benefits received by the household, and confirmed that the benefits were received on behalf of U.S. citizen children in the household. We applied sample weights to develop our estimate for the nation or a specific state. For those states that had a large enough number of households headed by illegal aliens in the sample, we were able to develop an estimate for that state. For AFDC, we were able to estimate the number of such households and benefits received in Arizona, California, New York, and Texas. Under the Food Stamp program, these states were Arizona, California, and Texas. Although other states, such as Florida and Illinois, also have large illegal alien populations, not enough households with an illegal alien parent or spouse were identified in these states’ samples to allow us to develop estimates. This also occurred for New York in the case of the Food Stamp program. Because our estimates are based on samples, they are subject to sampling error. Table I.1 shows each of our estimates and indicates the extent of each estimate’s sampling error by showing the 95-percent confidence interval around that estimate. There is a 95-percent chance that the actual total falls within that interval. 
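The weighting-and-projection steps described above can be sketched in a few lines of code. This is an illustrative example only: the benefit amounts, sampling weights, and the simple with-replacement variance formula are assumptions for demonstration, not the actual quality control data or the survey's design-based variance estimator.

```python
import math

# Hypothetical sampled households: (monthly benefit in dollars, sampling weight).
# The weights say how many population households each sampled case represents.
sample = [(320.0, 1500), (410.0, 1500), (275.0, 1200), (390.0, 1200)]

# Weighted estimate of total monthly benefits paid to such households
total = sum(benefit * weight for benefit, weight in sample)

# Project the yearly dollar amount, as the methodology above describes
yearly = total * 12

# Simple 95-percent confidence interval: estimate +/- 1.96 standard errors,
# using a with-replacement variance estimator as a stand-in for the survey's
# actual design-based variance.
n = len(sample)
contribs = [benefit * weight for benefit, weight in sample]
mean_contrib = total / n
variance = sum((c - mean_contrib) ** 2 for c in contribs) / (n - 1)
std_err = math.sqrt(n * variance)
ci = (total - 1.96 * std_err, total + 1.96 * std_err)
```

An interval constructed this way has a 95-percent chance of covering the actual population total, which is the sense in which the tables report sampling error.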
[Table I.1, listing each estimate of households and of benefits received (in thousands) together with its 95-percent sampling error, including an estimate of 2,178 cases (+/-63) and $1,466,601 in benefits (+/-$74,392) among its entries, is not reproducible in this text format.] We discussed and obtained concurrence from personnel of the Administration for Children and Families for AFDC and the contractor for the Food and Consumer Service for Food Stamps regarding our estimating procedures. Because of variations in how SSI cases composed of disabled children with illegal alien payees are identified in SSA’s automated systems, we could not develop an accurate estimate of the number of these cases. However, we statistically sampled available recipient caseload data to estimate a minimum number of disabled child cases in which one or both parents were illegal aliens as of December 1996. Our sample included a sufficient number of cases from California and Texas to allow us to provide estimates for those states. Based on the benefits being provided to the children in our sample, we also estimated the dollar amount of benefits paid to the children in December 1996. Because our figures are based on samples, they are subject to sampling error. Table I.2 shows each of our estimates and indicates the extent of each estimate’s sampling error by showing the 95-percent confidence interval around that estimate. There is a 95-percent chance that the actual total falls within that interval. [Table I.2, listing the estimated number of cases with an illegal alien payee and the associated benefit amounts with their 95-percent sampling errors, is not reproducible in this text format.] Since AFDC and Food Stamp quality control data are reviewed by the Administration for Children and Families, the Food and Consumer Service, and the states, and SSI data are reviewed by SSA, we did not independently examine the computer controls or verify the accuracy of these data. 
Except for this limitation, we conducted our review in accordance with generally accepted government auditing standards between December 1996 and July 1997. In addition to those named above, the following individuals also made important contributions to this report: Carlos J. Evora; Andrea H. Ewertsen; Deborah A. Moberly; and John G. Smale, Jr. Undocumented Aliens: Medicaid-Funded Births in California and Texas (GAO/HEHS-97-124R, May 30, 1997). Illegal Aliens: National Net Cost Estimates Vary Widely (GAO/HEHS-95-133, July 25, 1995). Illegal Aliens: Perspectives on the Issues Associated With Illegal Aliens (GAO/T-OGC-94, June 24, 1994). Illegal Aliens: Assessing Estimates of Financial Burden on California (GAO/HEHS-95-22, Nov. 28, 1994). Benefits for Illegal Aliens: Some Program Costs Increasing, But Total Costs Unknown (GAO/T-HRD-93-33, Sept. 29, 1993).
Pursuant to a legislative mandate, GAO provided information on the extent to which means-tested public benefits are provided to illegal aliens for the use of eligible individuals, focusing on: (1) the extent and the locations that selected federal means-tested benefits are being provided to illegal aliens for use by their U.S. citizen children; and (2) the nature and extent of fraud or misrepresentation detected in connection with these benefits. GAO noted that: (1) in fiscal year (FY) 1995, about $1.1 billion in Aid to Families with Dependent Children (AFDC) and Food Stamp benefits were provided to households with an illegal alien parent for the use of his or her citizen child; (2) this amount accounted for about 3 percent of AFDC and 2 percent of Food Stamp benefit costs; (3) a vast majority of households receiving these benefits resided in a few states--85 percent of the AFDC households were in California, New York, Texas, and Arizona; (4) 81 percent of Food Stamp households were in California, Texas, and Arizona; (5) California households alone accounted for $720 million of the combined AFDC and Food Stamp caseloads; (6) although illegal aliens also received Supplemental Security Income (SSI) and Department of Housing and Urban Development housing assistance for their citizen children, data to develop estimates for these two programs were not available; (7) comprehensive national statistics on any misrepresentation or fraud perpetrated by illegal aliens receiving benefits on behalf of their citizen children are not available; (8) a few California counties' studies of AFDC households indicate that the rates and types of potential misrepresentation or fraud are similar both for households headed by illegal aliens and for the general welfare population; (9) in these studies, one of the most commonly cited types of misrepresentation or fraud was the underreporting of income; (10) income is a key factor in determining program eligibility and benefit amounts and, 
when underreported, can result in overpayment of benefits; and (11) the states visited by GAO had procedures in place to verify income, but officials said that verifying individuals' income from earnings obtained through the underground economy was very difficult--for both illegal aliens and for citizens--in part because these earnings are not documented or reported to state or federal databases used to verify employment or earnings.
The decennial census is conducted against a backdrop of immutable deadlines. The census’s elaborate chain of interrelated pre- and post-Census Day activities is predicated upon those dates. To meet these mandated reporting requirements, census activities must occur at specific times and in the proper sequence. The Secretary of Commerce is legally required to (1) conduct the census on April 1 of the decennial year, (2) report the state population counts to the President for purposes of congressional apportionment by December 31 of the decennial year, and (3) send population tabulations to the states for purposes of redistricting no later than 1 year after the April 1 census date. For the decennial census, the vast majority of housing units will receive paper, mailback census questionnaires delivered by mail or by census field workers before April 1, 2010. This requires a complete and accurate address list. The inventory of housing units is obtained from several sources, including files from the U.S. Postal Service, partnerships established with local entities, and the Bureau’s address canvassing—where temporary field workers verify and identify the addresses of an estimated 130 million housing units over the course of about 6 weeks in 2009. When housing units do not respond to questionnaires by a certain deadline, temporary field workers will follow up and collect census data through personal interviews during the nonresponse follow-up operation, which accounts for the largest single component of the field data collection workload and budget. The Bureau estimates that nonresponse follow-up will include 39 million housing units over the course of 12 weeks in 2010. The Bureau also relies on special procedures to handle areas or living quarters that are not suitable for mailing or delivering census questionnaires, such as very remote areas in Alaska and prisons. 
To gather census data, the Bureau opens temporary offices across the country for approximately 2 years, and all field staff employed in these offices are considered temporary, with jobs lasting as long as the entire 2-year period or as short as a few weeks, depending on the specific operation for which they are employed. For example, one could work on address canvassing, an early operation, and then be rehired to work on the nonresponse follow-up operation later in the decennial. To conduct its decennial activities, the Bureau recruits, hires, and trains temporary field workers based out of local census offices nationwide. During Census 2000, the Bureau hired about half a million temporary workers at peak, which temporarily made it one of the nation’s largest employers, surpassed by only a handful of big organizations, such as Wal-Mart and the U.S. Postal Service. For the 2010 Census, the Bureau expects to hire almost 75,000 temporary field workers—at a cost of over $350 million—during address canvassing in 2009 and almost 525,000 temporary field workers—at a cost of over $2 billion—for nonresponse follow-up in 2010. (See fig. 1.)

High-performance organizations are inclusive, drawing on the strengths of employees at all levels and of all backgrounds. For the decennial census, having a diverse workforce is particularly important. For example, in its strategic plan, the Bureau notes that as the nation becomes more diverse, the Bureau’s staff must reflect the increasing diversity of the American population if they are to do their job well. In a related point, Bureau officials emphasize the need to recruit temporary field workers locally, because such staff are best able to relate to local residents and overcome any reluctance to participate in the census. 
In fact, the census, in many respects, is a local endeavor because the key ingredients of a successful population count, such as a complete and accurate address list and timely and accurate field data collection, are carried out by the locally recruited temporary field staff—working in and around their respective neighborhoods—collecting data through various operations. A high-performance organization relies on a dynamic workforce with the requisite talents, multidisciplinary knowledge, and up-to-date skills to ensure it can accomplish its goals and missions. As we have previously reported, such an organization fosters a work environment in which people are enabled and motivated to contribute to continuous learning and improvement as well as to accomplishing missions and goals. Such organizations promote accountability and fairness. Importantly, they take advantage of a workforce that is inclusive and utilizes the strengths and talents of employees at all levels and of all backgrounds. This work environment is consistent with the principles of “diversity management”—a process intended to create and maintain a positive work environment where individual similarities and differences are valued, so that all can reach their potential and maximize their contributions to the organization. As shown in table 1, in our previous work on diversity management, we identified nine diversity management practices. Perhaps the most important practice for diversity management is top leadership commitment, because leaders and managers must commit the time and necessary resources for the success of an organization’s diversity initiatives. 
Although all of these practices are important, today we discuss two of them as they relate to the Bureau: (1) succession planning—an ongoing, strategic process for identifying and developing a diverse pool of talent for an organization’s potential future leaders—and (2) recruitment for the Bureau’s temporary field work—the process of attracting qualified, diverse applicants for employment, which is important for maintaining high performance. As we have testified earlier, the federal government is facing new and more complex challenges in the 21st century because of long-term fiscal constraints, changing demographics, and other factors. The federal Senior Executive Service (SES), which generally represents the most experienced and senior segment of the federal workforce, is critical to providing the strategic leadership needed to effectively meet these challenges. Governmentwide, SES retirement eligibility is much higher than the workforce in general, and a significant number of SES retirements could result in a loss of leadership continuity, institutional knowledge, and expertise among the SES corps. We have previously reported that the Bureau needs to strategically manage its human capital to meet future requirements. For example, three senior census executives left the Bureau after the 2000 Census; in the years ahead, other key employees will become eligible for retirement. According to the Bureau’s strategic plan, about 45 percent of the Bureau's current permanent employees will be eligible for regular or early retirement by 2010. Thus, human capital is a key planning area for ensuring that the Bureau has the skill mix necessary to meet its future staffing requirements. Racial, ethnic, and gender diversity in the federal government’s senior ranks can be a key organizational component for executing agency missions, ensuring accountability to the American people in the administration and operation of federal programs, and achieving results. 
Based on previous work identifying diversity in the federal SES corps, we compared diversity at the Bureau’s senior levels with that of the Department of Commerce and the executive branch governmentwide. Because the vast majority of SES personnel are drawn from an agency’s pool of GS-14s and GS-15s, we also compared the diversity of the Bureau’s SES developmental pool with that of the Department of Commerce and other executive branch agencies governmentwide. (See table 2.) Overall, we found that the Bureau’s leadership ranks are about as diverse as the leadership ranks for the federal government as a whole, with higher minority representation and lower representation of women. Diversity in the federal government’s senior leadership and developmental pools is important to developing and maintaining a high-quality and inclusive workforce. Succession planning also is tied to the federal government’s opportunity to change the diversity of the SES corps through new appointments.

The success of the 2010 Census depends, in part, upon the Bureau’s ability to recruit, hire, and train a very large temporary workforce that works for a very short period. Over the next several years, the Bureau plans to recruit 3.8 million applicants and hire nearly 600,000 temporary field staff from that applicant pool for two key operations: address canvassing and nonresponse follow-up. For the 2010 Census, the Bureau plans to use a recruiting and hiring approach like the one it used in 2000. For the 2000 Census, the Bureau used an aggressive recruitment strategy in partnership with state, local, and tribal governments, community groups, and other organizations to help recruit employees and obtained exemptions from the majority of state governments so that individuals receiving Temporary Assistance for Needy Families, Medicaid, and selected other types of public assistance would not have their benefits reduced when earning census income, thus making census jobs more attractive. 
Further, the Bureau used a recruitment advertising campaign, totaling over $2.3 million, which variously emphasized the ability to earn good pay, work flexible hours, learn new skills, and do something important for one’s community. Moreover, the advertisements were in a variety of languages to attract different ethnic groups, and were also targeted to different races, senior citizens, retirees, and people seeking part-time employment. The Bureau also advertised using traditional outlets such as newspaper classified sections, as well as more novel media including Internet banners and messages on utility and credit card bills. Through its local census offices, the Bureau plans to recruit, hire, and deploy a diverse workforce that looks like and can relate to the people being counted. Local census offices will open for the 2010 Census in October 2008. The Bureau has developed a Planning Database that local and regional offices use to prepare recruiting plans. The Bureau expects those offices to use the database to identify areas where field staff are more difficult to recruit and other areas where certain skills—such as foreign language abilities—are needed. The Bureau will update the Planning Database for every census tract in the United States for the 2010 Census, using many variables from Census 2000. These variables include: Census 2000 mail return rates; household size; median household income; percentage of persons living in poverty; number of single-person households; highest level of education achieved; percentage of linguistically isolated households (i.e., where no person 14 or over speaks English at least “very well”); and percentage of persons on public assistance. One element of the Bureau’s approach is ensuring that it recruits and hires a sufficient number of field staff. For the 2000 Census, the Bureau recruited 5 times the number of persons that it hired, and hired twice the number of persons that it expected to need. 
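As an illustration of how tract-level variables like those listed above could feed a recruiting plan, the sketch below flags tracts that might need extra recruiting effort or foreign-language skills. The field names, threshold values, and flagging rule are hypothetical assumptions for demonstration; they are not the Bureau's actual Planning Database schema or criteria.

```python
# Hypothetical tract records built from the kinds of Census 2000 variables
# the report lists; the values and field names are illustrative only.
tracts = [
    {"tract": "101.01", "mail_return_rate": 0.52, "pct_poverty": 0.31,
     "pct_linguistically_isolated": 0.18, "pct_public_assistance": 0.12},
    {"tract": "204.03", "mail_return_rate": 0.78, "pct_poverty": 0.08,
     "pct_linguistically_isolated": 0.02, "pct_public_assistance": 0.03},
]

def needs_extra_recruiting(tract):
    """Flag tracts where field staff may be harder to recruit (low mail
    return rate) or where language skills may be needed (high linguistic
    isolation). Both thresholds are assumed, not the Bureau's."""
    return (tract["mail_return_rate"] < 0.60
            or tract["pct_linguistically_isolated"] > 0.10)

flagged = [t["tract"] for t in tracts if needs_extra_recruiting(t)]
```

A local census office could then concentrate its advertising and its search for language-qualified applicants on the flagged tracts.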
We recommended that the Bureau consider a more targeted approach. For example, the Bureau could analyze the factors, such as education and work status, associated with employees more likely to be successful at census work and less likely to leave during an operation. The Bureau questioned the need for taking action, noting that its priority is to reach out as broadly as possible to the diverse communities in the country, because in order to have hundreds of thousands of temporary workers, it must attract several million applicants. We agree that the Bureau’s recruiting approach should be designed to ensure it selects a sufficient number of persons to complete the census; however, we do not believe the Bureau has identified the factors most likely to predict applicants’ success or incorporated such factors into its selection tools and procedures. Our recommendation calls for a fact-based approach to developing selection tools so that the Bureau could target recruitment to applicants who are more likely not only to perform well but also to continue throughout an operation. Recruiting such applicants could help reduce operational costs as well as recruiting and hiring expenditures by decreasing the need to recruit and hire additional workers. Likewise, such an approach can be undertaken while continuing to attract a diverse workforce.

Collaboration can be broadly defined as any joint activity that is intended to produce more public value than could be produced when the organization acts alone. We have previously reported on several best practices that can enhance and sustain collaborative efforts. These include (1) establishing mutually reinforcing or joint strategies and (2) identifying and addressing needs by leveraging resources. 
For example, critical decennial tasks, such as building public awareness of the census, motivating people to respond, and locating pockets of hard-to-count population groups, are accomplished in large part by partnerships between the Bureau and local governments and community groups. To increase visibility, the Bureau also used partnerships with national organizations such as the Mexican American Legal Defense and Educational Fund, the National Association for the Advancement of Colored People, the National Congress of American Indians, and the American Association of Retired Persons. In a recent field hearing, held by this subcommittee in San Antonio, Texas, on July 9, 2007, leaders of several national organizations called on the Bureau to continue its efforts to ameliorate factors such as apathy, fear, and distrust of government through continued partnerships for the 2010 Census. Leaders noted that within historically hard-to-enumerate communities these issues are best addressed by trusted individuals, institutions, and organizations. Consequently, these organizations’ leaders believe that partner and stakeholder networks are critical to community mobilization efforts, to a region’s success, and to the overall success of the census. The Bureau also has met periodically with advisory committees representing minority populations to help ensure a complete and accurate census. To take a more complete and accurate count of the nation’s population in Census 2000, the Bureau partnered with other federal agencies, as well as with state, local, and tribal governments; religious, community, and social service organizations; and private businesses. In previous work we found that to coordinate local partners’ efforts, the Bureau encouraged government entities to form Complete Count Committees, which were to be made up of representatives from various local groups. 
According to the Bureau, about 140,000 organizations participated in its partnership program, assisting in such critical activities as reviewing and updating the Bureau’s address list; encouraging people—especially hard-to-count populations—to participate in the census; and recruiting temporary census workers. The program stemmed from the Bureau’s recognition that a successful head count requires the local knowledge, experience, and expertise that these organizations provide. While we concluded that it is quite likely that key census-taking activities, such as recruiting temporary census workers and encouraging people to complete their questionnaires, would have been less successful had it not been for the Bureau’s aggressive partnership efforts, we also recommended that the Bureau take steps to make the partnership program more accountable and performance-oriented. The Bureau expects the program to play a key role in the 2010 Census. However, the Bureau’s fiscal year 2008 budget request does not include funds for the regional partnership program. In contrast, the Bureau received $5.7 million for the regional partnership program in 1998. One of the means by which the Bureau plans to increase response rates is an advertising and outreach campaign to promote the census. For Census 2000, the Bureau used paid advertising for the first time, employing a variety of media to inform and motivate the public to complete and return the census form and to stress the message that participating in the census benefits one’s community. The Bureau spent about $167 million on the Census 2000 paid advertising campaign, and a substantial portion of the advertising was directed at minority groups. For the 2010 Census, the Bureau is currently considering proposals for a similar effort.
In its Request for Proposals, the Bureau required that the contractor establish goals for subcontracting with firms that are, for example, small disadvantaged businesses, women-owned, veteran-owned, or Historically Underutilized Business Zone companies. The Bureau also included in the solicitation a requirement that the contractor have expertise and experience in marketing to historically undercounted populations, such as African Americans, Asians, Hispanics, American Indian and Alaska Natives, Native Hawaiians, and Pacific Islanders. The Bureau expects to award this communication campaign contract in September 2007. For the 2010 Census, the Bureau will continue a program first implemented for Census 2000 in which it partners with local, state, and tribal governments. The program, the Local Update of Census Addresses (LUCA), allows participants to contribute to a complete enumeration of their jurisdictions by reviewing, commenting on, and providing updated information on the list of addresses and maps that the Bureau will use to deliver questionnaires within those communities. The Bureau has taken steps to improve LUCA for 2010. For example, to reduce participant workload and burden, the Bureau will provide a longer period for reviewing and updating LUCA materials—from 90 to 120 days. However, we recently testified before this subcommittee that the Bureau could do more to mitigate possible difficulties that participants may have with the new LUCA software and training and to help participants convert Bureau-provided address files into their own software format. For the 2010 Census, the Bureau is making the most extensive use of contractors in its history, turning to the private sector for a number of mission-critical functions and technologies.
In awarding and administering its contracts related to the 2010 Census, the Bureau will need to be mindful of its obligations to promote contracting opportunities for various categories of contractors, such as small businesses, women-owned businesses, small disadvantaged businesses, and others. In this regard, the Small Business Act contains an annual governmentwide goal for small business participation of not less than 23 percent of the total value of all prime contract awards. To achieve this governmentwide goal, the Small Business Administration negotiates annual small business contracting goals with each federal executive agency. For the Department of Commerce, the contracting goals are summarized in table 3. In terms of subcontracting, any business that receives a contract directly from a federal executive agency for more than $100,000 must agree to give small businesses the “maximum practicable opportunity to participate in the contract consistent with its efficient performance.” Additionally, for contracts generally anticipated to exceed $550,000 that have subcontracting possibilities, the prime contractor is required to have an established subcontracting plan that promotes and supports small business development. For example, the solicitation for the advertising and outreach campaign requires that the contractor establish and adhere to a subcontracting plan that provides maximum practicable opportunity for small business participation in performing the contract. Contractors that do not meet subcontracting goals may face damages if the agency’s contracting officer determines that a contractor did not make a good-faith effort to comply with the subcontracting plan. Mr.
Chairman, as we have recently testified, the Bureau faces challenges to successfully implementing the 2010 Census, including those of a demographic and socioeconomic nature due to the nation’s increasing diversity in language, ethnicity, households, and housing types, as well as a reluctance in the population to participate in the census. In fact, the Bureau recognizes that hiring a diverse workforce—especially a temporary field workforce—that resembles the people being enumerated is one way of eliciting the cooperation of those being counted. The involvement of such a workforce in the key nonresponse follow-up activity can help to increase productivity and contain enumeration costs. Our review of data pertaining to the racial, ethnic, and gender composition of the Bureau’s upper-level management, as well as the grades of those most likely to rise to that level of management, shows that the Bureau’s leadership ranks are generally as diverse as those of the federal government as a whole. Moreover, the Bureau’s strategy of recruiting temporary field staff locally is an important way of promoting a diverse field workforce that reflects those being enumerated. In addition, the Bureau’s outreach and partnership programs can be an important way of eliciting the participation of communities that are often undercounted or may be reluctant to participate in the decennial census. As in 2000, for 2010 the Bureau intends to use an integrated communications strategy, including advertising, carried out by contractors and subcontractors that have expertise and experience in marketing to historically undercounted populations. It will be important for the Bureau to build on its efforts to ensure an accurate and cost-effective census by maximizing the potential offered by a diverse workforce and by ensuring that its contractors perform as promised. We stand ready to assist this subcommittee in its oversight efforts. This concludes my remarks.
I will be glad to answer any questions that you, Mr. Chairman, Mr. Turner, or other subcommittee Members may have. For further information regarding this statement, please contact Mathew Scirè, Director, Strategic Issues, on (202) 512-6806 or at sciremj@gao.gov. Individuals making key contributions to this statement included Betty Clark, Elizabeth Fan, Carlos Hazera, Belva Martin, Lisa Pearson, Rebecca Shea, Cheri Truett, Kiki Theodoropoulos, and William Woods.

2010 Census: Preparations for the 2010 Census Underway, but Continued Oversight and Risk Management Are Critical. GAO-07-1106T. Washington, D.C.: July 17, 2007.
2010 Census: Census Bureau Is Making Progress on the Local Update of Census Addresses Program, but Improvements Are Needed. GAO-07-1063T. Washington, D.C.: June 26, 2007.
2010 Census: Census Bureau Has Improved the Local Update of Census Addresses Program, but Challenges Remain. GAO-07-736. Washington, D.C.: June 14, 2007.
Human Capital: Diversity in the Federal SES and the Senior Levels of the U.S. Postal Service. GAO-07-838T. Washington, D.C.: May 10, 2007.
2010 Census: Census Bureau Should Refine Recruiting and Hiring Efforts and Enhance Training of Temporary Field Staff. GAO-07-361. Washington, D.C.: April 27, 2007.
2010 Census: Design Shows Progress, but Managing Technology Acquisitions, Temporary Field Staff, and Gulf Region Enumeration Require Attention. GAO-07-779T. Washington, D.C.: April 24, 2007.
2010 Census: Redesigned Approach Holds Promise, but Census Bureau Needs to Annually Develop and Provide a Comprehensive Project Plan to Monitor Costs. GAO-06-1009T. Washington, D.C.: July 27, 2006.
2010 Census: Census Bureau Needs to Take Prompt Actions to Resolve Long-standing and Emerging Address and Mapping Challenges. GAO-06-272. Washington, D.C.: June 15, 2006.
2010 Census: Costs and Risks Must be Closely Monitored and Evaluated with Mitigation Plans in Place. GAO-06-822T. Washington, D.C.: June 6, 2006.
2010 Census: Census Bureau Generally Follows Selected Leading Acquisition Planning Practices, but Continued Management Attention Is Needed to Help Ensure Success. GAO-06-277. Washington, D.C.: May 18, 2006.
Census Bureau: Important Activities for Improving Management of Key 2010 Decennial Acquisitions Remain to be Done. GAO-06-444T. Washington, D.C.: March 1, 2006.
2010 Census: Planning and Testing Activities Are Making Progress. GAO-06-465T. Washington, D.C.: March 1, 2006.
Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.
Information Technology Management: Census Bureau Has Implemented Many Key Practices, but Additional Actions Are Needed. GAO-05-661. Washington, D.C.: June 16, 2005.
Diversity Management: Expert-Identified Leading Practices and Agency Examples. GAO-05-90. Washington, D.C.: January 14, 2005.
2010 Census: Basic Design Has Potential, but Remaining Challenges Need Prompt Resolution. GAO-05-9. Washington, D.C.: January 12, 2005.
Data Quality: Census Bureau Needs to Accelerate Efforts to Develop and Implement Data Quality Review Standards. GAO-05-86. Washington, D.C.: November 17, 2004.
Census 2000: Design Choices Contributed to Inaccuracy of Coverage Evaluation Estimates. GAO-05-71. Washington, D.C.: November 12, 2004.
American Community Survey: Key Unresolved Issues. GAO-05-82. Washington, D.C.: October 8, 2004.
2010 Census: Counting Americans Overseas as Part of the Decennial Census Would Not Be Cost-Effective. GAO-04-898. Washington, D.C.: August 19, 2004.
2010 Census: Overseas Enumeration Test Raises Need for Clear Policy Direction. GAO-04-470. Washington, D.C.: May 21, 2004.
2010 Census: Cost and Design Issues Need to Be Addressed Soon. GAO-04-37. Washington, D.C.: January 15, 2004.
Decennial Census: Lessons Learned for Locating and Counting Migrant and Seasonal Farm Workers. GAO-03-605. Washington, D.C.: July 3, 2003.
Decennial Census: Methods for Collecting and Reporting Hispanic Subgroup Data Need Refinement. GAO-03-228. Washington, D.C.: January 17, 2003.
Decennial Census: Methods for Collecting and Reporting Data on the Homeless and Others without Conventional Housing Need Refinement. GAO-03-227. Washington, D.C.: January 17, 2003.
2000 Census: Lessons Learned for Planning a More Cost-Effective 2010 Census. GAO-03-40. Washington, D.C.: October 31, 2002.
The American Community Survey: Accuracy and Timeliness Issues. GAO-02-956R. Washington, D.C.: September 30, 2002.
2000 Census: Review of Partnership Program Highlights Best Practices for Future Operations. GAO-01-579. Washington, D.C.: August 20, 2001.
2000 Census: Answers to Hearing Questions on the Status of Key Operations. GGD-00-109R. Washington, D.C.: May 31, 2000.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
For the 2010 Census, the U.S. Census Bureau (Bureau) faces the daunting challenge of cost-effectively counting a population that is growing steadily larger, more diverse, increasingly difficult to find, and more reluctant to participate in the decennial census. Managing its human capital, maintaining community partnerships, and developing advertising strategies to increase response rates for the decennial census are several ways that the Bureau can complete the 2010 Census accurately and within budget. This testimony, based primarily on past GAO work, provides information on (1) diversity in the Bureau's workforce; (2) plans for partnering with others in an effort to build public awareness of the census; and (3) certain requirements for ensuring contracting opportunities for small businesses.

Diversity in senior leadership is important for effective government operations. GAO found that the racial, ethnic, and gender makeup of the Bureau's senior management and staff in grades most likely to rise to senior management is generally in line with that of the federal government as a whole. The success of the 2010 Census depends, in part, upon the Bureau's ability to recruit, hire, and train a temporary workforce reaching almost 600,000. In 2000, the Bureau used an aggressive recruitment strategy, including advertising in various languages to attract different ethnic groups and races, as well as senior citizens, retirees, and others seeking part-time employment. The Bureau intends to use a similar recruitment strategy for the 2010 Census. For 2010, the Bureau also intends to involve community and other groups to encourage participation in the census, particularly among certain populations, such as persons with limited English proficiency and minorities. Further, the Bureau plans to hire a contractor to develop an advertising campaign to reach undercounted populations.
In its contract solicitation, the Bureau has included a requirement that the contractor establish goals for subcontracting with, among other groups, women-owned and small disadvantaged businesses, and a requirement that the contractor have experience in marketing to historically undercounted populations such as African Americans, Asians, Hispanics, American Indian and Alaska Natives, Native Hawaiians, and Pacific Islanders. This contract is expected to be awarded in September 2007. For the Bureau to leverage the benefit of its diversity and outreach efforts, it will be important for it to follow through on its intentions to recruit a diverse workforce and to use the experience of a diverse pool of partners, including community groups, state and local governments, and the private sector.
Over the years, FAA has taken a number of steps to better manage its ATC modernization program. In 1995, based on the premise that FAA would be better able to manage ATC modernization if it were not constrained by federal acquisition laws, FAA requested and Congress enacted legislation that exempted the agency from most federal procurement laws and regulations and directed FAA to develop and implement a new acquisition management system that would address the unique needs of the agency. In 1996, FAA implemented its Acquisition Management System, which provides high-level acquisition policy and guidance for selecting and controlling ATC system acquisitions through all phases of the acquisition life cycle. In February 2004, FAA created the performance-based ATO to control and improve FAA’s investments and operations. ATO incorporated FAA’s former Research and Acquisitions and Air Traffic Services organizations—essentially those that develop and acquire systems and those that operate them—into a single organization. ATO catalogues its acquisition programs in its Capital Investment Plan (CIP). The fiscal year 2006 CIP contained 120 funded acquisition programs and their anticipated total budgets. The 120 acquisition programs include 37 that have had acquisition program baselines approved by FAA’s Joint Resources Council (JRC). These baselined programs include communications, navigation, and surveillance systems that are key to air traffic control operations. Acquisition program baselines show, among other things, executive agreement on an acquisition’s estimated budget and schedule. The JRC also approves rebaselining, through which the agency documents and approves changes to a program’s budget and schedule. Of the 37 baselined programs, 29 also have an exhibit 300, a document prepared for the Office of Management and Budget (OMB) that provides investment justification and management plans for major ATC acquisitions. 
OMB requires agencies to submit exhibit 300s for major investments, such as those that require special management attention due to their importance to the agency, that have significant program or policy implications, or that the agency defines as major. Figure 1 provides a breakout of the programs in the fiscal year 2006 CIP. The 83 programs that are not baselined include facilities and infrastructure programs; a variety of mission support programs such as training, information technology services, contractual services, and ancillary systems; and systems that are commercially available and ready for ATO to use without modification. FAA’s annual goals and measures for performance reporting, including the two for performance in managing acquisitions, are described in the agency’s annual strategic plan, known as the FAA Flight Plan. The Flight Plan, which FAA began publishing in 2004, sets forth goals, objectives, strategies, initiatives, and specific performance targets for the agency. The Government Performance and Results Act of 1993 requires each agency to prepare and submit to the President and Congress annual reports on program performance for the previous fiscal year. As an administration within the Department of Transportation, FAA is not required to prepare a separate report but has elected to do so following the statutory framework and guidance for federal agencies. Each year, FAA reports its level of success in meeting its two acquisition performance targets, as well as its other performance targets, in its Performance and Accountability Report. The full suite of FAA’s performance measures for fiscal year 2006 is listed in appendix III. In our past work, we identified nine key attributes of successful performance measures used to evaluate agencies’ performance goals and measures. We determined that eight of these were applicable to our study of ATO’s on-budget and on-schedule acquisition goals and performance measures.
See appendix I for more information on our methods. The eight key attributes to which we compared ATO’s acquisition performance measures are as follows:

1. Linkage. Measure is aligned with division- and agencywide goals and mission and clearly communicated throughout the organization.
2. Measurable target. Measure has a numerical goal.
3. Limited overlap. Measure provides new information beyond that provided by other measures.
4. Governmentwide priorities. Each measure covers a priority such as quality, timeliness, and cost of service.
5. Objectivity. Measure is reasonably free from significant bias or manipulation.
6. Reliability. Measure produces the same result under similar conditions.
7. Core program activities. Measure covers the activities that an entity is expected to perform to support the intent of the program.
8. Clarity. Measure is clearly stated and the name and definition are consistent with the methodology used to calculate it.

ATO developed its performance targets to be consistent with targets set in the Department of Transportation’s strategic plan, OMB guidance, and the Federal Acquisition Streamlining Act of 1994, which call for other federal agencies to establish cost and schedule goals for acquisitions and to achieve at least 90 percent of those goals. In FAA’s latest Flight Plan, covering fiscal years 2008 through 2012, the performance targets for acquisitions are stated as follows:

In fiscal year 2008, 90 percent of major system acquisition investments are within 10 percent of annual budget and maintain through fiscal year 2012.

In fiscal year 2008, 90 percent of major system acquisition investments are on schedule and maintain through fiscal year 2012.

FAA began using the basic structure of its current annual acquisition performance measures in 2003, when FAA sought to achieve 80 percent of designated milestones and maintain 80 percent of critical program costs within 10 percent of the total budget as published in the CIP.
In 2005, ATO split the measure into separate targets for budget and schedule. ATO’s acquisitions targets gradually increased to 90 percent within 10 percent of budget and 90 percent on schedule by fiscal year 2008. Although ATO measures performance of all acquisitions, it reports performance only on major acquisitions in its annual Performance and Accountability Report. At the beginning of each fiscal year, ATO managers identify major acquisitions for performance reporting. According to ATO officials, their selections are based on a number of program characteristics; key among these is an acquisition’s criticality to the NAS. ATO officials told us that judging a program as critical could be based on a number of factors, such as the program having an OMB exhibit 300 or a baseline document. In addition, officials stated that the selected acquisitions were meant to represent a cross section of ATC system acquisitions within ATO. From fiscal years 2003 through 2006, the number of acquisition programs selected for annual performance reporting varied between 29 and 42, or roughly a quarter of ATO’s total acquisitions portfolio each year. ATO measures budget and schedule performance against selected acquisitions’ most recently approved estimates. To measure on-budget performance, ATO compares the total amount budgeted for each selected acquisition (i.e., the estimated budget to complete an acquisition) reported in the January CIP with the corresponding amount reported in the August CIP. ATO considers any program with budget growth of more than 10 percent in this 8-month time frame as not on budget. At the end of the fiscal year, ATO divides the number of selected acquisitions that are considered on-budget by the total number of selected acquisitions and then reports the result as the percentage of major acquisitions within 10 percent of annual budget. 
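The on-budget computation described above lends itself to a short illustration. The sketch below shows how the reported percentage is derived from January and August CIP estimates; the function name and all program names and dollar figures are hypothetical assumptions for illustration, and only the 10 percent threshold comes from ATO's measure as described in the text:

```python
def on_budget_share(budgets, threshold=0.10):
    """Fraction of selected acquisitions considered on budget.

    `budgets` maps each program name to a (january, august) pair of
    estimated total budgets (in millions of dollars) from the two CIPs.
    A program whose estimate grows by more than `threshold` over the
    8-month window is counted as not on budget.
    """
    on_budget = [
        name for name, (jan, aug) in budgets.items()
        if (aug - jan) / jan <= threshold
    ]
    return len(on_budget) / len(budgets)

# Hypothetical figures for illustration only.
sample = {
    "Program A": (100.0, 105.0),  # 5 percent growth: on budget
    "Program B": (200.0, 230.0),  # 15 percent growth: not on budget
    "Program C": (150.0, 150.0),  # no growth: on budget
}
print(f"{on_budget_share(sample):.0%} of selected acquisitions on budget")
```

Note that each program is judged only against its most recently approved estimate for the year, which is why a program whose baseline has been reset can report as on budget year after year.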
To measure schedule performance, at the start of each fiscal year ATO managers select a minimum of two schedule milestones from each major acquisition selected for performance reporting that year. Milestones could be dates to complete activities such as a final installation or establishing a procurement plan. At the end of the fiscal year, ATO divides the number of selected milestones that were met by the total number of selected milestones. ATO reports the result as the percentage of major programs that are on schedule. ATO’s acquisition performance measures meet four of eight key attributes for successful performance measures that we have identified, but the lack of objective criteria for selecting programs for performance measurement reduces objectivity, reliability, and assurance that core programs are included; the clarity of the performance measures could also be improved. Taken together, the lack of successful attributes, combined with the 1-year focus of the performance measures, may not provide a valid measure of ATO’s acquisition performance. We determined that eight key attributes of successful performance measures were applicable to our study of ATO’s two acquisition performance measures. Our analysis determined that ATO met half of these key attributes, as detailed in table 1. Although ATO provides some general guidance for selecting acquisitions for performance reporting, executive judgment was the primary basis for ATO’s selections. ATO described the scope of its performance measure for managing acquisitions in its Portfolio of Goals for fiscal year 2006 as follows: “FAA’s Air Traffic Organization (ATO) Service Units select specific programs that are determined to provide a capital asset to the NAS. For FY06, 31 acquisition programs will be tracked and monitored. Most of the programs selected are considered ‘major’ and must submit an exhibit 300. 
Those that do not provide exhibit 300s are included because they contribute an asset to the NAS with a useful life of more than two years. The designation of ‘critical acquisition programs’ in the title of this performance target expresses the critical value of the program to the NAS.” As this description illustrates, ATO’s guidance in designating programs as major allows for a significant amount of professional judgment and does not clearly define which programs should be included or excluded for performance reporting. Figure 2 illustrates the types of acquisitions that ATO selected for performance reporting in fiscal year 2006. ATO’s guidance lacks objectivity in that it does not indicate specifically what is to be observed and in which population or conditions. Contrary to ATO’s statement in the Portfolio of Goals that most of the programs selected are considered major and must have an OMB exhibit 300 prepared, only 14 of the 29 programs selected that year actually had an OMB exhibit 300. Additionally, about half of the programs with an OMB exhibit 300 were not selected for performance reporting. For example, the System Approach for Safety Oversight program and the Aviation Safety Knowledge Management Environment program each have an OMB exhibit 300, but, according to ATO officials, these programs are considered “non-NAS” and are therefore not selected for performance reporting. Likewise, the Facilities Security Risk Management program has an approved baseline but is not selected. ATO officials told us that facilities and mission support programs such as this are generally not selected for acquisition performance reporting; however, ATO has no written guidance on this policy, and has included mission support programs in its performance reporting in the past. Between 2003 and 2006, the number of major acquisitions selected for performance reporting varied from 29 to 42, and represented about a quarter of all acquisitions each year.
ATO officials told us that the variation in the number of programs selected from year to year occurred because of decisions to report performance on specific acquisitions, or changes in acquisitions’ status from year to year, such as the introduction of new acquisitions, the completion or cancellation of acquisitions, or lapses in funding for a fiscal year. Although ATO reports performance on about 25 percent of the acquisitions portfolio, ATO officials pointed out that the selected acquisitions have represented between 76 and 84 percent of the value of that portfolio. Nevertheless, because ATO has no objective criteria for designating major programs, its performance measure is vulnerable to bias and the possibility that important or troubled programs could be excluded from performance reporting. Objective performance measures should not allow subjective considerations or judgments to dominate the outcome of the measurement. Objectivity is important in selecting acquisitions for performance reporting. OMB guidance allows an agency to define major investment in its capital planning and investment control process. For example, the Department of Defense defines a major acquisition program as one requiring an estimated total expenditure for research, development, testing, and evaluation of more than $365 million, or an estimated total procurement expenditure of more than $2.2 billion, in fiscal year 2000 dollars. However, the department also allows for professional judgment, as the Secretary of Defense can designate as major any program that falls below the dollar thresholds but is determined to be important. In addition to selecting major programs for performance reporting, ATO managers select two or more schedule milestones from each selected program, and use these to measure schedule performance. ATO’s guidance allows managers wide latitude to select milestones for performance measurement and provides no guidance regarding the significance of the milestones that managers should select.
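The contrast with DoD's approach can be made concrete. The sketch below encodes DoD-style objective dollar thresholds for designating a program as major, with an explicit flag for the bounded secretarial judgment described above; the function name and the example cost figures are illustrative assumptions, while the threshold values are the ones quoted in the text:

```python
def is_major_program(rdte_cost, procurement_cost, secretary_designated=False,
                     rdte_threshold=365.0, procurement_threshold=2200.0):
    """Objective, DoD-style test for 'major' designation.

    Costs and thresholds are in millions of fiscal year 2000 dollars:
    more than $365 million for research, development, testing, and
    evaluation (RDT&E), or more than $2.2 billion for procurement.
    `secretary_designated` captures the explicit judgment call allowed
    for programs below the thresholds.
    """
    return (rdte_cost > rdte_threshold
            or procurement_cost > procurement_threshold
            or secretary_designated)

# Hypothetical programs for illustration.
print(is_major_program(rdte_cost=400.0, procurement_cost=500.0))   # over RDT&E threshold
print(is_major_program(rdte_cost=50.0, procurement_cost=100.0))    # under both thresholds
print(is_major_program(rdte_cost=50.0, procurement_cost=100.0,
                       secretary_designated=True))                 # designated despite size
```

The point of such criteria is not the particular thresholds but that the selection rule is written down, so that different managers applying it to the same portfolio reach the same answer.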
Because this provides managers the opportunity to exclude milestones that they do not expect to meet during the coming fiscal year, it further weakens the objectivity of the measure. Four of our five experts commented that ATO’s lack of criteria for selecting milestones or the ability to pick and choose milestones for performance measurement was a shortcoming of ATO’s performance measurement. The lack of objective criteria for designating major programs also impairs the key attribute of reliability and the assurance that the measures include core program activities. Performance measures possess the key attribute of reliability when they produce the same results each time they are applied under the same conditions. We have reported that judgmental evaluations can impair reliability and introduce inconsistencies, which can affect the outcome of performance measurement. With executive judgment serving as the primary determinant of which programs and milestones are selected and measured, different managers could select different programs each year, resulting in different performance results. Likewise, the lack of objective criteria does not ensure that ATO managers include all core program activities in performance measurement each year. We found that ATO eliminated some acquisitions from performance reporting for 1 or more years and then resumed reporting these acquisitions in a subsequent year, although ATO provided reasonable explanations for these occurrences. Nevertheless, reliability and the inclusion of core program activities would be better assured if executive judgment were grounded in written and objective criteria. Four of the five experts who advised us during our review agreed that ATO needs to improve its criteria for determining which programs are major and consequently are included in performance reporting. ATO’s acquisition performance measures also lack a fourth key attribute of successful performance measures: clarity.
A performance measure that is not clearly stated (i.e., contains extraneous or omits key data elements) or that has a name or definition that is inconsistent with the way it is calculated can confuse users and could cause managers or other stakeholders to think that performance was better or worse than it actually was. ATO’s on-budget acquisition performance measure lacks clarity in reporting because ATO does not indicate that the acquisition performance of baselined programs is measured using the most recently approved budget estimates, as reflected in the January CIP. While ATO does present Congress with valuable information about a program’s most recent budget performance by comparing the August budget estimate against the January budget estimate, this provides only one perspective on performance because rebaselining resets the measurement of budget or schedule variances to zero. Of the 31 baselined programs that ATO selected for acquisition performance reporting in 1 or more years from fiscal year 2003 through fiscal year 2006, the agency has rebaselined 18, and has rebaselined some of these more than once. (See app. II.) For example, the Standard Terminal Automation Replacement System (STARS) was originally budgeted at $940.2 million, but has been rebaselined twice and is now budgeted at almost $2.8 billion. However, because ATO measures budget performance for an 8-month timeframe against the most recently approved budget estimate, STARS was considered on budget for fiscal years 2003 through 2006. Other acquisitions had exceeded original budgets by between $9 million and $159 million by March 2007, the date of the most recently rebaselined acquisition. This information is not disclosed in ATO’s performance reporting. (See app. II.) ATO officials emphasized that they measure and report on annual performance and stated they do not measure performance against original baselines because the agency has already determined that the original baselines cannot be met. 
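The STARS figures above illustrate how the choice of baseline changes the picture. The sketch below computes budget variance two ways, using the dollar amounts from the text; the assumption that the current estimate equals the rebaselined budget is ours, for illustration only:

```python
def variance_pct(estimate, baseline):
    """Budget variance as a percentage of a baseline estimate."""
    return (estimate - baseline) / baseline * 100

# STARS figures from the text, in millions of dollars.
original_baseline = 940.2
rebaselined_budget = 2800.0  # "almost $2.8 billion" after two rebaselinings
current_estimate = 2800.0    # assumed equal to the current baseline

# Measured against the most recently approved baseline, the program
# shows no variance and is reported as on budget...
print(f"vs. current baseline:  {variance_pct(current_estimate, rebaselined_budget):+.1f}%")
# ...but measured against the original baseline, it is nearly triple
# its original budget.
print(f"vs. original baseline: {variance_pct(current_estimate, original_baseline):+.1f}%")
```

Because the measure resets with each rebaselining, only the first of these two perspectives appears in ATO's annual performance reporting.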
One of the experts who advised us pointed out that it serves no purpose to continually call a program over budget and behind schedule if it has been successfully managed for a significant period of time. We agree that when original baselines cannot be met, rebaselining can be appropriate. We also agree that measuring annual progress against the current program baseline has some value, but using annual measurement as the sole basis for acquisitions that have been rebaselined does not provide a complete picture of performance over time. Four of our five experts commented that disclosure of rebaselining was important in some form, and suggestions ranged from disclosing rebaselining in a footnote to clearly reporting all rebaselining. The absence of this information on rebaselining in ATO’s performance reporting could cause managers and other stakeholders, including Congress, to think that performance was better than it actually was. ATO reports meeting its schedule performance goal when at least a specified percentage “of major system acquisition investments are on schedule….” The goal is reported with this same wording in FAA’s 2006 Performance and Accountability Report, where the agency noted on-schedule performance of 97.44 percent. However, ATO is actually basing its schedule performance measurement on two or more schedule milestones within a selected program. Thus, the wording of the target and the performance reporting gives the misleading impression that the entire acquisition is on schedule when the reported performance is based only on selected milestones. For example, ATO reported that the $286 million Integrated Terminal Weather System (ITWS) acquisition was on schedule in fiscal year 2006 because it hit its selected milestones for that year. However, the ITWS acquisition (which began in 1997) has encountered funding reductions, requirements growth and unplanned work, and greater-than-expected software complexity.
ITWS is now scheduled for completion in October 2009. ATO’s annual reporting based on milestones simply notes ITWS as on-schedule and does not make clear that the program was originally scheduled for completion 6 years earlier, in July 2003. Because ATO’s performance measures lack several attributes of successful performance measures, and are focused on 1-year snapshots of performance, they may not provide a valid assessment of acquisition performance over time. A valid measure provides an accurate representation of what is being measured. Many of ATO’s acquisitions span several years and, as the next section shows, measuring performance against original baselines provides a different perspective on acquisition performance than that reported by ATO over the past 4 years. When measured against original baselines, ATO shows improvement in its managing of acquisitions, but its performance is lower than indicated in FAA’s annual Performance and Accountability Report. The lack of original baseline information in ATO’s performance reporting could provide Congress and the American people with the impression that the transition to NextGen is progressing more smoothly than might actually be the case. Comparing the current status of ATO’s major ATC system acquisitions (i.e., those that ATO selected each year for performance reporting) with the budgets and schedules in these acquisitions’ original baselines yields lower performance results than those reported to Congress and the American people. According to ATO’s performance reports, the organization showed nearly steady improvement in fiscal years 2003 through 2006 and substantially exceeded its targets for those years, twice hitting 100 percent. (See table 2.) However, when performance is measured against original baselines instead of annual budgets or milestones, acquisition performance was lower than reported, but still showed a general trend of improvement for fiscal years 2003 through 2006. 
In fact, even when measured against original baselines, ATO would have met its goals for budget in fiscal years 2004 through 2006. However, ATO did not perform as well in meeting schedules when measured against original baselines. ATO would have met its schedule goal only in 2005. ATO officials told us they use a number of methods to measure the performance of all of the acquisitions contained in the CIP. They have implemented earned value management on all new major acquisitions as a way to prevent, detect, report, and correct problems in acquiring major systems and to ensure that major programs are within budget and schedule targets. ATO officials also noted that they have monthly meetings on the status of acquisitions and that management is constantly apprised of program performance. In addition, officials stated that significant changes in acquisition status, such as rebaselining, must go through high-level agency and OMB review, and FAA reports to Congress any program that exceeds its baseline costs by more than 50 percent. While ATO has reported acquisitions’ variances against original baselines to Congress on an ad hoc basis in response to questions for the record, the only systematic reporting of ATO’s acquisition performance to the Congress and the American people is FAA’s annual Performance and Accountability Report. ATO’s establishment of annual goals and subsequent annual reporting of performance are appropriate actions aimed at improving ATO’s performance in managing ATC system acquisitions. Annual goals are used throughout government to keep programs on track. We have noted that such goals illustrate a commitment to achieving immediate, concrete, and measurable results in the near term. However, it also is important to provide decision makers and stakeholders with an overall understanding of program performance in a more holistic sense.
We first noted the shortcomings of annual performance goals for acquisitions in 2005, after ATO reported that it met its acquisition performance goals for the first time. We cautioned that, while meeting a 1-year goal was a positive step, annual performance targets should continue to be viewed in the broader context of acquisitions’ original and revised baselines. The 18 rebaselined acquisitions on which ATO reported performance in fiscal years 2003 through 2006 have collectively exceeded their original budget estimates by approximately $4.4 billion, or 66 percent; however, over 95 percent of this increase occurred in the Standard Terminal Automation Replacement System and in WAAS. The 18 rebaselined acquisitions have experienced schedule slippages between 1 and 10 years. Because some of FAA’s current acquisitions form the basic building blocks for NextGen, delays and budget increases in these acquisitions could have significant implications for the transition to NextGen. Figure 3 illustrates the planned transition from current systems to future systems and the anticipated benefits. For example, STARS, discussed previously, is listed as a current program leading to the transition to NextGen in figure 3. Our research disclosed that the near tripling of the acquisition’s budget resulted from insufficient involvement of stakeholders and requirements growth—two systemic factors that we found led to acquisitions missing their budget and schedule targets. However, the budget increases that STARS experienced are not discussed in Performance and Accountability Reports or in any other regular reporting to Congress. Another example is the Airport Surveillance Radar - Model 11 (ASR-11), which is an integrated digital system intended to replace aging analog radars.
NextGen’s plans call for the ASR-11 to provide aircraft and weather surveillance in terminal areas of small and medium-sized airports, which also may serve as a part of the back-up surveillance system in case the primary satellite-based ATC system fails. However, the ASR-11 has encountered a 59-percent increase in budget per deployed system and its completion date has been delayed from 2005 to 2009, in part due to requirements growth. As with STARS, the budget increases and schedule delays experienced in the ASR-11 acquisition are not discussed in ATO’s Performance and Accountability Reports or in any routine report to Congress. The absence of original budget and schedule estimates in ATO’s performance reporting could give the impression to Congress and the American people that ATO’s acquisitions and the transition to NextGen are progressing more smoothly than is actually the case. Including original budget and schedule baselines in ATO’s performance reporting could improve the reports’ usefulness by helping Congress and other stakeholders identify trends and take corrective action to ensure that the capacity, efficiency, and safety benefits of NextGen are achieved in a cost-effective and timely manner. Although ATO’s acquisition performance measures meet some of the key attributes of successful performance measures, the attributes that the measures lack are significant and, considered together, raise serious questions about the measures’ validity. While a 1-year focus may be appropriate for some performance measures, it may not provide a valid assessment of performance over time for major ATC acquisitions that span a number of years. Moreover, ATO’s use of subjective criteria to pick a subset of acquisitions and milestones for performance measurement and its lack of disclosure regarding rebaselining may not provide Congress, aviation stakeholders, and the public with a complete picture of ATO’s ability to deliver major ATC acquisitions on budget and on time.
Such reporting could also make budget increases and schedule delays more difficult to identify. These issues are critical as ATO begins acquiring new systems with a goal of completing the transition to NextGen by 2025. Recognizing impending budget increases and schedule delays and taking corrective action will be necessary to keep the overall NextGen effort on track. The more quickly ATO can transition to NextGen, the more quickly the nation will realize the increased efficiencies and safety benefits of new systems and technologies, and avoid the costs and inefficiencies of maintaining existing systems. By presenting the most accurate and complete assessment possible when reporting its performance in acquiring ATC systems, FAA will better facilitate congressional understanding and oversight of FAA’s progress in implementing NextGen. Because of the importance of ensuring that key administration and congressional decision makers and stakeholders have complete information on the budget and schedule performance of FAA’s critical ATC acquisition programs—both for the most recent fiscal year and since their inception—we are recommending that the Secretary of Transportation direct the FAA Administrator to take the following four actions:

1. Improve the objectivity, reliability, and inclusion of core programs in ATO’s acquisition performance measures by establishing written, objective criteria and guidance for managers to use in determining which programs are major—and thus selected for performance reporting—and in selecting schedule milestones.

2. Improve the clarity of ATO’s annual acquisition performance measurement process by disclosing in its Performance and Accountability Reports that the measurement for on-budget performance covers 8 months and is measured against the most recently approved budget baselines.
Similarly, improve the wording of the target and reporting for on-schedule acquisitions to disclose that this measures 1 year of performance against selected program milestones.

3. Identify or establish a vehicle for regularly reporting to Congress and the public on ATO’s overall, long-term performance in acquiring ATC systems by providing original budget and schedule baselines for each rebaselined program and the reasons for the rebaselining. If this information is not added to FAA’s annual Performance and Accountability Report, then the Performance and Accountability Report should reference where this information can be found.

4. Improve the usefulness of ATO’s acquisition performance reporting by including information (in the Performance and Accountability Report or elsewhere) on the potential effects that any budget or schedule slippages could have on the overall transition to NextGen. This also could include information concerning any mitigation plans ATO has developed to lessen the effects of program slippages on the implementation of NextGen systems.

We provided a draft of this report to the Department of Transportation for comment. Senior officials from ATO’s Office of Finance provided oral comments. ATO officials generally concurred with our recommendations and noted that they already are considering some changes to their performance measurement and reporting process for system acquisitions. In view of the standards discussed in the report, ATO officials agreed to review current selection criteria of programs included for annual reporting to address concerns over objectivity. ATO agreed to clarify wording in the Flight Plan and future Performance and Accountability Reports to ensure that readers understand that the report reflects agency performance for the prior fiscal year only. In our draft report, we recommended that ATO endeavor to report on the overall, long-term status of its acquisitions in its annual Performance and Accountability Report.
ATO officials felt strongly that the Performance and Accountability Report is meant to reflect performance for a single fiscal year and would not be the proper vehicle for reporting on long-term performance. In response, we modified our recommendation to be less prescriptive about where this information appears, as long as it is publicly reported. ATO officials said they would consider other reporting methods to provide Congress with longer-term status information about the organization’s performance in acquiring ATC systems. ATO officials also provided technical comments that were incorporated throughout this report, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, and the FAA Administrator. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2834 or dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. We examined (1) how the Air Traffic Organization (ATO) establishes goals and performance measures for acquiring air traffic control (ATC) systems and how they are reported; (2) how ATO’s acquisition performance measures compare with key attributes of successful performance measures; and (3) the implications of using ATO’s existing performance measures to assess progress in the transition to the Next Generation Air Transportation System (NextGen). 
To determine how ATO established acquisition goals and performance measures for acquiring ATC systems, we reviewed the Federal Aviation Administration’s (FAA) Flight Plans and Acquisition Management System policy and obtained historical budget and schedule data from ATO’s finance office on the acquisitions for which ATO reported performance from fiscal years 2003 through 2006. To ensure that these data were consistent with documents obtained in previous GAO work, we noted potential discrepancies and obtained clarifying documents from ATO’s finance office. We also discussed ATO’s acquisition goals and performance measurement process with ATO officials. To determine how performance is reported, we reviewed agency performance and accountability reports and discussed ATO’s criteria for selecting acquisitions for performance reporting with ATO officials. To determine how ATO’s acquisition performance measures compare with key attributes of successful performance measures, we used eight key attributes of successful performance measures that were previously identified by GAO as criteria for comparison against ATO’s acquisition performance measures. The eight key attributes are:

1. Linkage. Measure is aligned with division- and agencywide goals and mission and clearly communicated throughout the organization.

2. Measurable target. Measure has a numerical goal.

3. Limited overlap. Measure provides new information beyond that provided by other measures.

4. Governmentwide priorities. Each measure covers a priority such as quality, timeliness, and cost of service.

5. Objectivity. Measure is reasonably free from significant bias or manipulation.

6. Reliability. Measure produces the same result under similar conditions.

7. Core program activities. Measure covers the activities that an entity is expected to perform to support the intent of the program.

8. Clarity. Measure is clearly stated and the name and definition are consistent with the methodology used to calculate it.
There was a ninth key attribute identified by GAO that we determined was not applicable to our study of ATO’s acquisition performance measures. This ninth attribute is balance, which exists when a suite of measures ensures that an organization’s various priorities are covered. Although ATO has other performance measures that it applies to its acquisitions, in this report we focused only on the two that FAA uses to report its performance—the percentages of acquisitions on budget and acquisitions on schedule. Because we did not examine ATO’s full suite of performance measures for acquisitions, we did not consider the key attribute of balance. We compared attributes of ATO’s acquisition performance measures against each of the eight key attributes to determine whether and how ATO’s process met each attribute. We reviewed past GAO reports on FAA’s management of the ATC modernization program, FAA’s management of major acquisition programs, and the Department of Defense’s acquisition management and reporting. We identified acquisition programs whose targets and milestones were revised or rebaselined to determine validity and consistency in program performance reporting. We also interviewed ATO officials. Additionally, we obtained the perspectives of five aviation experts on the reasonableness of ATO’s acquisition performance measures. To ensure that we collectively received a balanced and unbiased perspective, we selected experts with varying government and industry experience. We asked each expert to address the same set of questions relating to the reasonableness of ATO’s acquisition performance measurement process. 
To determine the implications of using ATO’s existing performance measures to assess progress in the transition to NextGen, we analyzed the trends for budget and schedule outcomes between the original baselines and current budget and schedule baselines for the acquisitions that ATO selected for performance reporting and monitoring between fiscal years 2003 and 2006. We also drew upon past work in which we undertook detailed reviews of the status of ATC acquisition programs, and obtained updated information as necessary from FAA by reviewing documents and interviewing agency officials. Through discussions with ATO officials, we determined that these data were sufficiently reliable for the purposes of our report. We did not conduct an individual or in-depth review of the effectiveness of the specific programs selected for performance reporting. We also did not identify a comprehensive list of programs that were excluded from acquisition performance reporting. This was beyond the scope and intent of this study. We conducted our work from January 2007 through December 2007 in accordance with generally accepted government auditing standards. 
Appendix II: Baseline History for Programs Selected for Acquisition Performance Measurement

The full names of the acquisition programs listed above are as follows:

STARS: Standard Terminal Automation Replacement System
NEXCOM: Next Generation Air-to-Ground Communication System
OASIS: Operational and Supportability Implementation System
ITWS: Integrated Terminal Weather System
WAAS: Wide Area Augmentation System
FTI: FAA Telecommunications Infrastructure
ASWON: Aviation Surface Weather Observation Network
NIMS II: National Airspace System Infrastructure Management System-Phase 2
WARP: Weather and Radar Processor
RCE: Radio Control Equipment
ATCBI: Air Traffic Control Beacon Interrogator Replacement
ASR-11: Airport Surveillance Radar - Model 11
LAAS: Local Area Augmentation System
HOCSR: HOST/Oceanic Computer System Replacement
AMASS: Airport Movement Area Safety System
LLWAS: Low Level Wind-shear Alert System
ASDE-X: Airport Surface Detection Equipment – Model X
UHF Replace: Ultra High Frequency Replacement
CPDLC: Controller-Pilot Data Link Communications
BUEC: Back-Up Emergency Communications
ATOP: Advanced Technologies and Oceanic Procedures
PRM: Precision Runway Monitor
ECG: En Route Communication Gateway
URET: User Request Evaluation Tool
TMA: Traffic Management Advisor
ERAM: En Route Automation Modernization
En Route System Mod: En Route Control Center System Modernization
TFM-I: Traffic Flow Management-Infrastructure
VRRP Next Generation: Voice Recorder Replacement Program Next Generation
WSP Tech Refresh: Weather Systems Processor Tech Refresh
VSCS Tech Refresh Phase 2: Voice Switching and Control System Tech Refresh Phase 2

0.9 million for the ASDE-X baseline approved in June 2002, which added ASDE-X capabilities to seven ASDE- sites. The ASDE-X and ASDE-X acquisitions were combined in the September 2005 rebaselining.

1. Commercial air carrier fatal accident rate
2. General aviation fatal accidents
3. General aviation Alaska accidents
4. Runway incursions (rate)
5. Commercial space launch accidents
6. Operational errors (rate)
7. Safety risk management (number of changes)
8. Average daily airport capacity (35 Operational Evolution Plan (OEP) airports)
9. Average daily airport capacity (eight metropolitan areas)
10. Annual service volume
11. Adjusted operational availability (35 OEP airports)
12. National airspace system on-time arrivals
13. Noise exposure
14. Aviation fuel efficiency
15. Aviation safety leadership
16. Bilateral safety agreements
17. External funding
18. Global positioning system-based technologies
19. Employee attitude survey (cumulative percent increase)
20. Cost control (number of activities per organization)
21. Critical acquisitions on budget
22. Critical acquisitions on schedule
23. Information security
24. Customer satisfaction (American Customer Satisfaction Index)
25. Cost-reimbursable contracts
26. Mission-critical positions
27. Reducing workplace injuries
28. Clean audit with no material weaknesses
29. Grievance processing time
30. Air traffic controller hiring plan (within 5 percent of plan)

In addition to the contact named above, key contributors to this report were Faye Morrison (Assistant Director), David Best, Elizabeth Curda, Elizabeth Eisenstadt, David Hooper, Edmond Menoche, Sara Ann Moessbauer, Colleen Phillips, and Taylor Reeves.
Acquiring new systems on budget and on schedule is critically important in transitioning to the Next Generation Air Transportation System (NextGen). However, air traffic control modernization has been on GAO's high-risk list since 1995, in part due to acquisitions exceeding budget and schedule targets. The Federal Aviation Administration's (FAA) Air Traffic Organization (ATO) has responsibility for managing air traffic control acquisitions. GAO was asked to examine (1) ATO's goals, performance measures, and reporting for systems acquisitions; (2) the validity of ATO's performance measures; and (3) the implications of using ATO's performance measures to assess progress in transitioning to NextGen. To address these issues, GAO compared ATO's measures with attributes of successful performance measures, interviewed agency officials, and sought perspectives of aviation experts. To be consistent with federal guidance and with targets set in the Department of Transportation's strategic plan, ATO established annual acquisition goals and performance measures that call for a high percentage of its major acquisitions to be within 10 percent of budget and on schedule. ATO identifies major acquisitions and reports performance against its goals using its most recently approved budget and schedule estimates. To measure on-budget performance, ATO calculates budget increases over an 8-month period--between January and August of each year. To measure on-schedule performance, ATO selects a minimum of two annual milestones from its major acquisitions and calculates the percentage of milestones that are on schedule. Because ATO's acquisition performance measures lack objectivity, reliability, coverage of core activities, and clarity, and focus only on the preceding year, they may not provide a valid assessment of performance over time. On the positive side, the measures are aligned with FAA's strategic objectives, are measurable, have no overlap, and address governmentwide priorities. 
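The two measures described above reduce to simple percentage calculations. The following is a schematic sketch under stated assumptions: the 10-percent budget threshold comes from the report, but the function names, sample figures, and milestone data are invented for illustration and do not reflect ATO's actual data or systems.

```python
# Schematic sketch of ATO's two acquisition measures as described in the
# report. All program figures below are hypothetical.

def on_budget(january_estimate, august_estimate, threshold_pct=10.0):
    """On budget if growth over the 8-month January-to-August window
    stays within the threshold (10 percent per the report)."""
    growth_pct = (august_estimate - january_estimate) / january_estimate * 100
    return growth_pct <= threshold_pct

def pct_milestones_on_schedule(milestones):
    """Share of the selected annual milestones (True = met on time)
    that were hit; ATO selects a minimum of two per major acquisition."""
    return sum(milestones) / len(milestones) * 100

# Hypothetical program: 8 percent budget growth, one of two milestones met.
print(on_budget(january_estimate=100.0, august_estimate=108.0))  # True
print(pct_milestones_on_schedule([True, False]))                 # 50.0
```

Note that both functions take the most recently approved estimates and the currently selected milestones as inputs, so a rebaselined or reselected program enters the calculation with a clean slate, which is the measurement behavior the report critiques.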
However, the performance measures lack objectivity because ATO has no objective criteria for designating which programs are "major" and should be selected for performance reporting. This makes it possible for subjective considerations to dominate the outcome and leaves the performance measures vulnerable to bias in the selection of programs for reporting. The lack of objective criteria for designating major programs also impairs the reliability of the measures (the ability of the measures to produce the same results each time they are applied under similar conditions) and undermines assurance that ATO managers include all core program activities in performance reporting each year. The performance measures also lack clarity in that they do not indicate that ATO measures the performance of many acquisitions against the most recently approved budget and schedule estimates rather than the original estimates. ATO's acquisition performance measurement and reporting could mask budget increases and schedule delays that could have a negative effect on the transition to NextGen. Although ATO reported performance that exceeded its goals for fiscal years 2004 through 2006 and showed nearly steady improvement, when measured against original baselines, acquisition performance improved but was lower than reported. Going forward, the absence of original budget and schedule information on ATO's acquisitions could give the impression that the transition to NextGen is progressing more smoothly than might actually be the case. It will be important for ATO and Congress to recognize budget increases and schedule delays so that the capacity, efficiency, and safety benefits of NextGen can be realized in a cost-efficient and timely fashion.
As computer technology has advanced, both government and private entities have become increasingly dependent on computerized information systems to carry out operations and to process, maintain, and report essential information. Public and private organizations rely on computer systems to transmit proprietary and other sensitive information, develop and maintain intellectual capital, conduct operations, process business transactions, transfer funds, and deliver services. In addition, the Internet has grown increasingly important to American business and consumers, serving as a medium for hundreds of billions of dollars of commerce each year, and has developed into an extended information and communications infrastructure that supports vital services such as power distribution, health care, law enforcement, and national defense. Ineffective protection of these information systems and networks can result in a failure to deliver these vital services, as well as loss or theft of computer resources, assets, and funds; inappropriate access to and disclosure, modification, or destruction of sensitive information, such as national security information, PII, and proprietary business information; disruption of essential operations supporting critical infrastructure, national defense, or emergency services; undermining of agency missions due to embarrassing incidents that erode the public’s confidence in government; use of computer resources for unauthorized purposes or to launch attacks on other systems; damage to networks and equipment; and high costs for remediation. Risks to cyber-based assets can originate from unintentional or intentional threats. Unintentional threats can be caused by, among other things, natural disasters, defective computer or network equipment, and careless or poorly trained employees.
Intentional threats include both targeted and untargeted attacks from a variety of sources, including criminal groups, hackers, disgruntled employees, foreign nations engaged in espionage and information warfare, and terrorists. These adversaries vary in terms of their capabilities, willingness to act, and motives, which can include seeking monetary gain or a political, economic, or military advantage. For example, adversaries possessing sophisticated levels of expertise and significant resources to pursue their objectives—sometimes referred to as “advanced persistent threats”—pose increasing risks. They make use of various techniques—or exploits—that may adversely affect federal information, computers, software, networks, and operations. Since fiscal year 2006, the number of information security incidents affecting systems supporting the federal government has steadily increased each year: rising from 5,503 in fiscal year 2006 to 67,168 in fiscal year 2014, an increase of 1,121 percent (see fig. 1). Furthermore, the number of reported security incidents involving PII at federal agencies has more than doubled in recent years—from 10,481 incidents in fiscal year 2009 to 27,624 incidents in fiscal year 2014. These incidents and others like them can adversely affect national security; damage public health and safety; and lead to inappropriate access to and disclosure, modification, or destruction of sensitive information. Recent examples highlight the impact of such incidents: In June 2015, OPM reported that an intrusion into its systems affected personnel records of about 4 million current and former federal employees. The Director of OPM also stated that a separate incident may have compromised OPM systems related to background investigations, but its scope and impact have not yet been determined.
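The growth figures cited above follow from a straightforward percent-change calculation; a quick check using the incident counts in the report:

```python
# Percent change in reported federal security incidents
# (incident counts are taken from the report).

def pct_increase(old, new):
    """Percent increase from an old count to a new count."""
    return (new - old) / old * 100

# All incidents, FY 2006 -> FY 2014:
print(round(pct_increase(5503, 67168)))   # 1121

# Incidents involving PII, FY 2009 -> FY 2014:
print(round(pct_increase(10481, 27624)))  # 164, i.e., "more than doubled"
```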
In June 2015, the Commissioner of the Internal Revenue Service (IRS) testified that unauthorized third parties had gained access to taxpayer information from its “Get Transcript” application. According to IRS, criminals used taxpayer-specific data acquired from non-IRS sources to gain unauthorized access to information on approximately 100,000 tax accounts. These data included Social Security information, dates of birth, and street addresses. In April 2015, the Department of Veterans Affairs (VA) Office of Inspector General reported that two VA contractors had improperly accessed the VA network from foreign countries using personally owned equipment. In February 2015, the Director of National Intelligence stated that unauthorized computer intrusions were detected in 2014 on OPM’s networks and those of two of its contractors. The two contractors were involved in processing sensitive PII related to national security clearances for federal employees. In September 2014, a cyber-intrusion into the United States Postal Service’s information systems may have compromised PII for more than 800,000 of its employees. Given the risks posed by cyber threats and the increasing number of incidents, it is crucial that federal agencies take appropriate steps to secure their systems and information. We and agency inspectors general have identified challenges in protecting federal information and systems, including those in the following key areas: Designing and implementing risk-based cybersecurity programs at federal agencies. Agencies continue to have shortcomings in assessing risks, developing and implementing security controls, and monitoring results. 
Specifically, for fiscal year 2014, 19 of the 24 federal agencies covered by the Chief Financial Officers (CFO) Act reported that information security control deficiencies were either a material weakness or a significant deficiency in internal controls over their financial reporting. Moreover, inspectors general at 23 of the 24 agencies cited information security as a major management challenge for their agency. As we testified in April 2015, for fiscal year 2014, most of the agencies had weaknesses in the five key security control categories. These control categories are (1) limiting, preventing, and detecting inappropriate access to computer resources; (2) managing the configuration of software and hardware; (3) segregating duties to ensure that a single individual does not have control over all key aspects of a computer-related operation; (4) planning for continuity of operations in the event of a disaster or disruption; and (5) implementing agency-wide security management programs that are critical to identifying control deficiencies, resolving problems, and managing risks on an ongoing basis. (See fig. 2.)

Examples of these weaknesses include: (1) granting users access permissions that exceed the level required to perform their legitimate job-related functions; (2) not ensuring that only authorized users can access an agency's systems; (3) not using encryption to protect sensitive data from being intercepted and compromised; (4) not updating software with the current versions and latest security patches to protect against known vulnerabilities; and (5) not ensuring employees were trained commensurate with their responsibilities. GAO and agency inspectors general have made hundreds of recommendations to agencies aimed at improving their implementation of these information security controls.

Enhancing oversight of contractors providing IT services.
In August 2014, we reported that five of six agencies we reviewed were inconsistent in overseeing assessments of contractors’ implementation of security controls. This was partly because agencies had not documented IT security procedures for effectively overseeing contractor performance. In addition, according to OMB, 16 of 24 agency inspectors general determined that their agency’s program for managing contractor systems lacked at least one required element. We recommended that OMB, in conjunction with DHS, develop and clarify guidance to agencies for annually reporting the number of contractor-operated systems and that the reviewed agencies establish and implement IT security oversight procedures for such systems. OMB did not comment on our report, but the agencies generally concurred with our recommendations. Improving security incident response activities. In April 2014, we reported that the 24 agencies did not consistently demonstrate that they had effectively responded to cyber incidents. Specifically, we estimated that agencies had not completely documented actions taken in response to detected incidents reported in fiscal year 2012 in about 65 percent of cases. In addition, the 6 agencies we reviewed had not fully developed comprehensive policies, plans, and procedures to guide their incident response activities. We recommended that OMB address agency incident response practices government-wide and that the 6 agencies improve the effectiveness of their cyber incident response programs. The agencies generally agreed with these recommendations. We also made two recommendations to DHS concerning government-wide incident response practices. DHS concurred with the recommendations and, to date, has implemented one of them. Responding to breaches of PII. In December 2013, we reported that eight federal agencies had inconsistently implemented policies and procedures for responding to data breaches involving PII. 
In addition, OMB requirements for reporting PII-related data breaches were not always feasible or necessary. Thus, we concluded that agencies may not be consistently taking actions to limit the risk to individuals from PII-related data breaches and may be expending resources to meet OMB reporting requirements that provide little value. We recommended that OMB revise its guidance to agencies on responding to a PII-related data breach and that the reviewed agencies take specific actions to improve their response to PII-related data breaches. OMB neither agreed nor disagreed with our recommendation; four of the reviewed agencies agreed, two partially agreed, and two neither agreed nor disagreed. Implementing security programs at small agencies. In June 2014, we reported that six small agencies (i.e., agencies with 6,000 or fewer employees) had not implemented or not fully implemented their information security programs. For example, key elements of their plans, policies, and procedures were outdated, incomplete, or did not exist, and two of the agencies had not developed an information security program with the required elements. We recommended that OMB include a list of agencies that did not report on the implementation of their information security programs in its annual report to Congress on compliance with the requirements of FISMA, and include information on small agencies’ programs. OMB generally concurred with our recommendations. We also recommended that DHS develop guidance and services targeted at small agencies. DHS has implemented this recommendation. Until federal agencies take actions to address these challenges— including implementing the hundreds of recommendations we and inspectors general have made—federal systems and information will be at an increased risk of compromise from cyber-based attacks and other threats. 
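The incident-growth figures cited at the start of this statement (5,503 incidents in fiscal year 2006 rising to 67,168 in fiscal year 2014, and PII incidents rising from 10,481 to 27,624) lend themselves to a quick arithmetic check. This is a minimal sketch; the counts are those reported above, and the helper function name is illustrative:

```python
# Quick check of the incident-growth arithmetic cited in this statement.
# The incident counts come from the figures reported above; the helper
# name is illustrative, not part of any official methodology.

def percent_increase(old: int, new: int) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# Total incidents: fiscal year 2006 vs. fiscal year 2014.
total_growth = percent_increase(5_503, 67_168)
print(round(total_growth))  # rounds to 1121, matching the cited 1,121 percent

# PII-related incidents: fiscal year 2009 vs. fiscal year 2014.
pii_ratio = 27_624 / 10_481
print(round(pii_ratio, 2))  # about 2.64, consistent with "more than doubled"
```

Rounding the computed growth reproduces the 1,121 percent figure, and the PII ratio of roughly 2.6 confirms the "more than doubled" characterization.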
In addition to the efforts of individual agencies, DHS and OMB have several initiatives under way to enhance cybersecurity across the federal government. While these initiatives all have potential benefits, they also have limitations.

Personal Identity Verification: In August 2004, Homeland Security Presidential Directive 12 ordered the establishment of a mandatory, government-wide standard for secure and reliable forms of identification for federal government employees and contractor personnel who access government-controlled facilities and information systems. Subsequently, the National Institute of Standards and Technology (NIST) defined requirements for such personal identity verification (PIV) credentials based on "smart cards"—plastic cards with integrated circuit chips to store and process data—and OMB directed federal agencies to issue and use PIV credentials to control access to federal facilities and systems. In September 2011, we reported that OMB and the eight agencies in our review had made mixed progress in using PIV credentials for controlling access to federal facilities and information systems. We attributed this mixed progress to a number of obstacles, including logistical problems in issuing PIV credentials to all agency personnel and agencies not making this effort a priority. We made several recommendations to the eight agencies and to OMB to more fully implement PIV card capabilities. Although two agencies did not comment, seven agencies agreed with our recommendations or discussed actions they were taking to address them. For example, we made four recommendations to DHS, which concurred and has taken action to implement them. In February 2015, OMB reported that, as of the end of fiscal year 2014, only 41 percent of agency user accounts at the 23 civilian CFO Act agencies required PIV cards for accessing agency systems.
Continuous Diagnostics and Mitigation (CDM): According to DHS, this program is intended to provide federal departments and agencies with capabilities and tools that identify cybersecurity risks on an ongoing basis, prioritize these risks based on potential impacts, and enable cybersecurity personnel to mitigate the most significant problems first. These tools include sensors that perform automated searches for known cyber vulnerabilities, the results of which feed into a dashboard that alerts network managers. These alerts can be prioritized, enabling agencies to allocate resources based on risk. DHS, in partnership with the General Services Administration, has established a government-wide contract that is intended to allow federal agencies (as well as state, local, and tribal governmental agencies) to acquire CDM tools at discounted rates. In July 2011, we reported on the Department of State's (State) implementation of its continuous monitoring program, referred to as iPost. We determined that State's implementation of iPost had improved visibility over information security at the department and helped IT administrators identify, monitor, and mitigate information security weaknesses. However, we also noted limitations and challenges with State's approach, including ensuring that its risk-scoring program identified relevant risks and that iPost data were timely, complete, and accurate. We made several recommendations to improve the implementation of the iPost program, and State partially agreed.

National Cybersecurity Protection System (NCPS): The National Cybersecurity Protection System, operationally known as "EINSTEIN," is a suite of capabilities intended to detect and prevent malicious network traffic from entering and exiting federal civilian government networks. The EINSTEIN capabilities of NCPS are described in table 1.
In March 2010, we reported that while agencies that participated in EINSTEIN 1 improved their identification of incidents and mitigation of attacks, DHS lacked performance measures to understand if the initiative was meeting its objectives. We made four recommendations regarding the management of the EINSTEIN program, and DHS has since taken action to address them. Currently, we are reviewing NCPS, as mandated by Congress. The objectives of our review are to determine the extent to which (1) NCPS meets stated objectives, (2) DHS has designed requirements for future stages of the system, and (3) federal agencies have adopted the system. Our final report is expected to be released later this year, and our preliminary observations include the following:

DHS appears to have developed and deployed aspects of the intrusion detection and intrusion prevention capabilities, but potential weaknesses may limit their ability to detect and prevent computer intrusions. For example, NCPS relies on signature-based detection, only one of the three detection methodologies identified by NIST (signature-based, anomaly-based, and stateful protocol analysis). Further, the system has the ability to prevent intrusions, but is currently only able to proactively mitigate threats across a limited subset of network traffic (i.e., Domain Name System traffic and e-mail).

DHS has identified a set of NCPS capabilities that are planned to be implemented in fiscal year 2016, but it does not appear to have developed formalized requirements for capabilities planned through fiscal year 2018.

The NCPS intrusion detection capability appears to have been implemented at 23 CFO Act agencies. The intrusion prevention capability appears to have limited deployment, at portions of only 5 of these agencies. Deployment may have been hampered by various implementation and policy challenges.

In conclusion, the danger posed by the wide array of cyber threats facing the nation is heightened by weaknesses in the federal government's approach to protecting its systems and information. While recent government-wide initiatives hold promise for bolstering the federal cybersecurity posture, it is important to note that no single technology or set of practices is sufficient to protect against all these threats. A "defense in depth" strategy is required that includes well-trained personnel, effective and consistently applied processes, and appropriately implemented technologies. While agencies have elements of such a strategy in place, more needs to be done to fully implement it and to address existing weaknesses. In particular, implementing GAO and inspector general recommendations will strengthen agencies' ability to protect their systems and information, reducing the risk of a potentially devastating cyber attack.

Chairman Ratcliffe, Ranking Member Richmond, and Members of the Subcommittee, this concludes my statement. I would be happy to answer any questions you may have. If you have any questions about this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Other staff members who contributed to this statement include Larry Crosland and Michael Gilmore (assistant directors), Bradley Becker, Christopher Businsky, Nancy Glover, Rosanna Guerrero, Kush Malhotra, and Lee McCracken.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Effective cybersecurity for federal information systems is essential to preventing the loss of resources, the compromise of sensitive information, and the disruption of government operations. Federal information and systems face an evolving array of cyber-based threats, and recent data breaches at federal agencies highlight the impact that can result from ineffective security controls. Since 1997, GAO has designated federal information security as a government-wide high-risk area, and in 2003 expanded this area to include computerized systems supporting the nation's critical infrastructure. This year, in GAO's high-risk update, the area was further expanded to include protecting the privacy of personal information that is collected, maintained, and shared by both federal and nonfederal entities. This statement summarizes (1) challenges facing federal agencies in securing their systems and information and (2) government-wide initiatives, including those led by DHS, aimed at improving cybersecurity. In preparing this statement, GAO relied on its previously published and ongoing work in this area.

GAO has identified a number of challenges federal agencies face in addressing threats to their cybersecurity, including designing and implementing a risk-based cybersecurity program; enhancing oversight of contractors providing IT services; improving security incident response activities; responding to breaches of personal information; and implementing cybersecurity programs at small agencies. Until federal agencies take actions to address these challenges—including implementing the hundreds of recommendations GAO and agency inspectors general have made—federal systems and information, including sensitive personal information, will be at an increased risk of compromise from cyber-based attacks and other threats.
In an effort to bolster cybersecurity across the federal government, several government-wide initiatives, spearheaded by the Department of Homeland Security (DHS) and the Office of Management and Budget (OMB), are under way. These include the following: Personal Identity Verification: In 2004, the President directed the establishment of a government-wide standard for secure and reliable forms of ID for federal employees and contractor personnel who access government facilities and systems. Subsequently, OMB directed agencies to issue personal identity verification credentials to control access to federal facilities and systems. OMB recently reported that only 41 percent of user accounts at 23 civilian agencies had required these credentials for accessing agency systems. Continuous Diagnostics and Mitigation: DHS, in collaboration with the General Services Administration, has established a government-wide contract for agencies to purchase tools that are intended to identify cybersecurity risks on an ongoing basis. These tools can support agencies' efforts to monitor their networks for security vulnerabilities and generate prioritized alerts to enable agency staff to mitigate the most critical weaknesses. The Department of State adopted a continuous monitoring program, and in 2011 GAO reported on the benefits of the program and challenges the department faced in implementing its approach. National Cybersecurity Protection System (NCPS): This system, also referred to as EINSTEIN, is to include capabilities for monitoring network traffic and detecting and preventing intrusions, among other things. GAO has ongoing work reviewing the implementation of NCPS, and preliminary observations indicate that implementation of the intrusion detection and prevention capabilities may be limited and DHS appears to have not fully defined requirements for future capabilities. 
While these initiatives are intended to improve security, no single technology or tool is sufficient to protect against all cyber threats. Rather, agencies need to employ a multi-layered, “defense in depth” approach to security that includes well-trained personnel, effective and consistently applied processes, and appropriate technologies. In previous work, GAO and agency inspectors general have made hundreds of recommendations to assist agencies in addressing cybersecurity challenges. GAO has also made recommendations to improve government-wide initiatives.
As of December 2014, DOD's portfolio of major defense acquisition programs included 78 programs with a total estimated acquisition cost of roughly $1.4 trillion. The Under Secretary of Defense for Acquisition, Technology and Logistics is the defense acquisition executive, and for 38 of these 78 programs, the Under Secretary is the milestone decision authority, responsible for making decisions at major program milestones. These programs are referred to as Acquisition Category (ACAT) ID programs. For the remaining 40 programs, most of which are in production, the Under Secretary has delegated milestone decision making authority to the cognizant military service acquisition executive; these programs are referred to as ACAT IC programs. DOD also has programs that have not yet entered the engineering and manufacturing development phase. These programs are not yet part of the portfolio but are expected to enter it soon. For these programs, the Under Secretary is normally the milestone decision authority.

In DOD's acquisition process, weapon system programs typically proceed through three major milestones—A, B, and C—where program offices provide information to the milestone decision authority so that it can decide whether the program is ready to transition to the next acquisition phase. The milestones normally represent transition points in the overall acquisition process where there is a marked increase in the resources required for the program. Milestone A is the decision for an acquisition program to enter into the technology maturation and risk reduction phase; Milestone B is the decision to enter the engineering and manufacturing development phase; and Milestone C is the decision to enter the production and deployment phase. Figure 1 depicts DOD's acquisition process. DOD's acquisition process is managed and supported by officials at different hierarchical levels.
Weapon system program managers typically report to program executive officers in each military service who are charged with overseeing the execution of a portfolio of related systems such as fighter aircraft or ships. Program executive officers, in turn, typically report to a military service acquisition executive, who reports to the defense acquisition executive. As part of the milestone decision process, programs are reviewed at each level before reaching the milestone decision authority. Figure 2 shows the different levels. Statutes and DOD policy require the documentation of specific information on major defense acquisition programs at each acquisition milestone. Our review focused on the information required at Milestone B, most of which is also expected at Milestone C. Appendix II includes a list and description of these information requirements. While several different Office of the Secretary of Defense organizations and other organizations have responsibility for compiling and documenting the information, the majority of the responsibility rests with the program office managing the acquisition. For nearly 20 years, GAO has examined the best practices for product development from over 40 commercial firms to identify potential opportunities for DOD to adopt and implement those practices. For this review, we visited five leading commercial firms that follow a gated or milestone process in developing new products. While their business models are different than DOD’s, and often their products are less technically complex, commercial firms share a common goal with DOD in delivering their products to their customer on time and within cost estimates. Leading commercial firms can provide alternative approaches for milestone decision processes. Programs we surveyed spent on average over 2 years completing the steps necessary to document up to 49 information requirements for their most recent acquisition milestone. 
This includes the time for the program office to develop the documentation and for various stakeholders to review and approve the documentation. These 49 information requirements also took, in total, on average 5,600 staff days for programs to document. However, on average, almost half of these requirements, 24 of the 49, were not highly valued by the acquisition officials we surveyed. Four major defense acquisition programs we examined illustrate the challenges in completing the milestone decision process. Programs can spend a significant amount of time documenting up to 49 information requirements in advance of a Milestone B or C review. The requirements cover a vast array of program information, such as information on the overall acquisition strategy to justify the business case for a program; detailed implementation plans, such as those for systems engineering and testing; informational reports, analysis, and assessments; and decisions and certifications. We surveyed 24 program managers that held a milestone B or C decision since 2010 and found that it took them over 2 years on average to complete the entire set of documents needed for the milestone decision. The program managers, as well as other acquisition officials we surveyed, considered on average about half of the information requirements as not highly valued. Figure 3 provides a summary of this information. More details about the survey results are presented in appendix III. Programs spent an average of about 1 year to complete each information requirement. However, as shown in figure 3, there was a wide range in the length of time it took to complete documentation, as some took almost 2 years to complete and some took less than 6 months. About half of the time for each information requirement was spent documenting the information and the other half for review. These 49 requirements also took, in total, on average 5,600 staff days for programs to document. 
We did not ask programs to provide data on the staff days needed to review and approve the documentation because they do not have access to data on the amount of time officials at levels above them spend completing this process. As shown in figure 3, acquisition officials on average considered 24 requirements as providing high value to their organization's role in the milestone decision process, 20 requirements as providing moderate value, and 5 requirements as providing less than moderate value. Information requirements considered high value by stakeholders include a program's acquisition strategy, sustainment plan, and information related to planned technologies, cost, and testing. Several senior acquisition officials we met with considered many of these requirements as critical to the program's business case, which typically includes documentation on the capabilities required of the weapon system, the strategy for acquiring the weapon system, and the cost, schedule, and performance baselines. Information requirements valued the least (less than moderate value), on the other hand, include such documentation as the benefit analysis and determination for potentially bundling contract requirements; the Clinger-Cohen certification for information technology investments; the corrosion prevention control plan to assess the impact of corrosion on cost, availability, and safety of equipment; the item unique identification implementation plan for managing assets; and the replaced system sustainment plan for documenting the estimated cost to sustain a system until the new program is fielded. One service acquisition executive, for example, stated that program managers should not have to develop an item unique identification implementation plan because government contractors put the unique identification numbers on parts. Another senior official stated that the Clinger-Cohen Act requirements are geared towards the acquisition environment of the 1990s.
This official believes the requirements should be updated to reflect the current environment for procuring information systems. As part of the process of documenting the information required at the milestones, program officials brief cognizant officials responsible for the different functional areas, such as test or systems engineering, as well as senior leadership and the milestone decision authority on specific aspects of the program’s overall plans. The briefings, done in parallel with the actual process of documenting the required information, are used as a forum for DOD to discuss the information and to determine a program’s readiness for the milestone decision. Program offices can spend a great deal of time and effort briefing the different officials and senior leaders in advance of the milestone decision. Data provided by 9 of the programs we surveyed that recently had a milestone B decision showed that programs provided an average of 55 briefings over a period of just over a year and a half leading up to the milestone. We examined four major defense acquisition programs, at least one from each military service that recently held a milestone decision, to get more specific details of the time and effort expended by programs to complete the milestone decision process. All four programs needed about 24 months to complete the process. While the number of documents varied for each program, it took an average of over 13 months to complete each document based on three programs that could provide data. Two of these programs used contractors to provide assistance in completing the documents. Figure 4 provides a summary of the overall effort required of these four programs—two of which were preparing for milestone B and the other two were preparing for milestone C. Two of the programs that tracked the staff days required to prepare the milestone documents told us they spent 3,800 and 9,867 staff days, respectively. 
These same programs also used contractors to assist with the documents. A primary reason it takes over 2 years to complete the information required for a milestone decision is the large number of stakeholders who review the documents at the many organizational levels above the program office. We found that stakeholders in many different offices across 8 different levels can review the information and documentation needed to support a milestone decision. According to the program offices we surveyed, these reviews added only moderate or less value to most documents. DOD recognizes that it has too many levels of review and has several initiatives to eliminate the acknowledged bureaucracy, but has had limited success implementing changes to reduce the time and effort needed to review documentation.

The information and documentation required at milestones can be reviewed by as many as eight different organizational levels before a decision is reached on whether a program is ready for the next acquisition phase. In general, the information is reviewed at each level to gain approval before the program provides the information to the next level. This is done serially, which takes more time. Eventually, the defense acquisition executive and other senior executives review the information and determine whether the program is ready to proceed to the next acquisition phase. Figure 5 shows the multiple levels of reviews. Many different functional organizations within each level review the information before the document is approved. The number of organizations conducting reviews varies depending on the information included in each document. A few documents that include a wide breadth of information can be reviewed by many offices at each level. For example, Air Force acquisition strategies, which on average took over 12 months to complete for the programs we surveyed, can be reviewed by 56 offices, some more than once, before being approved.
Figure 6 lists the organizations involved in this review process. The reviews of more narrowly focused documentation also go through different levels, but may be reviewed by fewer organizations at each level. As one example, offices at as many as four levels took an average of 7 months to review a program's Technology Readiness Assessment, based on responses to our survey. This assessment is prepared prior to Milestone B to show the results of an assessment of the maturity levels of the critical technologies planned to be integrated onto the program. Initially, the program office prepares an assessment of the different technologies' maturity levels, taking into account the conclusions reached by a panel of independent subject matter experts. Then, the program executive officer reviews and approves the assessment. Next, a service level expert, with possible assistance from a science and technology expert, reviews the assessment. After that, officials from the Office of the Secretary of Defense, Assistant Secretary for Research and Engineering office evaluate the assessment and make their own independent assessment. Finally, the milestone decision authority certifies whether to approve the program to enter engineering and manufacturing development or defer this decision until technologies are mature. Each of these four levels of review, done serially, can present new questions and comments that need to be resolved before the program can satisfy the information requirement. The lack of improvement in program results is related instead to incentives. We have reported previously on several factors that create incentives for DOD to deviate from sound acquisition practices and reform initiatives (see, for example, GAO, Defense Acquisitions: Assessments of Selected Weapon Programs, GAO-08-467SP (Washington, D.C.: Mar. 31, 2008); and Defense Acquisitions: Assessments of Major Weapon Programs, GAO-04-248 (Washington, D.C.: Mar. 31, 2004)).
These factors include (1) mismatches between capability requirements and the knowledge, funding, and time planned to develop a new system, (2) programs being started to fill voids in military capability but quickly evolving to address other, conflicting demands, and (3) programs being funded in a way where there are few consequences if funding is not used efficiently.

DOD has recognized that its extensive review process is a challenge. A DOD study in 2011 highlighted the many organizational levels of oversight and said DOD has a "checkers checking checkers" system, which contributes to inefficiencies that can undermine program managers' execution of programs because they spend too much time complying with the oversight process, including documenting the information requirements. Several program officials told us they spend extensive time and resources addressing conflicting comments/concerns expressed by the functional offices at the different levels during the review process. Officials also told us the functional staff conducting reviews typically wanted significantly more information than their superiors want or need and this often leads to multiple revisions. For example, the Deputy Assistant Secretary for Systems Engineering has indicated he wants limited, specific information in a systems engineering plan and even issued guidance to promulgate this direction. Despite this direction, we were told the systems engineering plan for one Navy program grew from 100 pages to 243 pages in length, because staff wanted additional information added as it went through the review process. In contrast, however, one of the three service acquisition executives we surveyed and some senior level officials within the Office of the Secretary of Defense stated that staff reviews are helpful as they ensure the documentation is sufficient before executives at each level perform their review.

Service officials also told us that while it is important to get input from functional staffs on their areas of expertise, these staffs can have "tunnel vision," or focus only on their respective area and do not adequately consider whether their recommended changes to documentation might add schedule time, additional costs, or have other effects on a program. Officials expressed frustration that functional staffs are not held accountable for the potential effect on a program as a result of their recommended changes. Recently, the Under Secretary of Defense for Acquisition, Technology and Logistics has tried to clarify the role of some staff, stating in a memorandum that the service acquisition executives, the program executive officers, and program managers are responsible and accountable for the programs they manage; everyone else (i.e., staff supporting the Office of the Secretary of Defense staff) has a supporting or advisory role.

While there are multiple levels and many organizations involved in reviews, overall the 24 program managers we surveyed did not think these reviews added significant value to the documentation. The program managers considered the value added to 10 percent of the documentation to be high. However, for the remaining 90 percent of the documents, the officials believed the reviews did not add high value—61 percent were moderate and 29 percent less than moderate. Figure 7 provides a summary of the program offices' assessment. Of the 14 documentation reviews that were considered to add less than moderate value, 2 documents were reviewed for an average of 10 months each and the others ranged between 2.5 and 8.5 months. Other service level officials we surveyed—program executive officers and the three service acquisition executives—had views similar to the program managers; they considered the value added to be high for less than 10 percent of the documentation.
DOD has acknowledged that too much time is spent on reviews and preparing documents and has taken some steps over the past several years to address some of the unproductive steps identified in its milestone decision processes. For the most part, however, efforts to date have been limited in scope and have not had a significant effect on the amount of time and effort program offices spend on documentation required at milestones. One has even stalled. Examples of these efforts include the following. In 2011, the Under Secretary for Acquisition, Technology and Logistics delegated the approval authority for three milestone documents from the Office of the Secretary of Defense level to the service level. This reduced the number of levels of review and reviewers of these documents. A DOD official told us the approval authority for additional documents could be delegated in the future, but no additional documents are currently being considered. In 2013, the Under Secretary for Acquisition, Technology and Logistics asked the service acquisition executives to identify programs where the milestone decision authority could potentially be delegated from the Office of the Secretary of Defense to a lower level. Delegation to the lower level also reduces the number of levels of review and reviewers. The services identified 18 programs: 7 from the Air Force, 5 from the Navy, and 6 from the Army. In September 2014, the Under Secretary of Defense for Acquisition, Technology and Logistics delegated the authority to act as the milestone decision authority to the Secretary of the Air Force for 3 programs, the Secretary of the Navy for 1 program, and the Secretary of the Army for 1 program. In April 2013, the Under Secretary of Defense for Acquisition, Technology and Logistics issued guidance that included a potential pilot test of a "skunkworks" process for major defense acquisition programs.
The Under Secretary requested that each service recommend one candidate program for a pilot test by July 2013. As of October 2014, programs had not been identified and the effort had been placed on hold. Office of the Secretary of Defense officials stated it has been difficult to identify programs that meet the Under Secretary's expected preconditions, namely programs that have well-defined requirements, a strong relationship with industry, and a highly qualified and appropriately staffed government team that can remain with the program until it is delivered. In 2014, DOD began using an Electronic Coordination Tool designed to electronically disseminate and track the progress of documentation being reviewed. The tool is used to enforce time limits for the review of documents and provide near real-time views of all comments made during the review process to promote greater efficiency across the department. DOD officials have begun using this tool with the Acquisition Strategy and hope to add more documents over time. DOD is currently assessing many of the documents it develops in response to statutory information requirements and plans to propose legislative modifications to Congress in the spring of 2015 to help streamline documentation while still meeting the intentions of the statutes. DOD's revised acquisition policy has also placed greater emphasis on "tailoring," which means modifying the traditional acquisition process, including documentation and reviews, to best suit a program's needs. However, a few program officials told us that trying to tailor by obtaining waivers for milestone requirements involves significant time and effort, and that it is often easier to simply complete the requirements than to try to obtain waivers.
While we did not examine the overall use of tailoring by DOD programs during our review, we examined two programs that attempted to tailor documentation and reviews, but in the end, neither was able to make significant changes. Specifically, officials from the F-22 Increment 3.2B program told us they requested waivers for 17 requirements, but ultimately only 2 were waived. In addition, the Long Range Strike-Bomber, in direction provided by the former Secretary of Defense, was to be managed with a streamlined approach. The program was initially allowed the flexibility to tailor many of the needed documents and reviews. However, over time, these flexibilities have been scaled back. DOD has proven it can streamline its process. Several past programs, such as the F-16 and F-117, were managed successfully with a more streamlined approach, and DOD is currently using a more streamlined milestone decision process for some classified programs. Commercial companies we examined—Boeing, Caterpillar, Cummins, Honda, and Motorola Solutions—also use processes that minimize the levels of review, resulting in a quicker, more efficient milestone decision process. In 1971, DOD issued its initial 5000 acquisition policy. The policy, which totaled seven pages, provided for minimal formal reporting and more streamlined layers of authority than the complex process in place today. Specifically, the original guidance provided for (1) minimal layers of authority above the program office; (2) few demands on programs for formal reporting; (3) minimal demands for non-recurring information, with responses to such requests handled informally; and (4) the development of a single, key document to support program management and milestone decision making. Over time, a large, bureaucratic process has supplanted these elements. For example, requirements have been added to improve cost estimating, logistics planning, design reviews, and technology maturity assessments.
Each of these areas has been in great need of improvement, and individual documentation and review requirements were aimed at addressing known shortfalls. Several studies by acquisition experts over the past decade have highlighted the need for DOD to again streamline its process. For example, a Defense Acquisition Performance Panel stated in its 2006 report that complex acquisition processes do not promote program success, but increase cost, add to schedules, and obfuscate accountability. The Panel recommended that DOD create a streamlined acquisition organization with accountability assigned and enforced at each level. In 2009, the Defense Science Board reported that DOD's milestone decision process should take a few days of preparation, not the months and months currently required. The report described a process with too much bureaucracy, overlap and diffusion of responsibilities, and a need for excessive coordination among acquisition organizations, and recommended that DOD streamline the acquisition process. The F-16 program, developed in the 1970s, was managed under a streamlined process laid out in the early acquisition guidance. DOD officials have often stated that one contributing factor to the F-16 program's success was the use of a more streamlined approach, where the number of levels of review and reviewers was minimized and emphasis was placed on real-time program reviews in lieu of preparing formal reports and documents. According to one former DOD senior official, the program office was staffed with an experienced program manager and functional experts, who worked closely and collaborated with functional offices to achieve a common goal of fielding a usable combat capability as quickly as possible. The F-16 program also operated with different incentives than most programs, which enabled a more streamlined approach.
For example, the F-16 was developed as a low-cost fighter with a strategy that involved making incremental technology improvements and incorporating performance trades by the customer to keep costs down. The F-117 aircraft, which was largely developed in the early 1980s in a classified security environment, was managed with a "skunkworks" approach. According to a RAND study, central to the F-117 program approach was its flexibility and responsiveness in decision-making. DOD leadership delegated more decision-making to the program office, with an associated reduction in detailed, document-based oversight by higher levels. RAND stated that the willingness to delegate decision-making authority to lower levels enabled a quicker response to problems. A former Air Force senior leader during the program's development, who later served as the Under Secretary of Defense (Acquisition and Technology), stated that the program held monthly meetings between the functional managers and program management. The meeting participants were empowered to make decisions and did not need to seek approval from their senior leadership after the meetings. It was expected that any issues would be addressed with their leadership prior to the meetings. The frequent interactions reduced the need for reports, documents, and reviews. In another report, a former senior Air Force acquisition officer, who also served as program director for the F-117, reported that the ability to have a quicker process comes from pushing decision-making to the lowest levels without having to proceed up the chain of command for approval to implement decisions. DOD is using a more streamlined approach for some of its current classified programs that may have the potential to make the milestone decision process more efficient. A few classified programs we reviewed are managed with a process that includes fewer levels and reviewers between the program office and decision authority.
For these programs, the program manager reports to the program executive officer, who reports directly to a Board of Directors comprised of the service acquisition executive, service secretary and chief, and the defense acquisition executive. The Board of Directors serves as the milestone decision authority for the programs. Decisions by the Board are unanimous agreements by all members. Leading up to the Board meeting, programs have separate, focused interactions with a small number of key functional offices as necessary. According to service officials, establishing this short, narrow chain of command allows for a more expedited decision-making process that requires less time and fewer resources. Figure 8 shows the levels of review for the milestone decision process. Section 2430 of title 10, U.S. Code, specifically excludes highly sensitive classified programs (as determined by the Secretary of Defense) from the definition of a major defense acquisition program. Therefore, statutes governing major defense acquisition programs generally do not apply to classified programs. Commercial companies we examined—Boeing, Caterpillar, Cummins, Honda, and Motorola Solutions—use a more streamlined process than DOD traditionally uses for its major defense acquisition programs. Companies prepare documents similar to those of DOD acquisition programs, but only a few of the most critical ones, the business case documents, require senior management approval. A key enabler of this approach is the establishment of frequent, regular interactions between program officials and decision makers. Companies minimize the levels of review needed to determine whether a program is ready to advance to the next acquisition phase, resulting in a quicker, more efficient process. The companies prepared documents similar to those of DOD, such as development, test, engineering, and manufacturing plans.
Officials at Motorola Solutions, Cummins, and Boeing stated that most documents are prepared and approved by functional managers assigned to the program office core team. Programs prepare an integrated document that summarizes key program information for the decision makers to review and approve. Figure 9 illustrates the levels at which documents are generally prepared and approved for commercial companies we visited. According to company officials, the integrated document may include information on customer requirements, resources, schedules, risks, technical data, and market launch plans. As part of the milestone decision process, program managers also provide evidence that the other program documents have been completed and approved by the appropriate official. For the companies we visited, ensuring that the program management team has a strong link to decision makers was a critical factor to their streamlined approach. Several companies held meetings between program officials and senior managers at frequent intervals to assess progress towards the next milestone decision. Officials stated that frequent, regular interactions enable senior managers to stay informed of program issues and plans, allowing the decision meeting to focus on making a well-informed decision, instead of spending time bringing decision makers up to date (see fig. 10). Cummins functional managers, for example, meet one-on-one with senior program managers on a monthly basis to review program progress. Officials stated that about 2 weeks prior to the decision meeting, a comprehensive review is conducted with each program’s functional area manager, supporting team, and senior functional manager to ensure required activities have been completed before a milestone review and the plan going forward is sound. 
Boeing officials stated that they conduct a series of monthly meetings between program functional area managers and senior managers to assess whether a program is meeting the criteria needed for moving into the next phase. According to officials, the results provide support for the milestone decision. Honda has established an environment that encourages frequent, direct interaction between program participants. Senior managers, program managers, and staff work in an open bullpen environment, rather than offices. This layout facilitates real-time discussions across organizational levels, multiple programs, and functional areas. Issues can be quickly discussed and resolved as they arise, so only the most important ones need to be addressed at the milestone decision meetings. Companies we visited told us it typically takes only a few months, or sometimes even a few weeks, to complete the milestone decision process. The process for these companies included one or two levels of review to assess whether a program is ready to advance to the next phase. For example, Motorola Solutions and Cummins use a process in which programs proceed directly to the decision maker after they have packaged together the information needed to support a milestone decision. Motorola Solutions program officials provide information to their decision makers about a week in advance of the decision meeting. During that week, program officials meet individually with principal members of the decision-making committee. The purpose of these meetings is not to present the program's plans but to address any last-minute concerns. According to Motorola officials, the decision meeting typically lasts about 30 minutes because issues are usually resolved in these earlier meetings. Boeing and Honda generally include one additional level of review.
The commercial model, in which good program outcomes can be achieved with a more streamlined oversight process, includes a natural incentive that engenders efficient business practices. Market imperatives incentivize commercial stakeholders to keep a program on track to meet business goals. In addition, awards and incentives for managers are often tied to the company’s overall financial success. As a result, commercial managers are incentivized to raise issues early and seek help if needed. They know if the program fails, everyone involved fails because market opportunity is missed and business revenues will be impacted. Commercial product development cycle times are relatively short (less than 5 years), making it easier to minimize management turnover and to maintain accountability. DOD’s acquisitions occur in a different environment in which cycle times are long (10 to 15 years), management turnover is frequent, accountability is elusive, and cost and schedules are not constrained by market forces. Seen in this light, DOD must have an oversight process that substitutes discipline for commercial market incentives. Several industry officials stated that companies often add oversight levels or reviews as a first reaction after failures or problems occur. However, the officials further stated that this does not solve the root problems and often it makes the process less efficient. Two companies we visited highlighted an inspection-intensive oversight process they implemented as a deliberate attempt to address problems that had occurred but found that it led to an adversarial environment and an inefficient process. Both companies eventually abandoned this approach and replaced it with an approach where program officials are incentivized to reach out to recognized experts within the company for assistance when needed. 
Over time, DOD has essentially tried to overcome a legacy of negative cost and schedule weapon system program outcomes by requiring extensive documentation to support program strategies, plans, and other information prior to a milestone decision. Much of the information required in this documentation was added by policy as well as statute and these requirements likely represented legitimate reactions to problems. However, the consequence of this approach is that an extensive process has built up, in which program offices and other DOD organizations spend an enormous amount of time and effort preparing and reviewing documentation. Given the persistence of weapon system acquisition problems over decades, especially schedule delays and cost overruns, the effort involved with documenting and reviewing information requirements does not appear to correspond to the value gained. Programs we surveyed spent over 2 years completing information requirements that in some instances can be reviewed by as many as 56 organizations at eight levels. In the end, program officials felt almost half of these information requirements were not of high value. Further, program managers did not highly value the reviews by higher level DOD organizations for 90 percent of the documentation. The need to document information about essential aspects of a program and for an appropriate level of review and approval is legitimate. However, over time, the outcomes of weapon system programs have proven resistant to the oversight process. At the same time, the process has become bloated, time-consuming, and cumbersome to complete. The challenge is to find the right balance between having an effective oversight process and the competing demands such a process places on program management. 
Meeting the challenge will depend on DOD’s ability to identify the key problem areas in weapon system acquisitions and the associated root causes that exist today and whether information requirements and reviews are linked to addressing these problems. As we have noted in prior work, the most important information requirements—those that enable a program to establish a sound business case—include well-defined requirements, reasonable life-cycle cost estimates, and a knowledge-based acquisition plan. If information requirements and reviews are not clearly linked with the elements of a sound business case and/or the key issues facing acquisitions today, then they can be streamlined or even eliminated. If they are linked, but are not working well, then they warrant re-thinking. While the data support that change is needed, change does not mean weakening oversight, as unsatisfactory outcomes from acquisition programs may persist. Rather, the goal of change is to perform effective oversight more efficiently, and to recognize problems or incentives that require remedies and not just more information requirements. In this time of decreasing defense budgets, where every dollar spent on inefficient activities is one less dollar available for modernizing our future force, a close look at the review process is warranted to provide stakeholders needed information in a more efficient and cost effective manner. The surveys of DOD acquisition officials we conducted, the results of which are shown in figure 3 and appendix III, highlight information requirements that provide less than moderate value to acquisition officials. These requirements, as well as ones that take a year or more to complete, could serve as a starting point for discussions on what documentation is really needed for weapon acquisition programs and how to streamline the review process. 
Officials within the Office of the Secretary of Defense believe the Electronic Coordination Tool shows promise for reducing review times on documents. Currently, it is being used to reduce review times for acquisition strategies, and other documents may be added in the future. Automating the document review process, however, is relatively easy compared to potentially eliminating levels of review because that will require DOD to move away from its “checkers checking checkers” culture and make tough choices as to which levels of review do not add value and are not necessary. If DOD does not eliminate levels of review, inefficiencies are likely to continue. According to federal internal control standards, agencies should develop effective and efficient processes to ensure that actions are taken to address requirements, such as in this case, completing the required information to aid in milestone decisions. In other words, DOD should be striving to make its process more efficient. Selecting pilot test programs to experiment with streamlined acquisition processes, while capturing lessons learned from the pilot, would be steps in the right direction. The pilot programs could rely on practices used by some DOD classified programs and private industry companies we visited—namely, fewer information requirements and levels of review and more frequent interaction between the program office and actual decision makers. 
To help improve DOD's milestone decision process, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics, in collaboration with the military service acquisition executives, program executive officers, and program managers, to take the following two actions:

In the near term, identify and potentially eliminate (1) reviews associated with information requirements, with a specific focus on reducing review levels that do not add value, and (2) information requirements that do not add value and are no longer needed. For the remaining reviews and information requirements, evaluate and determine different approaches, such as consolidating information requirements and delegating approval authority, which could provide for a more efficient milestone process. This effort should also include a re-examination of the reason(s) why an information requirement was originally considered necessary in order to determine what information is still needed and whether a more efficient approach could be used. Findings and survey responses included in this report could be used as a starting point for this examination.

As a longer-term effort, select several current or new major defense acquisition programs to pilot, on a broader scale, different approaches for streamlining the entire milestone decision process, with the results evaluated and reported for potential wider use. The pilot programs should consider defining the appropriate information needed to support milestone decisions while still ensuring program accountability and oversight. The information should be based on the business case principles needed for well-informed milestone decisions, including well-defined requirements, reasonable life-cycle cost estimates, and a knowledge-based acquisition plan.
The pilot programs should also consider developing an efficient process for providing this information to the milestone decision authority by (1) minimizing any reviews between the program office and the different functional staff offices within each chain of command level and (2) establishing frequent, regular interaction between the program office and milestone decision makers, in lieu of documentation reviews, to help expedite the process.

DOD provided us with written comments on a draft of this report. DOD concurred with both of our recommendations. DOD's comments are reprinted in appendix IV. DOD concurred with our first recommendation, indicating that the Department's Better Buying Power initiative contains efforts to streamline documentation requirements and staff reviews and that its recent (February 2015) set of legislative proposals to Congress seeks to reduce some DOD reporting requirements. We acknowledge these efforts as steps in the right direction. We believe DOD can and should do more to eliminate reviews and information requirements that do not add value and are no longer needed. For the most part, efforts to date have been limited in scope and have not yet had a significant impact on the amount of time and effort program offices spend on documentation required at milestones. The Under Secretary of Defense for Acquisition, Technology and Logistics acknowledged in April 2014 that DOD has not had significant success in eliminating unproductive processes and bureaucracy. We also note that DOD's recent set of legislative proposals to Congress for inclusion in the National Defense Authorization Act for Fiscal Year 2016 primarily seeks to reduce reporting requirements but does not address streamlining the many levels of review. As we reported, a primary reason it takes over 2 years to complete the information required for a milestone decision is the large number of stakeholders that review the documents at the many organizational levels above the program office.
While it will take a coordinated effort on the part of the Department, we believe DOD can reduce the many levels of review. DOD also concurred with our second recommendation, indicating that while not yet fully implemented, it has a Better Buying Power initiative to identify appropriate programs to pilot test a streamlined acquisition approach. As we reported, however, DOD has not yet identified candidate programs even though the initiative was proposed in April 2013 and was supposed to begin in July 2013. DOD officials told us it has been difficult to identify programs that meet the preconditions for the pilot set by the Under Secretary of Defense for Acquisition, Technology and Logistics, namely programs that have well-defined requirements, a strong relationship with industry, and a highly qualified government team that can remain with the program until it is delivered. We encourage DOD to initiate the pilot, specifically on some current programs that have recently held a Milestone B review or will be approaching this milestone soon (e.g., Presidential Helicopter, Armored Multi-Purpose Vehicle, Joint Air-to-Ground Missile, Next Generation Jammer, Amphibious Combat Vehicle), as long as the aforementioned criterion of well-defined requirements is considered. Almost two years have passed since the initiative was first proposed, and even after DOD decides on the specific programs for the pilot, it will most likely be several years until lessons learned can be documented and potentially applied to other programs. We reiterate that, when implemented, each pilot should examine different approaches for streamlining the entire milestone decision process, including defining the appropriate information needed to support milestone decisions, such as business case principles like well-defined requirements, reasonable life-cycle cost estimates, and a knowledge-based acquisition plan.
Pilot programs should also strive to develop a more efficient process for providing this information to the milestone decision authority, which would most likely include minimizing reviews between the program office and the different functional staff offices within each chain of command level and establishing frequent, regular interaction between the program office and milestone decision makers, in lieu of documentation reviews. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology and Logistics; and the Secretaries of the Air Force, Army, and Navy. This report also is available at no charge on GAO’s website at http://www.gao.gov. Should you or your staff have any questions on the matters covered in this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This report examines the Department of Defense’s (DOD) weapon system acquisition process. Specifically we examined (1) the effort and value involved in DOD’s preparation for a milestone decision, (2) the factors that influence the time needed to complete the milestone decision process, and (3) alternative processes used by some DOD programs and leading commercial firms. To determine the value and effort involved in DOD’s preparation for a milestone decision, we collected data from current and future major defense acquisition programs. 
First, we distributed two questionnaires by email, in an attached Microsoft Word file, asking program managers of current major defense acquisition programs to (1) provide a value for each information requirement applicable at either Milestone B or Milestone C; (2) provide a value for the review of each information requirement for either Milestone B or Milestone C; (3) provide the length of time required to develop each information requirement; (4) provide the number of staff days spent by the program office to develop each information requirement; (5) provide the length of time it took each information requirement to get through the review and approval process; and (6) identify the primary users and customers of each information requirement. One questionnaire was sent to 11 program managers of current major defense acquisition programs identified in the Defense Acquisition Management Information Retrieval system as having completed a Milestone B decision review since January 1, 2010, and a different questionnaire was sent to 15 program managers of current major defense acquisition programs identified in that system as having completed a Milestone C decision review since January 1, 2010. We received responses from 24 program managers between July and October 2014: 11 program managers from Milestone B programs and 13 program managers from Milestone C programs. Because there is a slight variation in the number of information requirements applicable at Milestone B versus at Milestone C, in our analysis we excluded 2 information requirements applicable at Milestone C—the Capability Production Document and the General Equipment Valuation. We took a number of steps to ensure the reliability of the data collected through our questionnaires, including reviewing responses to identify obvious errors or inconsistencies and conducting follow-up to clarify responses when needed.
Second, in a separate data collection effort to determine the number of briefings and the length of time needed to complete the milestone decision process, we submitted questions for an electronic questionnaire distributed to 55 programs as part of GAO’s Annual Weapons System Assessment. We asked programs if they had completed a milestone decision review as of January 1, 2011, and if so, to provide additional information regarding that milestone review. Twenty-four of the 55 programs responded that they had completed a milestone decision review in that time frame; however, not all of those programs provided information on the review. Four programs were excluded from our analysis because they were unable to provide the additional data. Another 5 programs were excluded because we determined they had been designated as Acquisition Category IC programs; our analysis was limited to Acquisition Category ID programs. Of the 15 programs in our analysis, 11 are current programs and 4 are future programs. Our results are not intended to be generalizable and, as such, results from nongeneralizable samples cannot be used to make inferences about all major defense acquisition programs. To better understand DOD’s milestone process, we selected 4 major defense acquisition programs to use as case studies to gain more in-depth knowledge about the milestone decision process: the Air Force’s F-22 Increment 3.2B Modernization program; the Navy’s P-8A program; and the Army’s Joint Light Tactical Vehicle and Paladin Integrated Management programs. We used a data collection instrument to ensure we received similar information for all 4 case study programs in our review. We collected data on the number of briefings the program office held with program executive officers, service-level officials, and Office of the Secretary of Defense-level officials; the number of documents the program completed for the milestone decision review; and a timeline of their milestone decision review.
In addition, we asked programs to provide detailed information related to the information requirements they prepared for the milestone, including the length of time spent documenting each information requirement; length of time it took the documentation to make it through the review process; the number of staff days the program office spent documenting each information requirement; and the cost to document each information requirement. We also reviewed milestone documents that programs prepared for the milestone in order to better understand what information is contained within the documents. Finally, we met with program officials from each case study program to obtain additional information on the milestone decision process. We selected our case studies based on input from officials with the military services using the criterion that the program had been through either Milestone B or Milestone C since January 1, 2010. Further, the programs we selected for review represent each of the military services. Two programs—F-22 Increment 3.2B Modernization and Joint Light Tactical Vehicle—completed a Milestone B review and 2 programs—P-8A and Paladin Integrated Management—completed a Milestone C review. While our sample of four case studies allowed us to learn about inefficiencies with the milestone decision process, it was designed to provide anecdotal information, not findings that would be representative of all of the department’s major defense acquisition programs. To determine the factors that influence the time needed to complete the milestone decision process, we met with officials and functional leaders and reviewed documents from several organizations within the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, including the Under Secretary. 
Specifically, we met with officials from the offices of (1) Acquisition Resources and Analysis; (2) Defense Procurement and Acquisition Policy; (3) Deputy Assistant Secretary of Defense for Systems Engineering; and (4) Deputy Assistant Secretary of Defense for Developmental Test and Evaluation. We also met with officials from the Office of the Director, Cost Assessment and Program Evaluation; and the Office of the Director, Operational Test and Evaluation. We also met with officials and reviewed documents from the military services, including the Department of the Air Force, Department of the Army, and the Department of the Navy, including the service acquisition executives. Within each military service, we also met with officials from functional offices including the (1) Air Force Director of Test and Evaluation; (2) Deputy Assistant Secretary of the Air Force for Science, Technology, and Engineering; (3) Deputy Assistant Secretary of the Navy for Research, Development, Test & Evaluation; (4) Deputy Under Secretary of the Army for Test and Evaluation; and the (5) Director of Army System of Systems Engineering and Integration. In order to capture the views of officials at the different levels involved in the milestone decision process, we also sent a questionnaire to program executive officers with responsibility for defense acquisition programs, all 3 military service acquisition executives, and 13 Office of the Secretary of Defense organizations identified as key stakeholders in the acquisition milestone decision process. We received responses from 25 program executive officers, all 3 military service acquisition executives, and 12 Office of the Secretary of Defense organizations. 
We analyzed the data provided by program managers, program executive officers, military service acquisition executives, and Office of the Secretary of Defense officials to determine the overall value of the milestone information requirements and the overall value of the review of the information requirements to the various groups involved in the milestone decision process. Our results are not intended to be generalizable and, as such, results from nongeneralizable samples cannot be used to make inferences about all major defense acquisition programs. Further, we reviewed relevant statutes, DOD policies, and military service guidance for DOD acquisitions. To examine alternative processes used by some DOD programs, we reviewed the processes used by some current classified programs. We also reviewed reports and studies done by acquisition experts that examined past programs, including the F-117 and F-16, which successfully used a more streamlined process. In addition, we examined acquisition policies that were in place at the time of these programs’ development. To identify practices used by leading commercial firms that might be used to improve DOD’s acquisition process, we visited five companies to learn more about how they manage their product development processes. We selected these companies, in part, based on our previous GAO best practices work. These companies are recognized leaders in their industries with successful, proven product development processes. The companies selected for our review include: Boeing, a leading aerospace company and a manufacturer of commercial jetliners. We met with officials and discussed their practices for managing the development of commercial aircraft in Seattle, Washington. Caterpillar Inc. (Caterpillar), a leading manufacturer of construction and mining equipment, diesel and natural gas engines, and industrial gas turbines. We met with officials in Peoria, Illinois. Cummins Inc.
(Cummins), a leading manufacturer of diesel and natural gas-powered engines for on-highway and off-highway use. We met with officials in Columbus, Indiana. Honda of America Manufacturing, Inc. (Honda), a leading manufacturer of motorcycles and automobiles. We met with officials at their location in Raymond, Ohio. Motorola Solutions, a leading manufacturer of data capture devices such as professional and commercial radios and communication systems. We met with officials in Schaumburg, Illinois. At each company we discussed the new product development process from concept to full production; the methods, tools, measures and metrics used by leadership in monitoring and overseeing product development execution progress; and roles and responsibilities of the product development manager. We conducted this performance audit from January 2014 to February 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Enclosure 1 of the Interim Department of Defense (DOD) Instruction 5000.02 identifies several information requirements which must be documented for the milestones of the DOD acquisition process, as well as the source of each requirement in statute, a DOD directive, instruction, and/or manual, or a regulation. Not all of the information requirements are applicable at every milestone and not all of the requirements equate to a separate document. Our review focused on documentation related to 49 statutory and policy information requirements that is expected to be completed at Milestone B, most of which is also expected at Milestone C. Of the 49 requirements applicable at Milestone B, 44 are also applicable at Milestone C. 
Two other information requirements are only applicable at Milestone C and not at Milestone B. We did not include these two requirements in our review, but they are listed below for a total of 51 requirements. The number of documents a program has to complete will vary depending on the type of program. For example, space programs have to complete an Orbital Debris Mitigation Risk Report, which is not required for non-space programs, and some requirements apply only to programs acquiring information technology. Descriptions of the information requirements follow:
- Memorandum reflecting the milestone decision authority’s certification, prior to granting milestone approval, as to certain program matters.
- Documents the decisions and direction resulting from each milestone and other major decision point reviews.
- Agreement between the milestone decision authority and the program manager and his/her acquisition chain of command that will be used for tracking and reporting for the life of a program or program increment; contains schedule, performance, and cost parameters that are the basis for satisfying an identified mission need.
- Describes the overall strategy for managing an acquisition program, including the program manager’s plan to achieve programmatic goals, and summarizes the program planning and resulting program structure.
- Provides a design constraint on the product DOD will build, procure, and sustain based upon the budgets DOD expects to have for the product over its life cycle.
- Summarizes an analytical comparison of the operational effectiveness, suitability, and life-cycle cost (or total ownership cost, if applicable) of alternatives that satisfy established capability needs.
- Documents the bandwidth requirements needed to support a program and how they will be met.
- For bundled acquisitions, an analysis to determine the relative benefit to the government among two or more alternative procurement strategies and a determination of whether consolidation of the requirements is necessary and justified.
- Describes how to redesign the way work is done to improve performance in meeting the organization’s mission while reducing costs.
- Defines authoritative, measurable, and testable parameters across one or more increments of a material capability solution, by setting key performance parameters, key system attributes, and additional performance attributes necessary for the acquisition community to design and propose systems and to establish programmatic baselines.
- Provides authoritative, testable capability requirements, in terms of key performance parameters, key system attributes, and additional performance attributes, for the production and deployment phase of an acquisition program.
- For programs that acquire IT, documents compliance with the various requirements of the Clinger-Cohen Act of 1996, subtitle III of title 40, U.S. Code.
- For programs containing information technology, documents a program’s plan for ensuring cybersecurity.
- Promotes, monitors, and evaluates programs for the communication and exchange of technological data among defense research facilities, combatant commands, and other organizations involved in developing technological requirements for new items.
- Documents the contract type selected by the milestone decision authority for a major defense acquisition program that is consistent with the level of program risk.
- Ensures that opportunities to conduct cooperative research and development projects are considered at an early point during the formal development review process by indicating whether or not a project similar to the one under consideration by DOD is in development or production by another country or organization.
- Determination of whether the weapon system or military equipment being acquired is necessary to enable the armed forces to fulfill the strategic and contingency plans prepared by the Chairman of the Joint Chiefs of Staff. If the determination is positive, then an estimate of those core capability requirements and sustaining workloads is provided, organized by work breakdown structure and expressed in direct labor hours.
- Documents the plan to prevent and control corrosion from impacting the availability, cost, and safety of military equipment.
- Cost Analysis Requirements Description: Describes formally an acquisition program for purposes of preparing both the DOD Component Cost Estimate and the cost assessment and program evaluation independent cost estimate.
- Cost analysis to support the Development Request for Proposal (RFP) Release decision point, which will vary depending on the program and information needed to support the decision to release the RFP.
- Cost analysis conducted by the service cost agency.
- Cost position established by the DOD component that is derived from the DOD Component Cost Estimate and the program office estimate per DOD component policy, and signed by the DOD component Deputy Assistant Secretary for Cost and Economics.
- System-specific criteria which normally track progress in important technical, schedule, or management risk areas.
- For systems that use the electromagnetic spectrum while operating in the United States and its possessions, a certification by the National Telecommunications and Information Administration (NTIA) that a candidate system conforms to the spectrum allocation scheme of the United States and its possessions.
- Certifies that the DOD component will fully fund the program to the DOD Component Cost Position (CCP) in the current Future Years Defense Program (FYDP), or will commit to full funding of the CCP during the preparation of the next FYDP, with identification of specific offsets to address any funding shortfalls that may exist in the current FYDP.
- Program description identifying contract-deliverable military equipment, non-military equipment, and other deliverable items and plans to ensure that all deliverable equipment requiring capitalization is serially identified and valued.
- Cost estimate covering the full life-cycle cost of a program, including all costs of development, procurement, military construction, and operations and support, without regard to funding source or management control, prepared or approved by the Director of Cost Assessment and Program Evaluation.
- Analysis of a program’s supportability planning that assesses the program office’s product support strategy and how this strategy leads to successfully operating a system at an affordable cost.
- Analysis that the skills and knowledge, processes, facilities, and equipment necessary to design, develop, manufacture, repair, and support a program are available and affordable.
- Documents a program’s information-related needs in support of the operational and functional capabilities that the program either delivers or contributes.
- Documents a program’s strategy to identify and manage the full spectrum of intellectual property (IP) and related issues throughout the program’s life cycle, describing, at a minimum, how the program will assess program needs for, and acquire competitively whenever possible, the IP deliverables and associated license rights necessary for competitive and affordable acquisition and sustainment over the product life cycle.
- Documents the program manager’s and product support manager’s plan for implementing item unique identification (IUID) as an integral activity within MIL-STD-130N item identification processes to identify and track applicable major end items and configuration-controlled items.
- For programs that are dependent on intelligence mission data, defines specific intelligence mission data requirements for a program and becomes more detailed as the system progresses towards initial operational capability.
- A living document describing a program manager’s approach and resources necessary to develop and integrate sustainment requirements into the system’s design, development, testing and evaluation, fielding, and operations.
- Documents the quantity of the product needed to provide production-representative test articles for operational test and evaluation and efficient ramp-up to full production.
- Provides out-year projections of active-duty and reserve end-strength, civilian full-time equivalents, and contractor support work-years for a major defense acquisition program.
- Provides information on, among other things, whether there are commercial off-the-shelf products that meet the defined requirements in the business case, could be modified to meet requirements, or could meet requirements when it is necessary to modify those requirements to a reasonable extent.
- Describes the operational tasks, events, durations, frequency, and environment in which the materiel solution is expected to perform each mission and each phase of the mission.
- Assessment of debris generation risk during launch, on-orbit operations, and end-of-life disposal, and compliance with the U.S. Government Orbital Debris Mitigation Standard Practices.
- Documents the comprehensive approach to system security engineering analysis and the associated results to ensure that programs adequately protect their technology, components, and information throughout the acquisition process during design, development, delivery, and sustainment.
- Describes the strategy for integrating environment, safety, and occupational health considerations into the systems engineering process, how they are managed, and how they are integrated with human systems integration efforts.
- For a program that will replace another program, documents the budget estimates required to sustain the existing system until the new system assumes responsibility; the milestone schedule for developing and fielding the new system; and an analysis of the ability of the existing system to maintain mission capability against relevant threats.
- Communicates government requirements to prospective contractors and solicits proposals; defines the government’s expectations in terms of the performance and functional specifications, program planning, program process, risks, and assumptions; and reflects the program’s plans articulated in the draft Acquisition Strategy and other draft, key planning documents such as the Systems Engineering Plan, Program Protection Plan, Test and Evaluation Master Plan, and Life-Cycle Sustainment Plan.
- Documents stretch goals for costs that DOD expects its leaders to do their best to reach, which are based on real opportunities but challenging to execute.
- Documents the program manager’s plan for the use of small business innovation research and small business technology transfer program technologies and the associated planned funding profile.
- Spectrum Supportability Risk Assessment: For spectrum-dependent systems, identifies and mitigates regulatory, technical, and operational spectrum supportability risks.
- Documents key technical risks, processes, resources, metrics, systems engineering products, and completed and scheduled system engineering activities to help the program manager develop, communicate, and manage the overall systems engineering approach that guides all technical activities of a program.
- Addresses projected adversary capabilities at system initial operating capability (IOC) and IOC plus 10 years; should be system specific, to the degree that the system definition is available at the time the assessment is being prepared.
- Assesses the maturity of, and the risk associated with, critical technologies, to assist in the determination of whether the technologies of a program have acceptable levels of risk, based in part on the degree to which they have been demonstrated, and to support risk-mitigation plans.
- Estimate of DOD’s potential liability if it terminates a contract for a program; the estimate must include how such termination liability is likely to increase or decrease over the period of performance.
- Documents the overall structure and objectives of the test and evaluation program. It provides a framework within which to generate detailed test and evaluation plans and documents schedule and resource implications associated with the test and evaluation program.
Michael J. Sullivan, (202) 512-4841 or sullivanm@gao.gov. In addition to the contact named above, Cheryl K. Andrew, Assistant Director; Don M. Springman, Analyst-in-Charge; Julie C. Hadley; Matthew B. Lea; Brian T. Smith; Kristine R. Hassinger; Kenneth E. Patton; Laura S. Greifner; Oziel A. Trevino; and Nathaniel O. Vaught made key contributions to this report.
DOD has long sought to improve the efficiency of its weapon system acquisition process, including the time and effort needed to complete the milestone decision process. The National Defense Authorization Act for Fiscal Year 2014 mandated GAO to review DOD's weapon system acquisition process. This report examines (1) the effort and value involved in the preparation for a milestone decision; (2) factors that influence the time needed to complete the milestone decision process; and (3) alternative processes used by some DOD programs and leading commercial firms. To perform this work, GAO examined the levels of review and information requirements that are part of DOD's process. GAO surveyed 24 program managers and 40 other DOD officials on the value of milestone documentation and the time needed to complete it. For 15 program offices, GAO gathered data on the time to complete the entire milestone decision process. GAO discussed with DOD officials the factors that lead to inefficiencies. GAO also examined practices used by some classified DOD programs and five commercial firms generally recognized as leaders in product development. The acquisition programs GAO surveyed spent, on average, over 2 years completing numerous information requirements for their most recent milestone decision, yet acquisition officials considered only about half of the requirements to be of high value. The requirements, in total, averaged 5,600 staff days to document. DOD's review process is a key factor that influences the time needed to complete information requirements. The process in some instances can include up to 56 organizations at 8 levels and accounts for about half of the time needed to complete information requirements. Most program managers felt that these reviews added high value to only 10 percent of the documents.
DOD's F-16 aircraft program, some classified programs, and five commercial firms GAO visited use streamlined processes with fewer documents and reviews and offer alternatives to the traditional DOD process. Establishing an efficient process for documentation and oversight is a key internal control to avoid wasteful spending. The challenge is to find the right balance between effective oversight and the competing demands on programs. DOD, however, has not yet identified ways to achieve the right balance by minimizing the time spent on information requirements and reviews that contribute to its inefficient milestone decision process. GAO recommends that DOD identify and potentially eliminate reviews and information requirements that are no longer needed and select programs to pilot more streamlined approaches to provide only the most essential information to decision makers. DOD concurred with both recommendations.
The WIC program provides eligible women, infants, and children with nutritious foods to supplement their diets, as well as nutrition education and referrals to health care. FNS administers the program through a federal/state partnership in which FNS makes funds available in the form of grants to WIC agencies. FNS establishes regulations for the program, including the cost containment aspects, and provides guidance to the agencies. To measure overall compliance with program requirements, FNS regional offices conduct management evaluations at state-level WIC and local agencies. Each WIC agency is responsible for developing guidelines to ensure that WIC benefits are effectively delivered to eligible participants. WIC grants cover the costs of food, nutrition services, and administration. Food grants are allocated to the WIC agencies through a formula that is based on the number of individuals in each state who are potentially eligible for WIC benefits. Nutrition services and administration grants are allocated to the agencies through a formula that considers factors such as an agency’s number of projected program participants and a salary differential for local government employees. In fiscal year 2001, FNS provided $4.1 billion in grants to WIC agencies to fund all benefits and services, of which about $3.0 billion was for supplemental food, including formula. On average, the program had about 7.3 million participants each month, including 1.9 million infants. WIC is a discretionary grant program for which the Congress authorizes a specific amount of funds each year, not an entitlement program. Therefore, eligible individuals can enroll in the program only to the extent that funds are available. FNS estimated that about 47 percent of all babies born in the United States were served by WIC in fiscal year 2001. FNS also estimated that about 19 percent of all potentially eligible women, infants, and children were not participating in the program.
At the state level, the program is administered through 88 state-level WIC agencies and a network of over 2,000 local agencies. Eligible participants include pregnant, postpartum, and breastfeeding women, infants, and children up to age five who meet income guidelines and a state residency requirement and are individually determined to be at “nutritional risk” by a health professional. The two major types of nutritional risk are (1) medical-based risks, such as anemic or underweight infants, maternal age, history of pregnancy complications, or poor pregnancy outcomes, and (2) diet-based risks, such as an inadequate diet pattern. Among infants, those with medical-based nutritional risk conditions are given the highest priority for receiving WIC benefits; infants with dietary risk are a lower priority than medically at-risk infants. For the first 6 months of life, breast milk or infant formula is the primary food in a baby’s diet. WIC promotes breastfeeding as the best choice for meeting an infant’s nutritional needs, but it also provides infant formula to those who prefer to use it exclusively or as a supplement to breastfeeding. About half of all infant formula sold in the country is purchased through the WIC program. As defined in the Federal Food, Drug, and Cosmetic Act, infant formula means a food that “purports to be or is represented for special dietary use solely as a food for infants by reason of its simulation of human milk or its suitability as a complete or partial substitute for human milk.” Commercially available infant formulas can be described in two broad categories: standard and nonstandard. (See fig. 1.) Standard infant formula includes milk-based and soy-based infant formulas that meet the nutritional needs of most full-term healthy infants less than one year old. The Food and Drug Administration strictly regulates the content and quality of standard infant formula for all brands.
Therefore, all brands of standard formula are nutritionally identical. In this report, we use two categories of standard infant formula—contract and noncontract. Contract standard formula is any standard infant formula that is provided to WIC participants for which a WIC agency receives a rebate based on its contractual arrangement with an infant formula manufacturer. Noncontract standard formula is any standard infant formula that is not eligible for a rebate from an infant formula manufacturer. Nonstandard formula, as we use the term, is any formula that is not contract standard or noncontract standard and that is designed to meet various medical and dietary needs of infants that standard formulas will not satisfy. This includes (1) “exempt” formulas, which are defined in the Federal Food, Drug, and Cosmetic Act as any infant formula which is represented and labeled for use by an infant who has an inborn error of metabolism or a low birth weight, or who otherwise has an unusual medical or dietary problem, and (2) other specialized but nonexempt infant formulas classified as WIC-eligible medical foods, which are specifically formulated to provide nutritional support for infants with a diagnosed medical condition when the use of conventional foods is precluded, restricted, or inadequate. Since 1989, WIC agencies have been required by law to implement measures to contain the cost of infant formula. In most instances, this means a state-level agency agrees, through a competitive contract awarded to one manufacturer, to provide and deliver one brand of standard infant formula to its participants through the existing retail outlet system and, in return, receives money back, called a rebate, from the manufacturer for each can of standard infant formula that is purchased by WIC participants at retail stores.
Rebates are not received for noncontract standard formula or nonstandard infant formula, which, as reported by the WIC agencies responding to our survey, are not covered by rebate contracts. Most WIC infant formula participants receive vouchers that they use to purchase the contract standard infant formula at authorized retailers. The WIC agency then reimburses the retailer for the full retail price of the infant formula. The WIC agency or its financial institution then obtains a reimbursement from the manufacturer for the rebate agreed to in the contract. As a result, the actual cost of infant formula to the WIC program equals the retail cost minus the amount of the manufacturer’s rebate. FNS policy requires that during the grant year, any savings from cost containment are to be used to provide food benefits to additional WIC participants. Even though a state-level WIC agency contracts to provide only one brand of standard infant formula, federal WIC regulations permit the issuance of noncontract standard formula provided medical documentation is obtained or a religious reason is offered to justify its use for individual participants. Medical documentation must be provided by a licensed health care professional authorized to write medical prescriptions under state law. According to regulations, there is just one exception to the medical documentation requirement: noncontract standard brand infant formulas may be issued without medical documentation to accommodate religious eating patterns, such as the Judaic requirement for kosher infant formulas. However, between February 2000 and February 2002, the three infant formula manufacturers that WIC agencies used for their formula rebate contracting (Mead Johnson, Ross, and Carnation) each provided a soy-based, kosher infant formula, which minimized the need for agencies to provide noncontract standard formulas to accommodate Jewish infants’ religious eating patterns.
Because WIC agencies pay the retail price but do not receive rebates for noncontract standard formula, an increase in the use of this formula will increase a WIC agency’s total net payments for infant formula. Table 1 shows an example of the effect rebates had on the net cost of contract and noncontract standard formula in the state of Washington in April 2002. As table 1 indicates, even though the retail cost of contract standard formula and noncontract standard formula may be similar, rebates equal to 80 percent or more of the average retail cost of contract formula can lower its net cost for the WIC agency to 20 percent or less of the cost of noncontract standard formula. The 51 WIC agencies we surveyed all set restrictions designed to limit the amount of noncontract standard infant formula provided under WIC. (See table 2.) The approach used by 48 WIC agencies in February 2002 was to adopt the restrictions contained in federal regulation, which limit the use of noncontract standard formula to certain specific situations, such as if medically prescribed or if needed for religious reasons. Seven of the 48 agencies also set quantitative limits on the amount of noncontract standard formula allowed. Three other agencies were even more restrictive and prohibited noncontract standard formula use entirely. The 7 agencies that set quantitative limits on the use of noncontract standard formula all differed to some degree in their approach, with the maximum limit for noncontract formula usually set at 2 to 4 percent of all infant formula or all standard infant formula issued. (See table 3.) For example, the Oregon agency has two maximum usage rates for local agencies: 4 percent for noncontract standard cow’s milk-based formula and 8 percent for noncontract standard soy-based formula; and the Louisiana agency requires that 96 percent of all standard formula be contract formula, which, in effect, sets the limit for noncontract standard formula at 4 percent.
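The net-cost arithmetic described above can be sketched in a few lines of code. This is an illustration only: the dollar amounts are hypothetical, chosen to mirror the 80 percent rebate scenario rather than the actual Washington values in table 1.

```python
# Hedged sketch of WIC net formula cost: net cost = retail price - rebate.
# All dollar amounts are hypothetical; only contract standard formula
# earns a manufacturer rebate.

def net_cost_per_can(retail_price, rebate=0.0):
    """Net cost to the WIC agency for one can of formula."""
    return retail_price - rebate

# Assume both formulas retail for $10.00 per can and the contract
# rebate equals 80 percent of the retail price.
retail = 10.00
rebate = 0.80 * retail

contract_net = net_cost_per_can(retail, rebate)  # rebated contract formula
noncontract_net = net_cost_per_can(retail)       # no rebate received

print(contract_net)                    # 2.0
print(noncontract_net)                 # 10.0
print(contract_net / noncontract_net)  # 0.2 -> 20 percent of the noncontract cost
```

The key point the sketch captures is that the rebate, not the shelf price, drives the cost difference between the two formula types.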
The Mississippi, New Mexico, Tennessee, and Virginia WIC agencies all had policies prohibiting the use of noncontract standard formula and did not issue any such formula in February 2002. New Mexico and Tennessee had such policies in place since before February 2000, while Virginia’s policy took effect in July 2001. In addition to these agencies, Alabama and Pennsylvania both implemented policies prohibiting the issuance of noncontract standard formula in March 2002, although Alabama allowed WIC infants already receiving a noncontract standard formula to continue doing so and Pennsylvania allowed existing vouchers for noncontract standard formula to be used. The directors of the Alabama and Pennsylvania WIC agencies told us that the overall implementation of the prohibition on noncontract standard formula had gone smoothly and that there were few complaints from WIC participants. To obtain perspective from other states about a policy that would prohibit the use of noncontract standard formula altogether, we asked officials of the 4 WIC agencies providing formula to the largest number of infants (California, Florida, New York, and Texas) whether they had considered instituting a policy of prohibiting the issuance of noncontract standard formula without exception, and what the overall effect of such a policy would be on WIC participants in their states. Three (California, Florida, and Texas) responded that their agencies had considered prohibiting the issuance of noncontract standard formula but had decided not to do so.
Generally, the Texas and Florida agencies stated that if they prohibited the use of noncontract standard formula, the likely effects on infants receiving noncontract standard formula would be that (1) most parents of these infants would ask their doctors to prescribe nonstandard formulas, which could cost the agency more than the noncontract standard formula; (2) some parents would remove their infants from the WIC program; and (3) few or no infants would be switched to the contract standard formula. California WIC agency officials said that projecting the impact on WIC families of prohibiting noncontract standard formula is speculative, but that some families would probably switch to a contract standard formula, others might drop out of the program, and some participants might ask their doctor to put the infant on a more expensive nonstandard formula. The New York WIC agency had not considered a policy of prohibiting the use of noncontract standard formula. However, an agency official believed such a prohibition would cause a majority of users of noncontract standard formula to either switch to contract standard formula or seek another party to pay for noncontract standard formula, such as the U.S. Department of Agriculture’s Commodity Supplemental Food Program, Medicaid, or food banks. The official did not believe that prohibiting noncontract standard formula would lead to an increase in requests for nonstandard formula. Nationally, 3.3 percent of WIC infants using formula received noncontract standard formula in February 2002, according to usage data reported by 45 WIC agencies that had these data. By comparison, 90.3 percent of all infants received contract standard formula, while 6.4 percent received nonstandard formulas, which are special formulas for infants who cannot use standard formula. (See fig. 2.) There was substantial variation in these percentages from agency to agency.
The 3 agencies with the most restrictive policies that prohibited the use of noncontract standard formula reported they did not use any of this formula. Seven agencies that established quantitative limits on noncontract standard formula use had mixed success in staying within their limits. Four of the 7 agencies that set the highest limits stayed within their limits while the 3 agencies with the lowest established limits exceeded their limits. Also, the 7 agencies, on average, issued a somewhat greater portion of noncontract standard formula than did the remaining 35 agencies that only restricted its use to specific situations. Officials at selected WIC agencies reported that the use of noncontract standard formula for religious reasons was very limited. The percentage of WIC infants receiving noncontract standard formula in February 2002 ranged from a low of zero to a high of 10.5 percent, as reported by the 45 agencies that provided this information. (See table 4.) Four agencies (New Mexico, Tennessee, Virginia, and the Navajo Nation) reported issuing no noncontract standard formula in February 2002. Three other agencies reported rates of less than 1 percent: Arkansas, Maryland, and Georgia reported rates of 0.04, 0.6, and 0.7 percent, respectively. At the other end of the spectrum, Utah issued vouchers for noncontract standard formula to 8.5 percent of all WIC infants, Puerto Rico to 8.9 percent, and Wyoming to 10.5 percent. However, Wyoming and Utah are 2 of the smaller agencies in terms of number of WIC infants served, so despite the high percentage figure, the number of infants issued vouchers for noncontract standard formula by these agencies is relatively small compared to other larger WIC agencies. The variation in the percentage of infants who received nonstandard formula was even greater than the percentage that received noncontract standard formula. 
The use of nonstandard formula ranged from 0.2 percent of all infants receiving WIC formula in Nevada and 0.9 percent in the District of Columbia to 27.7 percent in Puerto Rico and 19.9 percent in Ohio. Appendix II shows the number of infants using each type of formula, by agency. Our survey was designed to gather basic information about noncontract standard formula usage in the absence of any available information on this issue. FNS is not routinely collecting from WIC agencies the data that would allow it to monitor the effectiveness of these agencies in restricting the use of noncontract standard formula. To provide some perspective on why there was so much variation in noncontract standard formula usage rates, we contacted certain agencies, especially those with the lowest percentage usage and those with the largest programs. For agencies with the lowest percentage of infants receiving noncontract standard formula, the restrictiveness of the agency policy with regard to noncontract formula is clearly a factor. Three of the 4 agencies reporting zero usage (New Mexico, Tennessee, and Virginia) had policies in place prohibiting the use of noncontract standard formula with no exceptions. The 4 largest of the 48 agencies that allowed the use of noncontract standard formula in specific situations (California, Florida, New York, and Texas) varied considerably in the percentage of infants who received this formula. Two of them, Texas and New York, issued vouchers for noncontract standard formula to a smaller percentage of infants than the average of 3.3 percent for all 45 agencies. Texas’s percentage was 1.4 percent, while New York’s was 2.3 percent. Texas and New York pointed to policies and practices they regarded as restrictive as the reason for their relatively low percentages. 
Officials at the Texas agency said their practice for issuing vouchers for noncontract standard formula was restrictive enough that they were a little concerned it may have shifted some infants into nonstandard formula, which is more expensive than noncontract standard formula. However, Texas’s rate of 3.3 percent for nonstandard formula was also lower than the average reported by all agencies (6.4 percent). A New York agency official said the agency restricts the approval of certain noncontract standard formulas and that is tantamount to prohibiting the issuance of those particular formulas. California and Florida, by contrast, reported noncontract standard rates that were above the national average of 3.3 percent: California’s rate was 4.6 percent, while Florida’s was 5.9 percent. Our discussions with agency officials about the possible reasons for their relatively high rates showed that the factors contributing to such rates might vary considerably from agency to agency. In California, for example, agency officials said they grapple on a continuing basis with responding to parental requests for noncontract standard formula because the infant received noncontract standard formula in the hospital at birth. California officials have drafted a new policy, which they designed to limit the use of noncontract standard formula. Florida officials said the use of noncontract standard formula in their state, which had historically been less than 3 percent, increased when a different manufacturer became the contract supplier. Florida’s experience is discussed in more detail later in this report. The 3 agencies that set a low quantitative limit (2 or 3 percent of all formula used) on the use of noncontract standard formula exceeded that limit in February 2002. However, the 4 agencies that set a higher limit (4 percent) stayed below that limit. 
On average, the 7 agencies with policies setting quantitative limits actually issued a somewhat greater portion of noncontract standard formula (4.0 percent of all formula issued) than did the 35 WIC agencies that also granted exceptions but did not set quantitative limits (3.3 percent). (See table 5.) It does not appear that a substantial amount of the noncontract standard formula is issued for religious reasons. Religious concerns about contract standard formula mainly involved the brands manufactured by a company whose formulas contained ingredients or involved manufacturing processes that did not meet some groups’ requirements. We contacted all five agencies that had contracts with the company as of February 2002, and officials from four of the five said they issued small amounts of noncontract standard formula for religious reasons. For example, in New Jersey, where the rate of noncontract formula was 1.4 percent, an agency official said all of the noncontract standard formula was issued for Orthodox Jewish infants whose parents do not find the soy-based, kosher contract standard formula provided by the New Jersey agency to be manufactured to strict enough standards to be acceptable. The agency permits the issuance of noncontract standard soy-based, kosher formula, which is made by other manufacturers and is acceptable to Orthodox Jewish parents. The Kentucky WIC agency also issued a small amount of noncontract standard formula to meet the kosher requirements of some Jewish parents. Similarly, officials from the Florida and North Dakota agencies said very few Muslim participants received noncontract standard formula because they find a pork enzyme used in the manufacture of the milk-based contract standard formula to be unacceptable and are unable or not required to use the soy-based standard contract formula, which does not contain the pork enzyme.
We contacted 5 other agencies (Alabama, New Mexico, New York, Pennsylvania, and Tennessee) that had contracts with other manufacturers, and none of them reported issuing any noncontract standard formula for religious reasons. We found no research that directly addressed the question of whether normal, healthy infants are adversely affected by switching to a different standard formula brand, and no research that directly addressed whether infants exhibit a strong preference for the first standard formula they use. The studies we identified addressed such things as whether stool characteristics changed as a result of changing formula, but they did not note any adverse effects from making the switch. In the past, FNS has also studied the issue of switching between standard formulas and found no scientific evidence to support the need for a gradual rather than immediate switch. However, some WIC agencies report that when a switch in contract standard formula occurs, use of noncontract standard formula rises. Thirty-two of the WIC agencies we surveyed had entered into new contracts resulting in a change of infant formula manufacturer and of contract standard formula brand, and of these, 7 (22 percent) reported that an increase in noncontract standard formula use occurred after changing contract standard formula brands. We identified two industry-sponsored studies that addressed how infants are affected by switching between brands of standard formula. These studies were “Formula Tolerance in Postbreastfed and Exclusively Formula-fed Infants” and “Effect of Infant Formula on Stool Characteristics of Young Infants.” Two of the 51 agencies also informed us of these studies. The two studies did not disclose any adverse effect for normal, healthy infants from switching to a different brand of standard formula but did note differences in such things as stool characteristics from switching to a different formula brand.
The first article, supported by Ross Products Division, attempted to measure infant tolerance of two standard milk-based formulas, Ross’s Similac with iron powder and Mead Johnson’s Enfamil with iron powder. The study included healthy, full-term infants who were either initially breastfed (one group) or initially fed Similac (the other group). In both groups, the results of tolerance measures, such as the volume of formula intake, weight gain, and incidence of spit-up or vomiting, did not differ between formulas. However, differences were observed in stool characteristics, such as color, firmness, and frequency. The study concluded that one brand of formula produced stool characteristics closer to those of infants fed breast milk, and it made no mention of stool differences being adverse to an infant’s health. The second article, supported by Mead Johnson Nutritionals, investigated the relationship among four types of Mead Johnson formulas (Enfamil, Enfamil with Iron, ProSobee, and Nutramigen) consumed and the stooling characteristics and gastrointestinal symptoms of young infants. Among the formula groups tested, there were variations in stool frequency, consistency, and color. However, no significant differences were noted in the severity of spitting, gas, and crying among the four formula groups. The study concluded that although true hypersensitivity to cow’s milk or soy protein may occur, it is uncommon, and infants are often mislabeled as being “allergic” to a particular formula when their symptoms, such as loose stools, gas, spitting, and crying, probably fall within the normal range of variability observed with all infant formulas. The study stressed the importance of parental education in the interpretation of stooling patterns and gastrointestinal symptoms during the administration of various infant formulas, and it made no mention of differences in stool characteristics being adverse to an infant’s health.
FNS headquarters officials also were not aware of any research concluding that infants show a strong preference for the first standard formula used. However, FNS pointed out that because WIC state agencies typically renegotiate rebate contracts every few years, many of the infants they serve are required to switch from receiving one brand of standard infant formula to another. On occasion, parents and caretakers have complained that their infants experienced problems tolerating the new brand of formula and have requested a noncontract standard substitute. Because this situation has raised concern within the WIC community, in 1995 FNS explored whether scientific evidence exists to support the suggestion that a change of standard formula should be gradually introduced into an infant’s diet. FNS wanted to ascertain whether a specific amount of time was needed to wean an infant from one formula to another and whether a particular proportion of old-to-new formula was recommended. In its research of this issue, FNS contacted the American Academy of Pediatrics and the Infant Formula Council to solicit their advice and recommendations on the proper methods to use when introducing an infant to a change in formula.
FNS reported that the American Academy of Pediatrics stated “scientific literature does not reveal any compelling evidence for adopting a guideline suggesting the delayed introduction of infant formula products for well babies.” Although the Infant Formula Council did not directly reply to FNS’s inquiry, FNS reported that one of the council’s members, Ross Products Division of Abbott Laboratories, sent a letter stating that its staff physicians and researchers also concluded “no scientific evidence or formal guidelines exist concerning the introduction of a formula change.” As a result of its inquiry, FNS sent a letter in June 1995 to FNS Regional Directors stating that FNS was “unaware of a medical basis for recommending any particular procedures or methods which should be routinely followed when a well WIC infant is switched from one standard infant formula to another.” Also, in August 2001, in responding to Senator Leahy regarding WIC’s issuance of noncontract standard formula, FNS stated that almost all infants, except those who are exclusively breastfed, can be issued contract standard infant formula without compromising an infant’s nutritional needs and that noncontract standard formula should only be issued in exceptional situations. Considering the possibility that changing infant formula manufacturers might lead to an increase in the use of noncontract standard formula, we asked the WIC agencies we surveyed to consider how their most recent change to a different infant formula manufacturer affected their use of noncontract standard infant formula. Most agencies that had switched between brands of standard formula for their rebate contract indicated that the change had not been accompanied by an increase in noncontract standard formula use.
In all, 32 of the WIC agencies we surveyed had made such a change, and 25 of them (78 percent) said the use of noncontract standard formula had not increased after their most recent contract change to a different infant formula manufacturer. We did not follow up with all of the 7 other agencies that reported an increase, but 1 of the 7 (Florida) was among the largest agencies where we focused part of our follow-up work. A state agency official said that use of noncontract formula had traditionally been less than 3 percent of all formula issued until February 1999, when the Florida WIC agency switched its contract to a new infant formula manufacturer. The official cited several reasons for the increase in noncontract standard formula use after changing contractors. For example, some hospitals were not using the new contractor’s products, so infants not exclusively breastfed were started out on a noncontract formula rather than a contract formula. In addition, the new contractor did not initially market its products to health care professionals in Florida. However, Florida’s use of noncontract standard formula has declined from 10.1 percent of all infants issued WIC formula in February 2000 to 8.6 percent in February 2001 and 5.9 percent in February 2002. In October 2002, the Florida agency official informed us that there had been a steady decline in requests for noncontract standard formulas since the new contractor deployed a medical marketing team in Florida. He said the team had good success in some areas in gaining physician acceptance and in persuading hospitals to provide their products in nurseries to newborns and in pediatric units to infants who may participate in the WIC program, although there were still some large hospitals that did not offer the new contractor’s formulas. 
Using February 2002 data, we estimated that the use of noncontract standard infant formula cost the WIC program $50.9 million annually in lost rebates, an amount equal to about 3.7 percent of the rebates actually received. This calculation assumes all infants using noncontract standard formula would instead use contract standard formula. Each WIC infant using noncontract standard formula instead of contract standard formula results in the agency foregoing the rebate from the infant formula manufacturer. For February 2002, the sum of infant formula rebates foregone by the 47 WIC agencies that provided data was an estimated $4.25 million. Assuming that February’s total is representative of months throughout the year, the annual total is an estimated $50.9 million. Assuming the retail price of contract standard and noncontract standard infant formula is the same, the foregone rebate is also the net cost to the WIC agency. Amounts foregone for February 2002 ranged from zero at the 4 WIC agencies that reported issuing no noncontract standard formula to $781,370 for California, the largest WIC agency. (See appendix III for an estimate of rebates foregone in February 2002 by each of 47 WIC agencies; see appendix I for a description of the method we used to estimate the amount of rebate dollars lost.) Six WIC agencies—California, Florida, New York, Pennsylvania, Puerto Rico, and Texas—accounted for over half of the estimated infant formula rebates lost in 2002. All were among the 9 largest agencies in terms of the number of infants provided infant formula. These agencies, however, did not necessarily have above average percentages of infants receiving noncontract standard formula. For example, as a percentage of all WIC infants issued formula, Texas issued noncontract standard formula to only 1.4 percent of infants and New York to 2.3 percent of infants in February 2002. 
Nevertheless, the sheer size of their programs meant that even a below-average percentage of infants issued noncontract standard formula could result in a substantial amount of rebates being foregone. Six WIC state agencies—Alabama, Mississippi, New Mexico, Pennsylvania, Tennessee, and Virginia—have implemented policies prohibiting the use of noncontract standard formula entirely. Some state agencies may have medical, dietary, or religious reasons for not entirely prohibiting the use of noncontract standard formula. However, an opportunity exists for agencies with higher-than-average usage rates to lower their use of noncontract standard formula, thereby increasing rebates. If the 19 agencies with higher-than-average noncontract standard use had been able to lower their usage rates to 3.3 percent (the average for 45 WIC agencies in 2002), rebates could have been increased by an estimated $13.8 million in 2002 (about 1 percent of annual rebate savings). These rebates could have been used to provide additional program benefits to women, infants, and children. (See appendix IV for an estimate of rebates foregone by each of 19 WIC agencies due to noncontract standard formula use in excess of 3.3 percent of all formula issued in February 2002; see appendix I for a description of the method we used to estimate the amount of these rebate dollars foregone.) Knowing the reasons for the widely varying usage rates among the WIC agencies for nonstandard infant formula could also provide an opportunity to lower the usage rate of this higher-cost formula and result in cost savings. FNS is not routinely collecting from WIC agencies the data that would allow it to monitor the effectiveness of WIC agencies in restricting the use of nonstandard infant formula. As shown in table 4, the usage rate reported by the 45 WIC agencies for nonstandard infant formula varied significantly.
We did not examine the cause of this variation because our study focused on the use and cost of noncontract standard formula. However, the usage rate reported for nonstandard formula (6.4 percent) is nearly double that of noncontract standard formula, and nonstandard formula can be, on average, twice as expensive as noncontract standard formula. For example, nonstandard formula issued in Montgomery County, Ohio, in December 2001 cost, on average, $19.00 per can compared with $9.48 per can for noncontract standard formula. If this cost differential exists nationally, agencies may be spending nearly four times as much on nonstandard formula as they are on noncontract standard formula. Potential topics on which to focus future studies of cost savings opportunities in the WIC program may thus include examining why nonstandard formula use varied so widely among WIC agencies and what policies and practices were used by agencies that kept their use of nonstandard formula at below-average levels. Federal law requires WIC state agencies to contain the cost of purchasing infant formula. In fiscal year 2001, FNS received $1.4 billion in rebates from the use of contract standard formula by infants participating in the WIC program. The $1.4 billion permitted FNS and the WIC agencies to provide WIC benefits to about 2.0 million additional participants. In February 2002, we found that 3.3 percent of infants received noncontract standard formula and 6.4 percent received nonstandard infant formulas, for which there were no rebates. FNS has stated that almost all healthy infants, except those who are exclusively breastfed, can be issued contract standard infant formula without compromising an infant’s nutritional needs and that noncontract standard formula should only be issued in exceptional situations. Six state-level WIC agencies that we contacted have found it feasible to prohibit noncontract standard formula entirely.
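Two of the dollar comparisons above can be checked with back-of-the-envelope arithmetic. The sketch below simply recomputes the report's figures from numbers already cited (monthly foregone rebates of roughly $4.25 million, national usage rates of 6.4 and 3.3 percent, and the Montgomery County per-can prices); it is a rough check, not an exact reproduction of the report's estimation method.

```python
# Back-of-the-envelope checks of figures cited in this report. Small gaps
# from the published numbers (e.g., $50.9 million) reflect rounding in the
# monthly figure used here.

# 1. Annualized rebates foregone, assuming February 2002 is representative.
monthly_foregone = 4.25e6              # estimated rebates foregone, Feb 2002 (47 agencies)
annual_foregone = monthly_foregone * 12
print(annual_foregone)                 # 51000000.0 -> roughly $51 million

# 2. Relative spending on nonstandard vs. noncontract standard formula:
#    (usage share x cost per can) for each formula type.
nonstandard_spend = 0.064 * 19.00      # 6.4 percent of infants x $19.00 per can
noncontract_spend = 0.033 * 9.48       # 3.3 percent of infants x $9.48 per can
ratio = nonstandard_spend / noncontract_spend
print(round(ratio, 1))                 # 3.9 -> "nearly four times as much"
```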
FNS is not routinely collecting from WIC agencies the data that would allow it to monitor the effectiveness of WIC agencies in restricting the use of noncontract standard or nonstandard infant formula. The wide variation among WIC agencies in the percentage of noncontract standard formula used suggests that there is potential for the WIC agencies with above-average usage to reduce their use of noncontract standard formula and thereby increase rebates received from infant formula manufacturers. For example, if the 19 WIC state agencies with above-average usage had been able to reduce their noncontract standard usage to the average of 3.3 percent reported in February 2002, infant formula rebates would have been an estimated $13.8 million greater in 2002, which would have allowed the program to serve additional participants. Beyond the issue of noncontract standard formula use, we observed wide variations in the use of nonstandard formulas—those special formulas for infants whose health or dietary needs cannot be met through standard formulas. The usage rates reported by WIC agencies are nearly twice as great and vary even more for nonstandard formulas than for noncontract standard formula, and nonstandard formulas can be much more expensive. To effectively monitor the economical purchase of infant formula, we recommend the Secretary of Agriculture direct the Administrator of the Food and Nutrition Service to (1) require that WIC agencies develop and regularly submit data on their use of noncontract standard infant formula, and (2) work with WIC agencies with above-average usage rates of noncontract standard formula to implement the best policies and practices for reducing the level of use.
Additionally, the Administrator should (1) require that WIC agencies develop and regularly submit data on their use of nonstandard formula, and (2) work with WIC agencies with above-average use of nonstandard formula to implement the best policies and practices for reducing nonstandard formula use. We provided a draft of this report to the Department of Agriculture. FNS provided a written response, which is included as appendix V of this report. In addition, FNS provided technical comments, which we incorporated where appropriate. In its letter, FNS agreed with the recommendations in the report and stated that it had recently started collecting data that will facilitate the implementation of the recommendations. However, FNS expressed concern that GAO’s survey instrument may have been misinterpreted by WIC state agencies because we used terms to describe types of infant formula that are different from FNS’s terms. FNS believes this difference in terminology, and in particular our use of the term nonstandard formula, may have resulted in WIC state agencies’ overreporting the volume of nonrebated, nonstandard infant formula purchased by WIC participants. We used the term “nonstandard formula” in our report because we wanted to capture the different types of special formulas for which states did not receive rebates, and this term encompassed all the types of special formula not under contract that the WIC agencies used and reported to us in our infant formula survey. Our definition of nonstandard formula includes both the Food and Drug Administration exempt and the special nonexempt formulas that the WIC agencies provided, neither of which was covered by a rebate contract as reported by the states. We do not believe that the WIC agencies had difficulty interpreting our survey terms. We pretested our survey with officials in three states, which included a discussion of their understanding of the definitions we employed.
In addition, after our preliminary analysis of survey responses, we contacted officials in four WIC agencies with particularly high usage of nonstandard formula to verify the correctness of the data they had provided. In three of the four instances, state officials chose not to make any changes to the data. Although one of the agencies adjusted its nonstandard formula usage downward, the adjustment was not due to difficulty in interpreting our infant formula descriptions but rather occurred because agency officials had neglected to subtract exclusively breastfed infants from their reported data. Despite these efforts, it is possible that the amount of nonstandard formula use reported by some WIC agencies included the use of nonexempt infant formulas that should have been covered by the agencies’ infant formula rebate contracts. Whether such instances occurred cannot be determined from our survey data. However, if such instances did occur, as FNS believes, this only reinforces the importance of our recommendation that FNS effectively monitor the use of both noncontract standard and nonstandard formulas, including those that are categorized as nonexempt and exempt. Such monitoring would help to identify any nonstandard, nonexempt formulas manufactured by a WIC agency’s rebate contractor that should be covered by the agency’s rebate contract but are not. We are sending copies of this report to the Honorable Ann M. Veneman, Secretary of Agriculture; Roberto Salazar, FNS Administrator; appropriate congressional committees; and other interested parties. Please call me at (202) 512-7215 if you or your staffs have any questions about this report. Key contacts and staff acknowledgments for this report are listed in appendix VI. At the state level, the WIC program is administered through 88 state-level WIC agencies and a network of over 2,000 local agencies.
The 88 state-level WIC agencies, which received program funding in fiscal year 2001, include agencies in all 50 states, the District of Columbia, American Samoa, the Commonwealth of Puerto Rico, Guam, the U.S. Virgin Islands, and 33 Indian Tribal Organizations. We obtained most of the data used to address our report objectives from the responses to a survey on the use of infant formula we sent out in June 2002 to 51 WIC agencies (48 states, the District of Columbia, the Navajo Nation tribal organization, and Puerto Rico). These agencies collectively represented over 97 percent of the WIC infant participants in fiscal year 2001 and they primarily relied on the competitively bid rebate contracts with infant formula manufacturers to comply with federal cost containment requirements for infant formula. All 51 WIC agencies receiving our survey responded. However, some agencies were unable to answer every survey question due to the unavailability of some data. Of the 88 WIC agencies that received program funding in fiscal year 2001, we excluded 37 agencies from our survey. Seventeen were excluded because they were exempted from continuously operating a cost containment system for infant formula that is implemented in accordance with 7 CFR 246.16a, Infant Formula Cost Containment. Two WIC agencies, Mississippi and Vermont, were exempted because they did not use retail stores for distributing infant formula to their WIC participants. Mississippi uses a direct distribution delivery system under which participants pick up formula from storage facilities operated by the state or local agency. Vermont uses a home delivery system under which formula is delivered to the participant’s home. Fifteen Indian tribal organizations were exempted because they served 1,000 or fewer WIC participants.
We judgmentally excluded another 20 WIC agencies (Guam, the Virgin Islands, American Samoa, and 17 other Indian tribal organizations) from our survey because they served fewer infant participants in fiscal year 2001 than Wyoming, the smallest WIC state agency. Our survey was necessary because data on the use of contract standard, noncontract standard, and nonstandard infant formula by WIC agency were not available from FNS. In addition, some of the WIC agencies did not account for the number of infants receiving each type of formula. As a result, 3 of the 51 agencies we surveyed were unable to provide any data on the number of infants using each type of infant formula in February of 2000, 2001, or 2002. Another 9 agencies could provide only partial data. Of the agencies that provided data on the number of infants using each type of formula in each of the three years, some had to estimate the number of infants receiving each type of formula based on the number of cans of formula issued, and still other agencies had to make special analyses of computerized data that took up to two months to complete. We did not independently verify the accuracy of the information these agencies reported to us and we did not examine the effectiveness of their policies or practices. However, when we completed our analysis of agency data, we contacted several agencies that had very low or very high usage of either noncontract standard or nonstandard formula to verify the correctness of the data they had provided. Several of these agencies provided us with revised formula usage data in response to our inquiries. Our survey was designed to determine, for each responding WIC agency, the amount of infant formula use for infant participants based on the number of infants that were issued three categories of formula (contract standard, noncontract standard, or nonstandard) during the month of February for the years 2000, 2001, and 2002.
The number of infants receiving the three categories of formula was determined to be a reasonable proxy for the extent that infant formula was being used and it was a common measure that could be obtained from most WIC agencies. Also, we limited the infant use data collected and the amount of rebate dollars received to just one month for each year to minimize the work required by WIC agencies responding to our survey. We used the month of February because that was the most current month in 2002 we could use and still expect to receive information on the amount of rebate dollars received or billed for, considering the lag time typically required for WIC agencies to determine the amount of rebate dollars they will receive for a given month for contract standard formula purchased. In determining what research says about the extent that infants are adversely affected by switching to a different brand of standard infant formula intended for normal healthy babies, we performed an extensive literature search and we used a question in our survey of 51 WIC agencies to ask if they were aware of any studies or research that have addressed how switching standard formulas affects infants. Also, considering the possibility that changing infant formula manufacturers might lead to an increase in the use of noncontract standard formula, we used another survey question to ask each responding WIC agency to describe how changing its contract to the current infant formula manufacturer may have affected its infant participants’ use of noncontract standard infant formula. In addition to conducting the survey, we discussed WIC infant formula use with officials at WIC agencies and at FNS headquarters and regional offices, and we reviewed relevant regulations and research.
To determine whether WIC agencies restricted the use of noncontract standard formula, we primarily relied on the answers to a survey question that asked what the WIC agency’s current policy was on the use of noncontract standard formula, and we also obtained copies of the WIC agencies’ policies pertaining to the use of noncontract standard formula. To determine the extent that infants in the WIC program receive noncontract standard formula, we relied on a survey question that asked, during the month of February in each of the years 2000, 2001, and 2002, how many infants each WIC state agency provided with each of the three categories of formula. First, the WIC agencies reported all infant formula used for which rebates were received. In addition, they reported all infant formula used for which no rebates were received, and this no-rebate-received category was provided in two parts: noncontract standard formula and nonstandard formula. Therefore, we assumed all nonstandard formula reported to be noncontract formula, that is, not included in contracts for rebates from infant formula manufacturers. In estimating the dollar effect of using noncontract standard formula, we assumed that all infants that used noncontract standard formula could and would have used contract standard formula if noncontract standard formula had been prohibited from use. Also, assuming that the retail price of contract and noncontract standard infant formula was the same, the rebate dollars foregone would equal the net cost to the WIC agencies. To estimate the dollar effect of using noncontract standard formula, for each of the 47 WIC agencies that provided data, we multiplied the number of infants provided noncontract standard formula in February 2002 by that agency’s average rebate received per infant to obtain the amount of rebate dollars foregone.
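The per-agency estimate described above is a simple multiplication, which the following Python sketch illustrates. The agency names and dollar figures are hypothetical, not actual survey data; the function applies the report’s assumption that each infant provided noncontract standard formula would otherwise have generated the agency’s average monthly rebate.

```python
# Hypothetical sketch of the foregone-rebate estimate described above.
# Agency names and figures are illustrative, not survey data.

def rebates_forgone(noncontract_infants, avg_rebate_per_infant):
    """Rebates forgone in one month: each infant receiving noncontract
    standard formula is assumed to have otherwise used contract formula
    at the agency's average monthly rebate per infant."""
    return noncontract_infants * avg_rebate_per_infant

# (infants on noncontract standard formula in February,
#  average monthly rebate received per contract-formula infant)
agencies = {
    "Agency A": (120, 34.50),
    "Agency B": (45, 29.75),
}

monthly_total = sum(rebates_forgone(n, r) for n, r in agencies.values())
annual_estimate = monthly_total * 12  # one February month annualized

print(f"February rebates forgone: ${monthly_total:,.2f}")
print(f"Estimated annual effect:  ${annual_estimate:,.2f}")
```

Annualizing a single February month by multiplying by 12, as the report does, understates the total slightly because February is the shortest month.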
Computations made to estimate the rebate dollars foregone by each of 19 WIC agencies with noncontract standard use in excess of the 3.3 percent average for all agencies that reported data in February 2002 are as follows: (1) we multiplied the total infants receiving formula by 0.033 to obtain the number of infants required to attain a 3.3 percent noncontract standard formula usage rate, (2) we subtracted the number of infants required to attain a 3.3 percent noncontract standard formula usage rate from the total infants that received such formula to obtain the number of infants receiving noncontract standard formula in excess of the 3.3 percent rate, and (3) we multiplied the number of infants receiving noncontract standard formula in excess of 3.3 percent by the average monthly rebate received per infant using contract standard formula to obtain the amount of rebate dollars foregone. The total of all rebate dollars foregone by each agency in February was multiplied by 12 to obtain an estimated annual effect of using noncontract standard formula. This is a conservative estimate because February is the shortest month of the year. Data for these calculations were derived from responses to survey questions. In addition to those named above, Chuck Novak, Stan Stenersen, and Ron Wood made key contributions to this report. Luann Moy provided important consultation on methodological issues for the WIC agency survey.
The Department of Agriculture's Food and Nutrition Service (FNS) provided about $3 billion to state agencies in fiscal year 2001 for food assistance, including infant formula, through its Special Supplemental Nutrition Program for Women, Infants and Children (WIC). Most infants receiving formula are given a milk- or soy-based standard formula. To stretch program dollars, each state WIC agency contracts with a single company for purchases of that company's standard formula, for which it receives rebates. These rebates totaled $1.4 billion in fiscal year 2001. Rebates do not apply to other companies' brands of standard formula (noncontract standard formula) or to nonstandard formulas designed to meet special medical or dietary conditions. GAO was directed to examine the extent that WIC agencies have restricted the use of noncontract standard formula to lower the cost of the WIC program. As of February 2002, all 51 of the state WIC agencies included in our survey had policies to restrict the use of noncontract standard formula. Three of the 51 agencies prohibited the use of this formula entirely. The other 48 agencies restricted its use to specific situations, such as if medically prescribed or if needed for religious reasons. Seven of these 48 agencies also set percentage limits, such as 4 percent of all standard formula issued, on the use of noncontract standard formula. In fiscal year 2002, 3.3 percent of the infants using formula in the WIC program received a noncontract standard formula, while 90.3 percent received the contract brand. The remaining 6.4 percent received a medically prescribed nonstandard formula for special medical or dietary needs. There were wide variations between WIC agencies in the percentage of infants who received noncontract standard formula, ranging from a low of zero, for the 3 agencies that prohibited its use, to 10.5 percent.
Likewise, the percentage of infants receiving medically prescribed nonstandard formula ranged from 0.2 percent to 27.7 percent. FNS has not routinely collected from WIC agencies the data that would allow it to monitor the effectiveness of these agencies in restricting the use of either noncontract standard or nonstandard infant formula. Buying noncontract standard formula brands cost the WIC program an estimated $50.9 million in foregone rebates in fiscal year 2002. Although it may be neither feasible nor desirable to prohibit all purchases of noncontract standard formula, rebates would have increased by $13.8 million if every state had a noncontract standard formula usage rate no higher than the average of 3.3 percent reported across all agencies.
Several security incidents in the late 1990s highlighted the need for improvements at DOE. For example, the possible loss of nuclear weapons design information and the “missing” computer hard drives at Los Alamos National Laboratory revealed important weaknesses in security. More broadly, many reports have criticized DOE security: the President’s Foreign Intelligence Advisory Board report, the Cox Committee report, and a number of our reports on particular aspects of DOE’s security program. In response to individual events and reports, DOE, and later NNSA, developed initiatives intended to address nuclear security problems. Numerous initiatives were undertaken to strengthen, among other things, personnel, physical, information, and cyber security as well as DOE’s counterintelligence program. Because of their importance, the initiatives were in many cases special efforts undertaken outside the established departmental processes for policy development, which include, among other things, the opportunity for all affected parties to review and comment on proposed policies. DOE and NNSA security activities associated with the initiatives generally fall under two major offices in each organization. For DOE headquarters, these are the Office of Security and the Office of Counterintelligence. The Office of Security is responsible for establishing policies and procedures to protect, among other things, nuclear materials and information at all DOE and NNSA facilities at headquarters and in the field. The Office of Counterintelligence is responsible for setting counterintelligence policy for DOE and NNSA, as well as gathering information and conducting activities to protect against espionage and other intelligence activities at non-NNSA sites. For NNSA, the two major offices are the Office of Defense Nuclear Security and the Office of Defense Nuclear Counterintelligence. These offices administer and manage security and counterintelligence functions within NNSA.
Security activities are also carried out in the field at DOE and NNSA operations offices, area offices, laboratories, and production facilities. NNSA’s field structure includes national weapons laboratories, production facilities, and naval reactors program sites. Among the three national laboratories are Lawrence Livermore in California and Sandia in New Mexico, which conduct research and development for the nuclear weapons program and a broad range of nonnuclear research. The Pantex Plant in Texas is one of four production sites. Pantex assembles and disassembles nuclear weapons; stores nuclear weapons components on an interim basis; and develops, fabricates, and tests explosive components for nuclear weapons. The Bettis Atomic Power Laboratory in Pennsylvania is one of two naval reactor laboratories. Among other activities, Bettis conducts research, designs new reactor and propulsion systems, and provides technical expertise to the Navy’s nuclear fleet. DOE and NNSA have implemented 64 percent of the 75 nuclear security initiatives developed since 1998. Of the remaining initiatives, most are to be completed by December 2002. Successful implementation of the initiatives can enhance security at NNSA facilities. There are three lessons to be learned from implementing these initiatives that can help ensure future initiatives achieve their intended benefits. First, field perspectives should be fully considered in the development of initiatives. Some initiatives, such as the development of a new foreign visits and assignments database, were developed without fully considering the perspectives of contractor and NNSA staff in the field, leading to operational inefficiencies and staff frustration. Second, initiatives should be clearly communicated to the field. Initiatives were not always clearly communicated to the field, resulting in confusion among contractor and NNSA field staff regarding what requirements they needed to implement. 
Third, a coordinated process for implementing initiatives could be beneficial. Some sites did not have a coordinated process for implementing initiatives, although at the Pantex Plant we observed a potential best practice in which a team approach for implementing initiatives had been developed. These lessons to be learned do not pertain to the naval reactors program because of its unique security structure and program within NNSA. DOE and NNSA have made progress in implementing the 75 nuclear security initiatives developed since 1998. As of January 2002, 48—or 64 percent—of the initiatives had been completed. DOE and NNSA report that 19 initiatives will be completed by December 2002 and that one will be completed in 2007. DOE and NNSA do not have expected completion dates for the remaining seven initiatives. Table 1 shows the general status of the initiatives, while appendix II provides details on the status of each initiative. Successful implementation of the initiatives can reduce the likelihood of security problems and therefore enhance security at NNSA facilities. For example, DOE has eliminated the backlog of security clearance investigations and reinvestigations of employees with access to classified information. Eliminating this backlog ensures that those employees with access to classified information have had their backgrounds checked and that cleared personnel needed in important mission-related areas are available for work. Other initiatives can strengthen controls over cyber security. For example, DOE has published 29 cyber security directives for classified and unclassified systems and has provided cyber security training for system administrators and managers. In addition, the counterintelligence program has been improved. For example, DOE and NNSA have integrated counterintelligence and foreign intelligence operational and analytic efforts throughout the nuclear weapons complex. 
This integration should lead to improved analyses by counterintelligence personnel at headquarters and in the field due to their increased access to the expertise of, and information available through, foreign intelligence staff. DOE and NNSA have 27 initiatives that are still in progress. These initiatives address a broad range of security areas, including information security, physical security, nuclear material accountability and control, cyber security, and counterintelligence. According to DOE and NNSA, 19 of these initiatives will be completed by December 2002. Another initiative, intended to improve communication with employees regarding security, will be completed in 2007. DOE and NNSA could not provide specific completion dates for the remaining seven initiatives. Two of the seven are cyber security initiatives related to the implementation of a cyber security architecture program and the development of a research and development capability for DOE. As such, according to DOE officials, these initiatives represent continuous efforts. For the other five, DOE and NNSA officials told us they could not develop reasonable completion dates. For example, DOE officials said that they do not have a completion date for the initiative to encrypt selected classified electronic media because they are waiting for the National Institute of Standards and Technology to provide a list of qualified vendors that meet the new advanced encryption standard. Three lessons can be learned from DOE’s and NNSA’s experience in implementing the initiatives that can help ensure future initiatives achieve their intended benefits. First, field perspectives should be fully considered in the development of initiatives. Second, initiatives should be clearly communicated to the field. Third, a coordinated process for implementing initiatives could be beneficial. 
Contractor and NNSA field staff at three sites told us that their perspectives were not fully considered in the development of initiatives. The initiatives were typically formulated at headquarters by security staff without full review, comment, or discussion from the field. In contrast, for proposed policies and directives, DOE and NNSA have a formal review and comment process in place, through which field staff can provide input. For example, according to contractor staff at the two national laboratories we visited, field perspectives on system specifications were not fully considered in the development of DOE’s new foreign visits and assignments database. As a result, it is incompatible with local databases at these two sites. The volume of foreign interactions at these sites makes this problem significant. Because of the database incompatibilities, information must be manually entered into DOE’s database by contractor staff at these sites, rather than being uploaded electronically. Further, at one of these sites, DOE’s database is being used only on a limited basis because of these problems. Contractor officials at the two sites said that had they been involved more when this initiative was being developed, these problems might have been avoided or reduced. Office of Security officials admitted that participation by field staff was constrained by the fast-track approach to implementation. However, these officials pointed out that since the database became operational, field staff have been actively included in continuing program development, system enhancement, and training activities. Another example of difficulties caused by the lack of full consideration for field perspectives occurred in an initiative requiring a departmentwide inventory of electronic media containing certain classified information. This initiative required a complete inventory at all sites, within 30 days, of all electronic media containing certain classified information.
Contractor officials at three sites told us that problems they experienced implementing this initiative might have been foreseen and mitigated if field perspectives had been more fully considered in its development. For example, security staff at the three sites said that unclear wording in the initiative led to confusion and debate as to what media and information were actually covered by the initiative. Ultimately, staff at each site interpreted and implemented the initiative based on their local decisions as to its meaning and intent. Further, staff at two sites told us that the requirement to complete the inventory within 30 days was unrealistic given the quantity of affected media at their sites. As a result, their efforts were rushed and some aspects of the inventory, such as inaccurate reading of bar codes at one site, caused difficulties that they were still trying to resolve at the time of our visits. Contractor and NNSA field staff at three sites told us that the initiatives were not always clearly communicated to them from headquarters. There was no systematic, uniform process in place for notifying sites of initiatives, and in some cases the initiatives were communicated through web sites, memorandums, and word of mouth. For example, contractor officials at one national laboratory told us that multiple offices within DOE and NNSA provided guidance to them on some cyber security initiatives, often through informal means such as web site postings or verbal communication. This lack of clear communication produced confusion at the site about which requirements they needed to implement. In regard to two physical security initiatives, there is some confusion as to who is responsible for their completion. One of these initiatives addresses the hiring of additional security personnel and security maintenance technicians; the other addresses accelerating upgrades to physical safeguards and security. 
Headquarters states that these are primarily field initiatives, while contractor security staff at three sites we visited told us that they had received no guidance on or notification of these initiatives and did not know how the initiatives pertained to their sites. Although each of the sites had ongoing activities for improving physical security, the activities were not a result of the initiatives. Rather, the activities were an outcome of either internal site security assessments or external reviews by DOE’s Office of Independent Oversight and Performance Assurance. In light of the attacks of September 11, 2001, both of these initiatives may be of increased importance, and the need to clearly communicate to field sites the intended actions and outcomes associated with them is even more crucial. Contractor and NNSA officials at Pantex have developed a formal, coordinated process for rapidly implementing initiatives as they are announced from headquarters. Under this process, as soon as site staff become aware of a new initiative, key contractor and NNSA officials from all security areas meet as a team to develop an initial implementation plan for the initiative. The team identifies all those individuals and offices that should be involved in implementation, the potential impacts on the overall security program, the best way to ensure that the initiative is implemented effectively, and the associated costs and other resource requirements. The result is early buy-in from all security areas regarding the site’s implementation strategy, not just from the security area most affected by the initiative. Importantly, the development and successful use of this rapid implementation process has been formally incorporated into the Pantex site contract as a performance objective for the contractor. Pantex staff told us that this process has worked well for them and has allowed them to quickly respond to initiatives in a way that minimizes implementation problems. 
For example, they said that by using this process, Pantex was able to move more efficiently to determine a strategy for interpreting and implementing the required inventory of classified electronic media that caused more problems at other sites. In contrast, at two field sites, implementation of initiatives was conducted primarily by contractor staff in the security area most affected by the initiatives, rather than with the coordinated input of staff from all security areas. While staff at these locations were generally able to implement the new requirements, a team approach involving staff with other areas of security expertise and responsibility might have helped identify more efficient or effective alternative implementation strategies. Further, this broader involvement might have provided insights into unintended outcomes of implementation for the overall security program and ways to avoid or minimize them. Therefore, the process at the Pantex Plant could be a potential best practice for other NNSA sites to consider. Since NNSA’s creation, its officials have taken some steps to develop a security structure and program, including staffing offices, developing guidance, reviewing security policies and procedures, and initiating actions to create a security-oriented culture. Additionally, in response to the September 11 terrorist attacks, both headquarters and NNSA field sites have taken a number of short-term actions to improve security and have initiated other long-term activities aimed at strengthening their security structure and program. However, several key issues still need to be addressed to ensure an effective security structure and program. First, NNSA’s overall organizational structure is not completely functional, including the newly established facilities and operations office, which is to oversee, among other things, implementation of NNSA’s safeguards and security program and coordinate with field sites. 
Second, the roles and authorities between DOE and NNSA security offices have not been clearly articulated, resulting in confusion and uncertainty among contractor and NNSA field staff regarding what policies they are required to implement and which offices have authority over them. Third, methods for evaluating the effectiveness of security are still being developed, with NNSA’s counterintelligence program just beginning to explore the development of such methods, and NNSA’s security program not yet having begun such an effort because of other higher priorities. NNSA officials have taken some steps to develop a security structure and program. In this regard, both the Office of Defense Nuclear Security and the Office of Defense Nuclear Counterintelligence have brought on staff to perform headquarters functions. As of January 2002, the Office of Defense Nuclear Security had reached its goal of 7 staff, including the chief, and the Office of Defense Nuclear Counterintelligence had filled 9 of its 11 staff positions, including the chief. Both offices have also begun developing guidance for implementing DOE policies and procedures at NNSA facilities. For example, Defense Nuclear Security has issued an initial “Implementation Bulletin” for DOE’s Safeguards and Security Program order, which provides guidance on how this order should be implemented at NNSA facilities. The order is the foundation for many security activities throughout the nuclear weapons complex. The issuance of the bulletin for this order was a needed first step toward adapting DOE policies for NNSA’s use. The office’s work on other implementation bulletins was delayed by its focus on responding to the events of September 11. However, bulletins for some key safeguards and security areas are being drafted, with issuance expected by early spring of 2002. NNSA, along with DOE, is also completing work associated with a comprehensive 6-month review of existing and draft security policies and procedures. 
The working teams that conducted the review were composed of headquarters and field staff, including federal and contractor employees. The working teams identified three categories of issues related to problem policies and procedures. These were (1) those about which there was confusion regarding implementation or interpretation, (2) those for which the language needed clarification or where minor policy changes were needed, and (3) those for which there was a fundamental difference of opinion among team members regarding appropriate departmental policy. To correct the identified problems, NNSA and DOE will address the policies and procedures in each of the three categories in different ways. Specifically, an NNSA implementation bulletin will be developed for each policy and procedure in the first category; the Field Management Council will review those in the second category; and a decision by the secretary of energy will be required for the third category, if a change is deemed appropriate. The report on the outcomes of this comprehensive review, and related recommendations, is still in draft form and has not yet been publicly released. Along with these activities, NNSA has also initiated actions to create a security-oriented culture in its organization. For example, NNSA’s and DOE’s counterintelligence offices have completed a self-initiated communications effort to support counterintelligence awareness throughout NNSA and DOE. This effort included the completion of a comprehensive communications/awareness strategy and the establishment of a task force with membership from counterintelligence offices across the DOE/NNSA complex to monitor progress, share information, and maintain program momentum. The effort also included the development of a communications “tool kit,” which was provided to all senior counterintelligence officers across the complex for use in their awareness presentations. 
These presentations are an ongoing part of routine counterintelligence program activities. Similarly, Defense Nuclear Security has begun a self-initiated program called “Integrated Safeguards and Security Management.” Among the guiding principles of this program are individual responsibility for and participation in security, as well as line management responsibility for safeguards and security. The purpose of this program is to integrate security awareness into management and work practices at all levels and to ensure that all employees from management on down perceive security as a fundamental component of their day-to-day activities. The program should be fully implemented by the end of 2002. According to NNSA officials, establishing an effective security structure and program is a long-term process. The chief of defense nuclear security described his program as “a work in progress” and told us that he envisions a 3-year process for program development. He said that the first year—in which he is currently working—entails solving problems, such as the organizational structure, and understanding the budget. The second year will focus on setting up the security budget process within NNSA and “winning the hearts and minds” of employees. The third year will involve assessing the previous 2 years’ actions and making corrections as needed. Similarly, the chief of defense nuclear counterintelligence told us that her program is still evolving and that fully establishing it will require various actions over the course of several years. Along with these internal plans and activities, the scope and direction of NNSA’s security structure and program may also be affected by external events such as the terrorist attacks of September 11. Because of this, it seems inevitable that new initiatives will be developed in the future that will affect program goals and directions. 
In response to the September 11 terrorist attacks, both headquarters and NNSA field sites took a number of short-term actions to improve security. For example, immediately following the attacks, these NNSA facilities instituted a heightened state of alert, or security condition, in accordance with DOE orders. In conjunction with this heightened condition, security measures were enhanced to include additional barriers and access controls, increased vehicle searches, and increased patrols of perimeters and critical facilities. In addition, emergency operations centers at headquarters and in the field were staffed, and DOE and NNSA headquarters security personnel provided threat advisories and security recommendations to field sites via complexwide videoconferences. Further, headquarters counterintelligence staff distributed information to field personnel on threats from foreign intelligence activities, and site counterintelligence officers provided briefings to site management and other employees on these threats. Counterintelligence staff also took steps to increase their liaison with outside agencies, including the Federal Bureau of Investigation. As a result of the September 11 attacks, NNSA also began several long- term activities to strengthen its security structure and program. For example, the weekend after the attacks, NNSA initiated a vulnerability assessment of its high-risk targets. This “72-Hour Security Review” rated NNSA facilities against various criteria, including the possibility of nuclear detonation; radiological dispersion; and loss of program capability, technical staff, and life. In addition, as part of this review, each site was asked to identify vulnerabilities and the projected costs of correcting them. From this review, NNSA compiled a prioritized list of needed security improvements. In addition to this review, NNSA established a 90-day Combating Terrorism Task Force to review headquarters and field actions to protect NNSA interests. 
The task force has initiated work to revise a key DOE/NNSA security planning document—the Design Basis Threat. Other task force activities include site-by-site security reviews and vulnerability assessments, an assessment of nuclear materials management practices, and reviews of personnel security and transportation security. The director of security for the naval reactors program told us that his program’s actions since September 11 were consistent with those taken by DOE and the rest of NNSA. Naval reactors participated in the 72-Hour Security Review, and the program is assessing identified vulnerabilities and determining requirements for short- and long-term actions. Despite the actions that NNSA has already taken to develop a security structure and program, several key issues still need to be addressed to ensure that the structure and program are effective and to build upon the benefits of the initiatives. First, NNSA’s overall security structure is not completely functional. Second, the respective roles and authorities of DOE and NNSA security offices have not been clearly articulated. Third, methods for evaluating the effectiveness of security are still being developed. In May 2001, NNSA’s administrator identified a proposed structure for his organization. This structure includes staff offices such as Defense Nuclear Security and Defense Nuclear Counterintelligence, program offices such as Defense Nuclear Nonproliferation and Defense Programs, and support offices such as Management and Administration and Facilities and Operations. However, in December 2001, we reported that a clearly delineated overall organizational structure still did not exist. In addition, during our review, headquarters staff, as well as contractor and NNSA field officials at three of the sites we visited, told us that NNSA’s overall organizational structure is not completely functional. 
For example, the structure includes a new facilities and operations office to oversee, among other things, implementation of safeguards and security programs and coordinate with field sites. While the office was formally established in October 2001, it is not yet clear how the office will function with other NNSA offices. Of particular concern to some contractor and NNSA field staff is how the line of authority for security accountability will be carried out regarding this new office and existing NNSA operations and area offices. In this regard, staff were not sure which offices would be in charge of what activities, to whom contractor staff would report, and from whom contractors would receive direction. While contractor and NNSA field staff we spoke with were generally hopeful that the new facilities and operations office might be a positive step, a few were concerned that it might simply add another layer of bureaucracy to NNSA’s organization. Other areas of uncertainty related to the facilities and operations office included how the directors of NNSA’s national laboratories would fit into this organizational structure and where security staff assigned to the office would be located (whether at headquarters or in the field). The chief of defense nuclear security, who will also temporarily be in charge of the security component within Facilities and Operations, told us that his current plan calls for about 23 or 24 security staff, with some located in the field. He also told us that the mission and functions of the security component within Facilities and Operations are more clearly delineated in the administrator’s progress report. As of February 1, 2002, this report was undergoing internal review. Because of the broad scope and various locations of DOE and NNSA security activities, a clear understanding of roles and authorities between DOE and NNSA security offices is essential for an effective security program to be implemented at NNSA. 
However, some NNSA headquarters staff, as well as both contractor and NNSA field staff at three sites, told us that the roles and authorities between DOE and NNSA security offices have not been clearly articulated. NNSA and DOE headquarters counterintelligence officials have a memorandum of understanding between their two offices that delineates their respective roles and authorities. However, contractor and NNSA field staff at two sites told us the memorandum has not worked in practice because they still receive direction from both offices, resulting in a sense in the field that they “serve two masters.” The heads of the two counterintelligence offices told us that they recognize this problem and that they are working to develop additional guidance clarifying roles and authorities. NNSA’s Office of Defense Nuclear Security and DOE’s Office of Security do not have any memorandum of understanding. According to the chief of defense nuclear security, he and DOE’s director of security meet on a regular basis when resolution of issues is warranted. Further, he said that although no general memorandum of understanding is planned between the two offices, memorandums for specific areas such as classification might be developed. However, some contractor and NNSA field staff at two sites told us that they receive guidance from both NNSA and DOE security offices. This has resulted in confusion and uncertainty about which policies contractors and field staff are required to implement and which offices have authority over them. For example, NNSA security staff at one site said that contradictory input received from DOE and NNSA during the development of a fundamental security planning document— the Site Safeguards and Security Plan—led to confusion and frustration regarding what needed to be done in order to have the document approved. 
Further, these staff told us that they questioned why DOE was involved in the process at all, since their understanding was that NNSA has sole responsibility for implementing security policies in the field. The chief of defense nuclear security told us that the security component of the newly established facilities and operations office is expected to help address this type of problem in the future. Methods for evaluating security, both qualitative and quantitative, provide a way to assess the effectiveness of, and improvements in, all aspects of the security program. NNSA and DOE officials do not yet have such methods in place. Without these methods, NNSA and DOE cannot determine the impact of individual initiatives or the effectiveness of their security. These evaluation methods can also lead to the establishment of security-related performance measures, which could assist the agencies in preparing the annual performance plan required by the Government Performance and Results Act of 1993. In this regard, we have identified problems with DOE’s security-related performance measures in its annual performance plan. Specifically, some performance measures DOE has been using do not really assess the overall effectiveness of security or improvements in performance. Rather, these measures are process-oriented, focusing on whether specific security activities are carried out. NNSA’s and DOE’s counterintelligence offices have begun to jointly explore the creation of a set of metrics for evaluating the effectiveness of their activities. In this regard, they have been working with Department of Defense counterintelligence officials to learn from and establish benchmarks against that agency’s program. Additionally, they plan to involve contractor and NNSA field staff in this effort. NNSA and DOE counterintelligence officials told us that, presently, their program cannot assess the value added from an activity. 
Eventually, they hope that they will be able to evaluate effectiveness and improvements in all aspects of their program. These officials also said that their metrics development effort should take several years to complete. NNSA’s Office of Defense Nuclear Security has not yet begun to develop such methods because of higher-priority work. However, it has incorporated some goals, strategic indicators, and performance measures into its strategic planning documents. The chief of this office told us that methods for assessing the progress of his program are at least a year away and that the methods will likely be qualitative rather than quantitative in nature. He further told us that approaches to evaluating his security program are likely to change due to world events. DOE’s Office of Security has a separate effort underway to produce new metrics for evaluating progress in its programs. This effort initially focused on cyber security but was expanded to include the full range of DOE security activities overseen by this office, such as physical, personnel, and information security. As with NNSA’s efforts, DOE officials expect their metrics development process to be a long-term undertaking. The terrorist attacks of September 11, 2001, bring into sharp focus the necessity for all federal agencies to take seriously threats to their assets. In light of these attacks, agency efforts to enhance security take on even greater urgency, especially in relation to the protection of assets in the nation’s nuclear weapons complex. DOE and NNSA have made progress in implementing many of the nuclear security initiatives developed since 1998. There are lessons to be learned from the implementation of these initiatives. These lessons can be very important for any initiatives stemming from the September 11 attacks. 
Involving contractor and NNSA field staff in the development of new initiatives, communicating them clearly to those charged with implementation, and establishing coordinated processes at field sites to implement new requirements would enhance NNSA’s ability to quickly and effectively institute new security activities. NNSA has also made progress in developing a security structure and program. As noted in this report, for this structure and program to be most effective, NNSA must ensure that its overall organizational structure is fully functional, clarify roles and authorities, and continue its efforts to develop methods for evaluating program effectiveness and improvement. NNSA has recognized these issues and has efforts underway to make the overall organizational structure fully functional and develop methods for evaluating the effectiveness of the security program. Nevertheless, both NNSA and DOE could benefit from clarifying the roles and authorities of various security offices. We are making recommendations to the secretary of energy and the NNSA administrator aimed at ensuring that the lessons to be learned from prior initiatives are incorporated into the development and implementation of future initiatives. We are also making a recommendation to better ensure the development of an effective NNSA security structure and program. Specifically, we recommend that the secretary of energy and the NNSA administrator take the following actions: Ensure that contractor and NNSA field staff are substantively involved in the development of security initiatives and that such initiatives are clearly communicated to the field. Consider requiring NNSA field sites to develop a coordinated implementation process that would allow contractor and NNSA staff to quickly address and implement initiatives, using the team approach followed at the Pantex Plant as a potential best practice for other sites. 
Clearly define roles and authorities of DOE and NNSA security and counterintelligence offices to ensure that contractors and NNSA field staff understand what policies they are required to implement and which offices have authority over them. We provided DOE and NNSA with a draft of this report for review and comment. They concurred with all three of our recommendations. They believe that many elements of the NNSA administrator’s recently issued February 25, 2002, report to the Congress on the organization and operations of NNSA will address our recommendations. In our view, while there are promising elements of that report, such as establishing clear lines of authority between NNSA and its contractors and promising to hold federal staff and contractors more accountable for performing NNSA’s missions, it is only a framework for their eventual reorganization. Accordingly, it is not clear from DOE’s and NNSA’s comments how the February 25 report will address certain aspects of our recommendations. For example, we are recommending that NNSA consider requiring its field sites to develop a coordinated implementation process, modeled on what we saw at Pantex, to respond to security initiatives. The comments from DOE and NNSA note that the new organizational structure will allow for dynamic interaction to achieve goals quickly. It is not clear how this responds to our recommendation. Further, we are recommending that there be clearly defined roles and authorities of DOE and NNSA security offices. The comments imply that the organizational structure and functions laid out in the February 25 report will clarify for field staff the roles and authorities of the separate security offices in DOE and NNSA. However, the report does not address some of the issues we identified through our work regarding how DOE and NNSA security offices interact and function together. 
NNSA is developing a plan with milestones to guide the many actions needed to successfully implement its reorganization. Including specific activities and corresponding time frames regarding our recommendations in this implementation plan would help ensure that the recommendations are effectively addressed. DOE and NNSA also made a general comment related to the process used at Lawrence Livermore National Laboratory for implementing security initiatives. They stated that Livermore’s process, while less formalized than the one at Pantex, is coordinated, integrated, effective, and successful. We agree that Livermore’s process has been successful, but we believe that a more formal coordinated process such as that used at Pantex would be beneficial for Livermore and others to consider. In our view, the process at Pantex provides the greatest assurance that initiatives will be implemented in the most effective and efficient manner, with the highest level of accountability. Finally, DOE and NNSA made specific comments of a technical nature that we incorporated as appropriate. DOE’s and NNSA’s comments are provided in appendix III. To address our objectives, we interviewed officials and obtained documents from DOE, NNSA, and contractor officials. Further, we visited DOE and NNSA headquarters, as well as selected NNSA field sites. Our scope and methodology are discussed in detail in appendix I. We performed our review from January 2001 through January 2002 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to the secretary of energy, the administrator of NNSA, the director of the Office of Management and Budget, and appropriate congressional committees. We will make copies available to others on request. 
To determine the extent to which Department of Energy (DOE) and National Nuclear Security Administration (NNSA) security initiatives had been implemented at NNSA facilities, we worked with DOE and NNSA headquarters offices to develop a comprehensive list of all nuclear security initiatives since 1998. The primary offices with which we worked were DOE’s Office of Security and Office of Counterintelligence and NNSA’s Office of Defense Nuclear Security and Office of Defense Nuclear Counterintelligence. We identified 75 nuclear security-related initiatives based on our review of presidential decision directives, announcements by the secretary of energy or other high-ranking department officials, and initiatives begun by DOE and NNSA security offices between February 1998 and January 2001. We excluded from our review several other initiatives from this time period because they did not relate to nuclear security, they were begun by and pertained only to the unique naval reactors program, or they were no longer applicable because the organizations affected by them either no longer existed or had indefinitely suspended operations. We did not assess whether these 75 initiatives addressed all security problems at DOE and NNSA. For the 75 initiatives, we asked NNSA and DOE to provide us with information on the status of, and actions or plans associated with, each. For those initiatives identified as completed, we collected documents and interviewed officials to independently verify their completion. We also visited selected field sites that are representative of the various aspects of NNSA’s work to determine whether the initiatives requiring field implementation were in place at these sites. Specifically, we visited Lawrence Livermore National Laboratory in California, Sandia National Laboratories in New Mexico, the Pantex Plant in Texas, and the Bettis Atomic Power Laboratory in Pennsylvania. 
Livermore and Sandia are national laboratories, Pantex is a production facility, and Bettis is a naval reactors program site. At each location, we met with both federal and contractor officials, obtained pertinent supporting documentation, and verified through physical observation and other means the extent of implementation. To determine the extent to which NNSA has developed an organizational structure for security and a program to safeguard nuclear information and materials, we interviewed DOE and NNSA headquarters officials, as well as NNSA and contractor officials in the field. We also reviewed policy and planning documents, including orders, implementation guidance, and reports. We collected information on actions taken by DOE and NNSA in response to the September 11 terrorist attacks, but we did not evaluate the implementation of these actions.
Completed. Completed. Completed. Completed. Actions to amend contracts and finalize order are in progress. Contracts are expected to be amended once the draft order is signed by the secretary of energy, anticipated in early 2002. Completed.
Ensure that laboratory counterintelligence personnel have direct access to laboratory directors and concurrently report to DOE’s counterintelligence director.
Transfer DOE counterintelligence oversight from operations and field offices to headquarters.
Prepare, within 90 days of the director’s arrival, a report to the secretary to include a strategic plan for achieving long-term goals and recommendations on strengthening the counterintelligence program.
Initiate an internal inspection process to review annually the counterintelligence program and provide results to the secretary.
Integrate counterintelligence and foreign intelligence operational and analytic efforts throughout DOE and the laboratories.
Completed. Completed. Completed. Completed. Actions related to identification and protection of sensitive unclassified information are in progress. 
Completion is expected in early 2002. Completed.
Advise the assistant to the president for national security affairs, within 120 days, on the actions taken and specific remedies designed to implement Presidential Decision Directive 61.
May 1998: Appoint departmental officials to be responsible for internal critical infrastructure protection.
March 1999: Develop counterintelligence Inquiry Management and Analysis Capability pilot program.
Monitor implementation of counterintelligence plan.
Completed. Completed. Completed.
Review counterintelligence investigative files.
Report annually to the Congress on counterintelligence program.
Actions to update order are still in progress. Completion is expected in March 2002. Actions to complete outstanding recommendations are in progress. Completion is expected in early 2002. Actions to review additional files are in progress. Completion is expected in 2002. Completed. Completed.
Initiative: Hire additional security personnel and security maintenance technicians. Improve and test plans to recover special nuclear materials in the unlikely event they are diverted. Finalize efforts to ensure that materials accounting systems are accurate.
Status: DOE headquarters officials state that this is a field initiative. However, field sites we visited had not been tasked with actions related to it. Initiative is currently on hold pending receipt of additional budget authority. DOE/NNSA did not provide an expected completion date for this initiative. DOE/NNSA did not provide information on the status or the expected completion of this initiative. Actions to expand and upgrade materials accounting systems are in progress. Completion is expected by fiscal year 2002. Completed. Completed.
Eliminate the backlog of reinvestigations of existing security clearances. 
Establish a counterintelligence and security team to make inspection visits to the five national security laboratories (Los Alamos, Lawrence Livermore, Sandia, Oak Ridge, and Pacific Northwest national laboratories).
Order an interim security review in July of the three operations rated marginal.
May 1999: Establish Office of Security and Emergency Operations.
Completed. Completed.
Establish Zero Tolerance Security Policy.
Accelerate upgrades to physical safeguards and security.
Completed. Actions to bring staffing up to approved levels are in progress. Completion is expected by fiscal year 2002. Completed. Ratings have improved since 1997/1998, and additional actions are in progress. DOE/NNSA did not provide information on the expected completion date of this initiative. Actions related to headquarters upgrades are in progress and scheduled for completion in fiscal year 2002. DOE headquarters states that NNSA and program offices are responsible for field upgrades. However, field sites we visited had not been tasked with actions related to this initiative. Nevertheless, the sites had ongoing activities related to physical security upgrades that they were prioritizing with input from NNSA’s Office of Defense Nuclear Security. Completed.
Extend the automatic declassification deadline of Executive Order 12958 by 18 months.
Develop cyber security policies for classified and unclassified systems.
Twenty-nine directives were published from fiscal years 1999 through 2001. Actions to develop 10 additional directives are in progress. Completion is expected in December 2002.
Initiative: Establish departmentwide computer security training program for personnel with cyber security responsibilities. Implement cyber security architecture program for the operation of existing systems and the development of future systems. Attain research and development capability to research innovative cyber security protection capabilities and technology. 
Status: Training provided for system administrators/managers. Actions to provide further training and restructure/revise classified computer awareness courses are in progress. Completion is expected in September 2002. Actions to continue departmentwide cyber security infrastructure upgrades are in progress. DOE states that the expected completion date is not relevant since this is a continuous effort. Actions to continue this research are in progress. DOE states that there is no completion date for this initiative since it is an ongoing effort. Completed.
Request additional $50 million over fiscal years 2000 and 2001 to support additional cyber security improvements.
Create a new Office of Independent Oversight and Performance Assurance to independently evaluate emergency and security operations.
Require all facilities to use intrusion detection tools and report all intrusions to counterintelligence and the FBI’s National Infrastructure Protection Center for investigation and analysis.
Sign memorandum of agreement between DOE and the FBI to ensure better coordination on DOE security and counterintelligence operations and FBI espionage investigations.
July 1999: FV&A Notice and Policy.
Completed. Completed. Completed. Completed. Actions to determine the scope of implementation are in progress. Completion is expected in 2002. Completed. Completed. Completed. Completed.
Establish an FV&A database.
Conduct departmentwide security stand-down for day-long program of security training and education.
August 1999: Establish consolidated security budget.
Actions to finalize the order are in progress. DOE did not provide an expected completion date for this initiative. Completed. Completed. Completed. 
October 1999: Impose moratorium on DOE sensitive country nationals to weapons laboratories.
December 1999: Issue final rules governing the use of polygraph examinations to support counterintelligence and security activities at DOE.
Enhance verification procedures of authorized personnel access to vaults to record duration and time of access.
Have responsible operations/field offices conduct, within 30 days, a comprehensive evaluation of vault procedures with recommendations for policy and procedural improvements across the DOE complex.
Completed. Completed. Completed. Completed.
Increase security requirements (higher protection level) mandated for classified encyclopedic databases.
Completed. Actions to update physical security policies are in progress. Completion is expected in early 2002. Actions are in progress, but on hold until the National Institute of Standards and Technology provides DOE a list of qualified vendors that meet the new Advanced Encryption Standard. Until that time, DOE has implemented interim encryption measures. DOE states that an expected completion date is unknown at this time. Actions to complete the requirements are in progress. DOE states that this initiative has been subsumed by the NNSA “higher fences” initiative. Completion is expected in March 2002. Completed.
Complete a DOE-wide mandatory inventory, within 30 days, for electronic media containing compendia of classified information such as that contained on the missing hard drives.
Conduct an inventory of all NEST and Accident Response Group databases within 10 days.
August 2000: Establish FV&A Policy Review Team.
Self-initiated by specific programs/offices: Increase security at NNSA via “Higher Fences” Program (Defense Nuclear Security initiative).
Completed. Completed. Completed. Actions to finalize the implementation review conference draft report are in progress. Completion is expected in 2002. 
Establish the Integrated Safeguards and Security Management initiative/personnel education initiative (Defense Nuclear Security initiative).
Actions to finalize program are in progress. Completion is expected in March 2002. Actions to define roles and responsibilities are in progress. Completion is expected in early 2002. Actions to involve management are in progress. Completion is expected in 2002. Actions to continue next phase are in progress. Completion is expected in 2002.
Initiative: Develop communications initiative (Defense Nuclear Security initiative).
Status: Actions to develop long-range plan and acquire funding are in progress. Completion is expected in 2007. Completed.
Develop and implement a counterintelligence collections program within DOE responsive to community collection requirements and supporting DOE analytical requirements (Office of Counterintelligence initiative).
Develop communications initiative specifically to support counterintelligence awareness throughout DOE and NNSA (Office of Counterintelligence initiative).
Completed. Actions to update and improve the database, such as migrating it to a web-based system, are in progress. Completion is expected in October 2002. Completed.
Create Counterintelligence Training Academy (Office of Counterintelligence initiative).
Develop foreign visits and assignments “facilitator concept” (Foreign Visits and Assignments Office initiative).
Completed. Completed.
Initiatives not applicable to the naval reactors program.
Department of Energy: Fundamental Reassessment Needed to Address Major Mission, Structure, and Accountability Problems. GAO-02-51. Washington, D.C.: December 21, 2001.
NNSA Management: Progress in the Implementation of Title 32. GAO-02-93R. Washington, D.C.: December 12, 2001.
Nuclear Security: DOE Needs to Improve Control Over Classified Information. GAO-01-806. Washington, D.C.: August 24, 2001. 
Department of Energy: Views on the Progress of the National Nuclear Security Administration in Implementing Title 32. GAO-01-602T. Washington, D.C.: April 4, 2001.
Information Security: Safeguarding of Data in Excessed Department of Energy Computers. GAO-01-469. Washington, D.C.: March 29, 2001.
Major Management Challenges and Program Risks: Department of Energy. GAO-01-246. Washington, D.C.: January 2001.
Nuclear Security: Information on DOE’s Requirements for Protecting and Controlling Classified Documents. T-RCED-00-247. Washington, D.C.: July 11, 2000.
Department of Energy: National Security Controls Over Contractors Traveling to Foreign Countries Need Strengthening. RCED-00-140. Washington, D.C.: June 26, 2000.
Information Security: Vulnerabilities in DOE’s Systems for Unclassified Civilian Research. AIMD-00-140. Washington, D.C.: June 9, 2000.
Department of Energy: Views on Proposed Civil Penalties, Security Oversight, and External Safety Regulation Legislation. T-RCED-00-135. Washington, D.C.: March 22, 2000.
Nuclear Security: Security Issues at DOE and Its Newly Created National Nuclear Security Administration. T-RCED-00-123. Washington, D.C.: March 14, 2000.
Department of Energy: Views on DOE’s Plan to Establish the National Nuclear Security Administration. T-RCED-00-113. Washington, D.C.: March 2, 2000.
Nuclear Security: Improvements Needed in DOE’s Safeguards and Security Oversight. RCED-00-62. Washington, D.C.: February 24, 2000.
Department of Energy: Need to Address Longstanding Management Weaknesses. T-RCED-99-255. Washington, D.C.: July 13, 1999.
Department of Energy: Key Factors Underlying Security Problems at DOE Facilities. T-RCED-99-159. Washington, D.C.: April 20, 1999.
Department of Energy: DOE Needs to Improve Controls Over Foreign Visitors to Its Weapons Laboratories. T-RCED-99-28. Washington, D.C.: October 14, 1998.
Department of Energy: Problems in DOE’s Foreign Visitor Program Persist. T-RCED-99-19. Washington, D.C.: October 6, 1998. 
Department of Energy: DOE Needs to Improve Controls Over Foreign Visitors to Weapons Laboratories. RCED-97-229. Washington, D.C.: September 25, 1997. DOE Security: Information on Foreign Visitors to the Weapons Laboratories. T-RCED-96-260. Washington, D.C.: September 26, 1996.
In response to persistent security weaknesses at nuclear weapons facilities during the late 1990s, the Department of Energy (DOE) undertook several initiatives and Congress created the National Nuclear Security Administration (NNSA) as a separately organized entity within DOE. DOE and NNSA have made progress in implementing many of the 75 initiatives undertaken since 1998, and lessons from these initiatives could help improve implementation of future efforts. DOE and NNSA have completed 64 percent of the initiatives, and most of the rest should be completed by December 2002. NNSA has begun establishing a security organization and program to safeguard nuclear information and materials, but several key issues still need to be addressed to ensure the new program's effectiveness. NNSA has almost completed staffing the two new offices created to lead its security and counterintelligence activities and, with DOE, is completing a detailed review of security policies and procedures. NNSA has also begun specific activities, including training, to create a security-oriented culture in its organization.
SBA’s organizational structure comprises headquarters and regional, district, and area field offices. At the headquarters level, SBA is divided into several key functional areas that manage and set policy for the agency’s programs. Seventeen headquarters offices report to the Office of the Administrator. SBA provides its services to small businesses through a network of regional and district offices, led by the Office of Field Operations (OFO), and area offices, led by the Office of Government Contracting and Business Development (OGCBD), as discussed in greater detail later in this report. Regional offices oversee the district offices and promote the President’s and SBA Administrator’s messages throughout the region. District offices serve as the point of delivery for most SBA programs and services. Four program offices at the headquarters level manage the agency’s programs that provide capital, contracting, counseling, and disaster assistance services to small businesses: the Office of Capital Access, the Office of Entrepreneurial Development, the Office of Disaster Assistance, and OGCBD. OGCBD promotes small business participation in federal contracting through a variety of programs, including programs that provide small businesses with contracting preferences based on socioeconomic designations—the 8(a) Business Development (8(a)), Historically Underutilized Business Zone (HUBZone), women-owned small business (WOSB), and service-disabled veteran-owned small business (SDVOSB) programs. The 8(a) program provides business development assistance to small, disadvantaged businesses and helps them participate in the federal contracting market through sole-source and competitive 8(a) set-aside contracts. The HUBZone program aims to stimulate economic development by helping urban and rural small businesses located in designated economically distressed areas to access federal procurement opportunities.
The SDVOSB program helps service-disabled veteran-owned small businesses acquire federal contracts. The WOSB Federal Contracting program helps women-owned small businesses acquire federal contracts. In addition, SBA administers a prime contracts program, subcontracting assistance program, certificate of competency program, and size determination program to increase federal contracting opportunities for small businesses. These programs, among other things, seek to maximize federal contracting opportunities for small businesses, HUBZone small businesses, women-owned small businesses, and any other firm participating in an OGCBD program. OGCBD has four main offices at the headquarters level: the Office of Business Development (which includes the new All Small Mentor-Protégé program), the Office of Government Contracting, the Office of HUBZone Program, and the Office of Policy, Planning, and Liaison. The Office of Business Development administers the 8(a) business development program and includes the new Office of All Small Mentor-Protégé, which was established in summer 2016 to provide mentor-protégé services to all eligible small businesses. The Office of Government Contracting administers SBA’s prime contracts, subcontracting assistance, WOSB Federal Contracting, certificate of competency, and size determination programs. The Office of HUBZone Program administers the HUBZone program. The Office of Policy, Planning, and Liaison is responsible for implementing small business government contracting legislation and policy through SBA regulations. SBA’s field-office structure consists of 6 area offices, 68 district offices, and 10 regional offices. Area offices may sometimes be co-located with regional and district offices but differ in mission and function. Area offices report to OGCBD and, while headquartered in six cities across the country, cover multiple SBA regions encompassing the states where contracting activity is most prevalent.
The primary function of these offices is to manage government buying activities throughout the country, which includes reviewing potential agency requirements and making recommendations to agency contracting officers on the portion of contracts to set aside for qualified small businesses. This also includes working with federal agencies and small businesses after contracts have been awarded to adjudicate size protests and conduct subcontracting compliance reviews, among other functions. District offices report to OFO and are located in at least one city in each state. The primary functions of these offices are (1) to market all SBA programs and services, such as the aforementioned contracting programs, entrepreneurial development programs, and capital access programs that facilitate loans from lenders to small businesses; (2) to provide business development assistance to entrepreneurs and small business owners; and (3) to support compliance and oversight responsibilities across capital and economic development programs. They also have geographic-specific contracting compliance responsibilities for local businesses in their portfolio. Branch offices and Alternative Work Sites serve as an extension of district offices and are located in areas where local business needs require an additional SBA presence. Regional offices report to OFO and are responsible for marketing SBA and its programs to businesses and local government. Regional offices provide oversight of all district offices in their region and are often located in the same physical location as a district office. OGCBD sets policy for SBA’s government contracting and 8(a) business development programs and coordinates with OFO to implement its programs in field offices. OGCBD creates policies for field staff implementing its programs, including policies that define district office staff responsibilities and identify the counseling procedures that govern how district staff are to service firms.
OGCBD also coordinates with OFO through weekly management calls to exchange information and provide updates on changes to policies and procedures. OGCBD has also coordinated with OFO to evaluate and update position descriptions for staff in field offices implementing its programs, most recently in 2016. SBA’s Office of Policy, Planning and Liaison (OPPL) is responsible for implementing small business government contracting laws and policy through SBA regulations. Executive branch agencies involved in rule making, including SBA, have authority and responsibility for developing and issuing regulations to implement laws. Many laws, regulations, and executive actions govern the federal rule-making process, including the following: Administrative Procedure Act (APA): The APA was enacted in 1946 and established the basic framework of administrative law governing federal agency action, including rule making. The APA governs “notice-and-comment” rule making, also referred to as “informal” or “APA rule making.” This act generally requires (1) publication of a notice of proposed rule making, (2) opportunity for public participation in the rule making by submission of written comments, and (3) publication of a final rule and accompanying statement of basis and purpose not less than 30 days before the rule’s effective date. Congresses and presidents have taken a number of actions to refine and reform this regulatory process since the APA was enacted. Executive Order 12866. Under Executive Order 12866, the Office of Information and Regulatory Affairs (OIRA), within OMB, reviews agencies’ significant regulatory actions (including both proposed and final rules) and is generally required to complete its review within 90 days after an agency formally submits a draft regulation. Each agency is to provide OIRA a list of its planned regulatory actions, indicating those that the agency believes are significant. 
For each rule identified by the agency as, or determined by the Administrator of OIRA to be, a significant regulatory action, the agency submits the rule to OIRA for formal review—including the coordination of interagency review. After receipt of this list, the Administrator of OIRA may also notify the agency that OIRA has determined that a planned regulation is a significant regulatory action within the meaning of the executive order. The order defines significant regulatory actions as those that are likely to result in a rule that may: 1. have an annual effect on the economy of $100 million or more or adversely affect in a material way the economy; a sector of the economy; productivity; competition; jobs; the environment; public health or safety; or state, local, or tribal governments or communities; 2. create a serious inconsistency or otherwise interfere with an action taken or planned by another agency; 3. materially alter the budgetary effect of entitlements, grants, user fees, or loan programs or the rights and obligations of recipients thereof; or 4. raise novel legal or policy issues arising out of legal mandates, the President’s priorities, or the principles set forth in Executive Order 12866. Federal Acquisition Regulation (FAR). Certain acquisition regulations must go through a separate OMB process after the final rule has been published before being added to the FAR. The FAR is a regulation that generally governs acquisitions of goods and services by executive branch agencies. It addresses various aspects of the acquisition process, from acquisition planning to contract formation to contract management. Part 19 of the FAR governs small business contracting programs. 
Federal Register notices proposing or announcing amendments to the FAR are generally issued jointly by the Department of Defense, General Services Administration, and National Aeronautics and Space Administration (NASA), though these items typically receive the concurrence of OMB’s FAR Council. After receiving a memorandum from an agency proposing to amend the FAR, the FAR Council refers potential changes to standing FAR teams for review. The process of amending the FAR can take anywhere from months to years. There are three phases in the federal rule-making process: initiation of rule-making actions, development of proposed rules, and development of final rules. During the initiation phase agency officials identify sources of potential rule makings. Potential rule makings may result from statutory requirements or issues identified through external sources (for example, public hearings or petitions from the regulated community) or internal sources (for example, management agendas). During this phase, agencies gather information that would allow them to determine whether a rule making is needed and to identify potential regulatory options. The second phase of the rule-making process starts when an agency begins developing the proposed rule. During this phase, the agency drafts the rule and begins to address analytical and procedural requirements. Also built into this phase are opportunities for internal and external deliberations and reviews, including official management approval. OIRA may be involved informally at any point during the process. After OIRA completes its review and the agency incorporates resulting changes, the agency publishes the proposed rule in the Federal Register for public comments. In the third phase of the process, the development of the final rule, the agency receives and reviews public comments, finalizes the language, and sends the rule through internal and external agency reviews, among other things. 
Once the comment period closes, the agency responds to the comments either by modifying the rule to incorporate the comments or by otherwise addressing the comments in the final rule. This phase also includes opportunities for internal and external review. Again, if the agency determines that the rule is significant or at OIRA’s request, the agency submits the rule to OIRA for review before final publication. If OIRA’s review results in a change to the final rule, the agency revises the rule before publication. After all changes are made, the final rule as published in the Federal Register includes the date that the rule becomes effective. An agency has certain options to expedite the rule-making process, and Congress has the ability to compel agencies to take action on a rule making if it believes there have been unreasonable delays. The APA includes exceptions to notice and comment procedures for certain categories of rules, such as those dealing with military or foreign affairs and agency management or personnel. Further, APA requirements to publish a proposed rule generally do not apply when an agency finds, for “good cause,” that those procedures are “impracticable, unnecessary, or contrary to the public interest.” Agencies often invoke “good cause,” for example, when Congress prescribes the content of a rule by law, such that prior notice and public comment could not influence the agency’s action and would serve no useful function. If an agency finds that notice and comment would be “impracticable, unnecessary, or contrary to the public interest,” the agency may issue a rule without prior notice and comment and instead solicit public comments after the rule has been promulgated. The agency may then choose to revise the rule in light of these post-promulgation comments. An agency also has the option of issuing an “interim final rule” to expedite the rule-making process. 
Other sources of exceptions to notice-and-comment rule making exist, such as specific statutory provisions that may direct agencies to expedite issuance of final rules. While agencies could be compelled to take action if they have “unreasonably delayed” a regulation or FAR amendment, Congress has seldom, if ever, compelled an agency to do so. OGCBD headquarters sets policies for SBA’s business development and government contracting programs, and SBA staff in field offices and other locations help to implement these programs at the local level. These field staff perform a variety of activities, depending on the program they are supporting. The reporting relationships between field staff and SBA headquarters also vary depending on the program. For example, field staff who implement government contracting programs report to OGCBD, while staff who manage the local portfolio-driven 8(a) business development program report to OFO, which oversees the field offices. Prime contracts and subcontracting assistance programs. At the headquarters level, the Office of Government Contracting within OGCBD manages SBA’s prime contracts, subcontracting assistance, certificate of competency, and size determination programs, which are implemented by staff who report to six area offices across the country. The Office of Government Contracting oversees the implementation of these programs by area offices and monitors the performance of and develops training for the staff who implement these programs. In the field, staff known as procurement center representatives (PCR) implement SBA’s prime contracts program, and these staff report through area offices to OGCBD. 
As noted in SBA’s standard operating procedure (SOP) for the prime contracts program, PCRs work to help ensure that small businesses have a fair and equitable opportunity to compete for federal procurement opportunities and that a fair proportion of the total sales of federal government property is made to small business concerns. PCRs recommend the set-aside of selected acquisitions, recommend new qualified small business sources, appeal contracting officers’ decisions that they deem adverse to small business, and advise large business concerns to facilitate maximum practicable subcontracting opportunities for the small business community. Staff known as commercial market representatives (CMR) implement SBA’s subcontracting assistance program; CMRs also report through area offices to OGCBD. CMRs, among other things, work to match large prime contractors with small business concerns, counsel large prime contractors on their responsibilities to maximize subcontracting opportunities for small business concerns, and counsel small business concerns on how to market themselves to large prime contractors. Staff known as Industrial Specialists manage certificate of competency and size determination cases. Industrial Specialists working certificate of competency cases analyze the responsibility and capability of small businesses that have been tentatively selected for a contract, to help ensure that any of the contracting officer’s concerns about a firm’s ability to perform successfully can be overcome. Industrial Specialists working size determination cases analyze protests of awards when there is a question as to whether the recipient is in fact a small business. Both of these decisional responsibilities affect the awarding of contracts to individual small businesses. 8(a) Business Development program.
At the headquarters level, the Office of Business Development within OGCBD is responsible for administering services available through the 8(a) Business Development program by issuing program policy and plans, evaluating program implementation, and rendering final decisions on program eligibility, among other responsibilities. The Office of Business Development comprises three departments, which collectively support the 8(a) program. The Office of Certification and Eligibility (OCE) has staff at both headquarters and two field offices who perform similar activities. OCE staff process initial certifications of eligibility for the 8(a) program and conduct continuing eligibility reviews for firms deemed to be high risk or complex, among other duties. OCE staff in field offices report to OGCBD via OCE. The Office of Management and Technical Assistance administers most services provided to 8(a) participants that are not provided by the district offices, such as administering the 8(a) Mentor-Protégé program; servicing sole-source, competitive, and multiple-award contracts; analyzing and processing termination waivers; reaching out to prime contractors, federal agencies, and the 8(a) business development community; and overseeing the execution of national and local seminars and conferences. The Office of Program Review supports headquarters and field office staff administering the program by evaluating and responding to external reviews, creating marketing products for the 8(a) program, and preparing the annual report to Congress on program participation and contracting, among other things. At the local level, about 160 district office staff members known as Business Opportunity Specialists support the 8(a) program by interacting directly with small businesses.
Business Opportunity Specialists are responsible for implementing the 8(a) program within the geographical area serviced by their district office, and each specialist has a portfolio of firms that they are responsible for supporting throughout the firms’ participation in the 8(a) program. Their activities include assisting firms as they prepare to apply to the program, hosting webinars about SBA’s government contracting and business development programs, and conducting training for firms on how to strengthen elements necessary for participation in these programs, such as creating a strong business plan. Business Opportunity Specialists are also responsible for conducting annual reviews, which assess a firm’s progress in the 8(a) program. Further, they conduct continuing eligibility reviews, which help ensure that firms are still eligible to participate in the program after initial certification. In contrast with the field staff in area offices who implement SBA’s government contracting programs and report to OGCBD headquarters, Business Opportunity Specialists in district offices report to OFO via the district director, and their caseloads are determined by OFO. District directors manage the district offices and prepare a comprehensive District Office Strategic Plan outlining the methodology for achieving or exceeding district goals by fiscal year end. The plan is specific to the district’s economic climate and encompasses goals related to OGCBD programs. Business Opportunity Specialists are responsible for executing the goals of their district office’s plan that are specific to their position. In addition to supporting OGCBD programs, Business Opportunity Specialists support other SBA programs and assist with district office administration and local market initiatives. According to agency officials, the time specialists spend working on their 8(a) portfolios ranges from 55 percent to 100 percent.
As a result, specialists who do not support 8(a) full time may also support other OGCBD programs, as discussed in the following sections, and assist with other district office activities, such as developing a marketing and outreach plan specific to their district office. HUBZone program. At the headquarters level, the Office of HUBZone within OGCBD administers the HUBZone program by certifying businesses as eligible to receive HUBZone contracts, maintaining a list of qualified HUBZone small businesses that federal agencies can use to locate vendors, adjudicating protests of HUBZone eligibility, decertifying firms that no longer meet eligibility requirements, and conducting marketing outreach and training. In the field, each district office has a HUBZone liaison who serves as the program expert at the local level. Because Business Opportunity Specialists are responsible for marketing OGCBD’s other programs in addition to their 8(a) duties, some of them also work with or serve as the HUBZone liaison to help ensure that the HUBZone program is implemented according to internal operating procedures and statute and help ensure that relevant HUBZone program goals and objectives are accomplished. The HUBZone liaison is also responsible for completing site visits or program examinations for firms, and conducting program marketing outreach to and training for state and local acquisition, economic development, and small business communities. WOSB and SDVOSB programs. At the headquarters level, the Office of Government Contracting within OGCBD publishes regulations for the WOSB program, conducts eligibility examinations of businesses that have received contracts, decides protests related to eligibility for a WOSB contract, conducts studies to determine eligible industries, and works with other federal agencies in assisting participating firms. 
The Office of Government Contracting also conducts SDVOSB eligibility protest reviews to help ensure that only eligible SDVOSBs receive contracts set aside for this group. The Office of Policy, Planning, and Liaison issues regulations for the SDVOSB program and reports progress on the program’s set-aside goals. Because both programs currently function as self-certifying programs, in which firms attest to their own eligibility to participate or obtain third-party certification, OGCBD does not make any determinations regarding firms’ eligibility before firms receive contract awards. In the field, Business Opportunity Specialists’ responsibilities for these programs are largely limited to marketing the programs to the community and working with local resource partners, such as women’s business centers and veterans business centers, to educate firms and contractors about the programs. Figure 1 illustrates the lines of reporting for field staff who implement SBA’s government contracting and business development programs. SBA officials we spoke to in OFO and OGCBD described benefits of the current field-office and reporting structure. For example, they told us that the current field-office structure provides a national presence that allows firms to engage with staff in district offices across the country. As previously mentioned, at least one district office is located in each state, with multiple offices in some states. In addition, OFO officials said the reporting structure, in which Business Opportunity Specialists who implement the 8(a) program report to OFO rather than to OGCBD, allows staff to also support the goals of their district office, which may require them to support local market duties and other SBA programs in addition to supporting the firms in their 8(a) portfolio.
Finally, SBA officials stated that the current structure helps to ensure that staff know their local market and can be responsive to local market needs as determined by their district director. However, OGCBD officials told us that the current reporting structure can result in inconsistent program delivery for business development programs. They described efforts taken recently to improve program delivery by improving OGCBD’s communication with OFO and field staff, including the following:

- Weekly management calls now occur between headquarters-level staff from OGCBD and OFO. These calls mostly address policy changes or changes to OGCBD’s standard operating procedures.
- Monthly conference calls including Business Opportunity Specialists and OGCBD management have been instituted to address any updates to the program.
- Monthly refresher calls sponsored by the Office of Business Development have been implemented to provide refresher training as well as an opportunity to discuss program concerns or suggestions.
- Monthly HUBZone calls occur to monitor site visits and discuss complex fact patterns that may arise in connection with eligibility compliance.
- Business Opportunity Specialists were invited to attend a Department of Defense government contracting training session alongside OGCBD staff.

However, information from SBA’s 2015 and 2016 Field Accountability Reviews indicates that communication issues may be ongoing. For example, one deputy district director said that conference calls are confusing, lack consistency, and do not provide up-to-date process changes. This deputy district director also noted that the calls did not cover all OGCBD programs and said that district field office staff were unaware of changes to the WOSB and SDVOSB programs. Another district director said that communication breakdowns can occur when program offices schedule webinars, conference calls, and training activities that conflict with one another.
Because the communications changes were implemented recently, it may be too soon to tell whether they are having the intended effect. In September 2015, we issued a report that was based on a broad review of management challenges at SBA, including OGCBD. In this 2015 report, we found that working relationships between headquarters and field offices that differ from reporting relationships can pose programmatic challenges. At that time, SBA told us it had committed to assessing its organizational structure but had not yet completed those efforts. We recommended that SBA document its assessment of the agency’s organizational structure, including any necessary changes, to better ensure, for example, that areas of authority, responsibility, and lines of reporting are clear and defined. As of May 2017, SBA had not provided documentation of such an assessment or of its decision making about the need for changes to its organizational structure. We maintain that such an assessment is needed to help ensure that SBA’s structure supports its mission efficiently and effectively. Over the past decade, we and SBA’s Office of Inspector General (OIG) have identified a number of weaknesses in the processes SBA uses to certify and recertify businesses as eligible to participate in its HUBZone, 8(a), and WOSB programs and have made recommendations to SBA to address them. SBA has addressed a number of these recommendations; however, some remain outstanding. SBA has made some improvements to address problems we identified with the HUBZone program’s certification and recertification processes. For example, in June 2008 we reported that, for its HUBZone certification process, SBA relied on data that firms entered in the online application system and performed limited verification of the self-reported information. Although agency staff had the discretion to request additional supporting documentation, SBA did not have specific guidance or criteria for such requests.
Consequently, we recommended that SBA develop and implement guidance to more routinely and consistently obtain supporting documentation upon application. In response, SBA revised its certification process; since 2009 it has required firms to provide documentation and has performed a full-document review of all applications as part of its initial certification process to determine firms’ eligibility for the HUBZone program. We have closed this recommendation as implemented. We have also identified a number of concerns with SBA’s HUBZone recertification process. For example, in February 2015 we reported that SBA relied on firms’ attestations of continued eligibility and generally did not request supporting documentation as part of the recertification process. SBA only required firms to submit a notarized recertification form stating that their eligibility information was accurate. SBA officials did not believe they needed to request supporting documentation from recertifying firms because all firms in the program had undergone a full-document review, either at initial application or during SBA’s review of its legacy portfolio in fiscal years 2010–2012. However, as we found, the characteristics of firms and the status of HUBZone areas—the bases for program eligibility—often can change and need to be monitored. As a result, we concluded that SBA lacked reasonable assurance that only qualified firms were allowed to continue in the HUBZone program and receive preferential contracting treatment. We recommended that SBA reassess the recertification process and implement additional controls, such as developing criteria and guidance on using a risk-based approach to requesting and verifying firm information.
In following up on this recommendation for our March 2016 report on opportunities to improve HUBZone oversight, we found that SBA had not yet implemented guidance on when to request supporting documents for the recertification process because SBA officials believed that any potential risk of fraud would be mitigated by site visits to firms. According to data that SBA provided, the agency visited a fraction of certified firms each year during fiscal years 2013 through 2015. SBA’s reliance on site visits alone did not mitigate the recertification weaknesses that were the basis for our recommendation. The officials also cited resource limitations. In recognition of SBA’s resource constraints, we reiterated in our March 2016 report that SBA could apply a risk-based approach to its recertification process to review and verify information from the firms that appear to pose the most risk to the program. In addition, as of February 2017, SBA officials told us that the agency had begun implementing a technology-based solution to address some of the ongoing challenges with the recertification process. The officials expected that the new solution would help them better assess firms and implement risk-based controls by the end of calendar year 2017. As of May 2017, this recommendation remains open. We also found in June 2008 and again in February 2015 that the recertification process was backlogged—that is, firms were not being recertified within the required 3-year time frame. In 2015, we reported that as of September 2014, SBA was recertifying firms that had first been certified 4 years previously. While SBA initially eliminated the backlog following our 2008 report, according to SBA officials the backlog recurred because of limitations with the program’s computer system and resource constraints. Consequently, in February 2015 we again recommended that SBA take steps to ensure that significant backlogs would not recur.
In response to the recommendation, SBA made some changes to its recertification process. For example, instead of manually identifying firms for recertification twice a year, SBA automated the notification process, enabling notices to be sent daily to firms, which then respond and attest that they continue to meet the program’s eligibility requirements. According to SBA officials, as of February 2017 this change had not yet eliminated the backlog. SBA has made improvements to address problems we identified with the 8(a) program’s process to help ensure firms’ continuing eligibility. In a March 2010 report, we made six recommendations to improve SBA’s monitoring of and procedures used in assessing the continuing eligibility of firms to participate in and benefit from the 8(a) program. SBA has taken steps to address the six recommendations, and we have closed all six as implemented. For example, we recommended that SBA monitor and provide additional guidance and training to district offices on the procedures used to determine continuing eligibility. In response to this recommendation, SBA issued revised regulations that provided additional 8(a) program eligibility requirements and criteria related to size standards, indicators of economic disadvantage, and other thresholds businesses must meet to maintain eligibility. In addition, SBA indicated that under its Field Accountability Review program it conducts oversight of SBA district offices using audit-like steps to measure performance and compliance regarding federal statutory mandates, regulations, and SBA policy and procedures. According to SBA, one of the areas covered by the Field Accountability Review on-site visits is the 8(a) annual compliance reviews of participating firms. In April 2016, SBA’s OIG reported that SBA had failed to properly document that 8(a) firms admitted into the program met all eligibility criteria. 
SBA’s OIG evaluated SBA’s eligibility determination process for admitting 48 applicants into the 8(a) program between January 1, 2015, and May 31, 2015, and found that 30 of the participants did not meet all of the eligibility criteria. SBA’s OIG found that SBA managers had overturned lower-level reviewers’ recommendations for denial without fully documenting how all of the identified areas of eligibility concern were resolved. SBA’s OIG recommended that SBA (1) clearly document its justification for approving or denying applicants into the 8(a) program, particularly when those decisions differed from lower-level recommendations, and (2) provide documentation showing how eligibility concerns raised by lower-level reviewers were resolved for the 30 firms not documented. In response to the first recommendation, SBA noted in a written response to us that it had established a practice of noting a statement of difference in cases where decisions differed; however, SBA’s OIG had yet to close this recommendation as of May 2017. According to SBA’s OIG, the recommendation will remain open until the practice is documented in an SOP or desk guide for the program. In response to the second recommendation, SBA’s OIG noted that SBA had provided documentation showing how the eligibility concerns were resolved for the 30 firms, and this recommendation was closed as implemented. SBA OIG officials told us that they plan to issue a report in June 2017 summarizing their analysis of the documentation SBA provided. SBA considers WOSB a self-certification program because firms self-certify their eligibility to participate by uploading documentation into an online repository or seeking approval from a third-party certifier. 
In October 2014, we found that SBA performed minimal oversight of third-party certifiers for the WOSB program and had not developed procedures that provide reasonable assurance that only eligible businesses obtain WOSB set-aside contracts. As a result, we found that SBA could not provide reasonable assurance that certifiers fulfill the requirements of their role or that firms attesting to their eligibility for the program are actually eligible. We made two recommendations in this report: SBA should establish and implement comprehensive procedures to monitor and assess the performance of certifiers in accordance with the requirements of the third-party certifier agreement and program regulations; and SBA should enhance its examination of businesses that register to participate in the WOSB program, including actions such as developing and implementing procedures to conduct annual eligibility examinations, analyzing examination results and individual businesses found to be ineligible to better understand the causes of the high rate of ineligibility in annual reviews, and implementing ongoing reviews of a sample of all businesses that have represented their eligibility to participate in the program. In response to our recommendations, SBA has taken some actions. For example, SBA created an SOP stating that third-party certifiers are subject to a compliance review by SBA at any time, and SBA has completed a review of the four authorized third-party certifiers. We continue to monitor SBA’s actions to address our recommendations. SBA’s OIG has also identified weaknesses in the WOSB program. In May 2015, SBA’s OIG reported that contract awards were made to potentially ineligible firms based on documentation in the WOSB online repository. SBA’s OIG reviewed 34 contract awards and found that 9 did not have documentation in the repository. 
In addition, SBA’s OIG found that of the 25 awards that did have some documentation in the repository, a number did not include all of the required documentation or sufficient documentation to prove that the firm was controlled by women. SBA’s OIG recommended that SBA perform eligibility examinations on the firms identified in the report as potentially ineligible. According to SBA OIG officials, SBA completed the eligibility examinations on the firms identified as potentially ineligible and determined that 40 percent of these firms were not eligible to receive contracts under the WOSB program at the time of award. According to the SBA OIG, all recommendations from this report were closed as implemented. The National Defense Authorization Act (NDAA) for Fiscal Year 2015 eliminated the self-certification process for the WOSB program and required SBA to give more authority to contracting officers to award sole-source contracts—that is, contracts that do not require competition. SBA completed a rule-making process to allow the program to award sole-source contracts. Although SBA has issued an advance notice of proposed rule making for the certification program, it had not implemented a process to eliminate self-certification as of May 2017. As a result of inadequate monitoring and controls, such as not implementing a full certification program, potentially ineligible businesses may continue to incorrectly certify themselves as WOSBs, increasing the risk that they may receive contracts for which they are not eligible. Even with this change in the NDAA, we maintain that our recommendations related to strengthening oversight of third-party certifiers and enhancing examinations of WOSB firms are needed to help ensure that only eligible businesses participate in the WOSB program. The timeliness of SBA’s rule-making process can vary due to the legal requirements that govern this process, among other factors. 
While agencies must adhere to the federal laws and executive actions that govern the federal rule-making process, each agency also has its own guidance and process for rule making. SBA relies on two SOP documents that outline procedures and responsibilities for rule making at the agency. One SOP on Federal Register documents identifies the procedures and responsibilities for obtaining internal clearance (the agreement of various offices within SBA and ultimately the signature of the Administrator) before publishing documents to the Federal Register, and includes details on how to format Federal Register proposed and final rules and the offices involved in reviewing documents. This SOP does not include specific information on required timelines for this process. The other SOP on SBA’s Office of Executive Secretariat includes, in part, additional information on clearance procedures before documents can be published in the Federal Register. This SOP includes limited information on internal deadlines, including that documents must be cleared by this office within 15 days of being initiated in SBA’s internal tracking system, with any documents needing re-clearance requiring an additional 5 days. SBA’s Office of Policy, Planning and Liaison (OPPL) works with SBA’s Office of General Counsel (OGC) and other internal subject-matter experts to draft and promulgate rules. OPPL has one director and two other staff members dedicated to rule making, and one of the two staff members is solely responsible for working with OMB’s FAR Council. SBA officials described their rule-making process as follows: OPPL relies on staff from OGC to draft the rule and then prepares the rule for OGCBD clearance and ultimately for agency-wide clearance by the Administrator. After receiving comments from SBA’s Office of Advocacy, SBA’s OIG, and other offices, OPPL prepares a memorandum to the Administrator for the Administrator’s review and clearance. 
Then, if the rule is determined by OIRA to be a significant regulatory action as defined by Executive Order 12866, the rule must go to OMB for an interagency review process managed by OMB, in which other federal agencies can provide comments and questions on SBA’s rule. This interagency review period requires 90 days, but the actual amount of time for this review varies. Sometimes the rule may be sent back to SBA where the process starts over again. According to SBA officials, if other agencies have no comments, the interagency review period can take 4 to 5 months. After the proposed rule passes interagency review, it goes back through OGCBD clearance and agency-wide clearance by the Administrator before being added to the Federal Register for public comment. OGC summarizes the public comments and drafts the final rule, and the final rule goes back to OGCBD and then the SBA Administrator for review before it is again sent to OMB for an additional review process. After these reviews are completed, the rule is then published in the Federal Register. Rules that only apply to SBA (and that do not need to go to OMB to amend the FAR) have an effective date 30 days after issuance. For rules that amend the FAR, a statement is drafted by OPPL and the FAR team drafts proposed and final rules. It takes the FAR Council at least a year between proposed and final rule to complete a FAR amendment, according to SBA staff. For SBA and the federal government more broadly, certain stages of the rule-making process have mandated time periods, as shown in figure 2. For example, the public comment period recommended by Executive Order 12866 is 60 days. In addition, the interagency comment period managed by OMB requires 90 days, and this review can occur prior to the publishing of both the proposed rule and the final rule. Other stages have no time requirements but also add to the overall length of the process, such as the time required to research, analyze, and draft a proposed rule. 
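Taken together, the mandated periods just described imply a floor on elapsed rule-making time even before any research, drafting, or analysis. A minimal sketch of that arithmetic, assuming the 90-day interagency review runs in full before both the proposed and the final rule (the variable names are ours, not SBA's or OMB's):

```python
# Mandated or recommended minimum periods, in days:
INTERAGENCY_REVIEW = 90  # OMB-managed review; can occur before both proposed and final rules
PUBLIC_COMMENT = 60      # comment period recommended by Executive Order 12866

# Floor on elapsed time if both interagency reviews run in full
# and every other stage took no time at all:
floor_days = INTERAGENCY_REVIEW + PUBLIC_COMMENT + INTERAGENCY_REVIEW
print(floor_days)  # 240
```

In practice, as the report notes, each stage can run longer than its minimum, and the unmandated stages (research, drafting, clearance) add further time.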
Timelines for promulgating rules varied across four finalized SBA rules we reviewed. We selected four statutory provisions requiring SBA to promulgate rules from the NDAAs for fiscal years 2013, 2014, 2015, and 2016 (out of a possible 47 provisions requiring rule making) for review to better understand SBA’s rule-making process. All four of these rules have been finalized by SBA. Table 1 identifies the four rules we reviewed and some basic time frames for each rule. For these four rules, the rule-making process resulted in longer time frames than the required minimums along several metrics.

Interagency review. Three of the four rules we reviewed were identified as requiring the OMB interagency review process, which lasted longer than 90 days in some cases. Of these three rules, one (the Lower Tier Subcontracting rule) was under review with OMB for less than the required 90-day review period (86 days); the other two required 107 and 156 days for this review process. Further, these three rules also underwent an additional interagency review process after SBA had obtained public comments, which required an additional 68 to 75 days each.

Public comment period. In addition, SBA officials noted that the public comment period varied in some cases, with extended comment periods being added as necessary. For example, for the All Small Mentor-Protégé rule, SBA initially provided a 60-day comment period but extended it by an additional 30 days in response to public request. Likewise, for the Limitations on Subcontracting rule, SBA reopened the initial 60-day comment period for an additional 30 days starting about a week after the initial comment period ended.

Statutory deadlines. Finally, three of the four rules arose from statutes that required the final rule to be issued within a set period of time. 
Both the All Small Mentor-Protégé rule and the Advisory Size Decisions rule were required to be issued as final within 270 days of the law’s enactment, while the Lower Tier Subcontracting rule required a final rule to be issued within 18 months of the date of enactment. The All Small Mentor-Protégé rule took 1,300 days from the date of the law’s enactment to the issuance of the final rule, which was 1,030 days past the statutory deadline. The Advisory Size Decisions rule took 770 days, which was 500 days past the statutory deadline. The Lower Tier Subcontracting rule took almost exactly 18 months longer than the statutory deadline. SBA officials noted some factors that may have contributed to certain rules taking longer than anticipated in recent years. They explained that the volume of rule making required of SBA has increased. Agency officials said they were used to receiving such legislation in stand-alone bills every couple of years until Congress began including rule-making requirements for SBA in NDAAs in fiscal year 2013. In written responses to our questions on the four selected rules, SBA officials noted that the three rules required by the NDAA for Fiscal Year 2013 came at a time when they were busy completing rules required by the Small Business Jobs Act of 2010, thereby delaying the start of their work on the new set of rules. In congressional testimony in February 2016, the Associate Administrator of OGCBD stated that SBA had implemented over 25 provisions from the Small Business Jobs Act of 2010 and was making progress on the remaining provisions. OMB officials stated that the timeliness of SBA’s rule makings is not unusual and has not raised any concerns. Additionally, SBA officials noted that some of the rules contained other statutory requirements that required additional work. 
Specifically, the All Small Mentor-Protégé program and the Limitations on Subcontracting rules included changes that affected Indian tribes, Alaska Native corporations, and Native Hawaiian organizations, which required SBA to consult with these groups in accordance with Executive Order 13175. Also, the Lower Tier Subcontracting rule required SBA, the General Services Administration, and the Department of Defense to submit a plan for implementing the rule to both House and Senate committees; the agencies were required to complete planned actions within 1 year after enactment, and SBA was required to issue any regulations necessary, including the completion of a FAR amendment, within 18 months after enactment. However, SBA officials said that the FAR Council generally will not open a FAR case until SBA has issued a final rule, making this statutory deadline impracticable to meet. SBA officials also cited some delays in rule making as a result of the recent presidential administration transition. Generalizing about time frames in the rule-making process is difficult because the process varies from rule to rule. In an April 2009 report on the effect of procedural and analytical requirements on federal agencies’ rule-making processes, we found variation in the length of time required for the development and issuance of final rules, both within and across agencies. We identified several factors that contributed to this variation, including the complexity of the issues addressed, priorities set by agency management that can change, and the amount of internal and external review required. Additionally, SBA officials noted that some rules receive many more comments than others, which can add significantly to the timeline. Various approaches exist for measuring the length of time required to develop and issue final rules, but they have limitations.

Initiation to final publication. 
The most complete measure of the length of time for a rule making is the period from initiation of the rule to final publication, but this approach is limited by disagreement as to when a rule-making process begins. According to our prior work, while agency officials generally agreed that the publication of a final rule marked the end of the rule-making process, identifying when a rule making begins is less definite. Specifically, while each agency identifies milestones that mark the initiation of a rule making, these milestones may not factor in the time spent researching the rule making or developing policy for the rule, as well as time spent researching related rule makings.

Publication in Federal Register to final publication. Another approach to measuring the time required for a rule making is to use two rule-making milestones common among federal agencies: (1) publication of a proposed rule in the Federal Register and (2) publication of a final rule. However, this measurement is incomplete, as it ignores the potentially substantial length of time necessary for regulatory development, according to our April 2009 report. In that report, our case study of 16 rules suggested that this time frame ranged from approximately 6 months to 5 years, while the total rule-making time for the two rules on either end of that range was slightly over 1 year and 13 years, respectively. For our current review of four selected SBA rules, the time between the publication of proposed and final rules ranged from 7.5 months to 17.5 months.

Mandated timeline. Finally, another approach is to evaluate each rule against its mandated timeline, although not all rules have one. However, the various factors that can affect rule-making timeliness can limit the meaningfulness of this analysis. 
For example, although our April 2009 report found that rules that are a management priority or that have a statutory or judicial deadline may move more quickly through the rule-making process while other rules are set aside, this analysis must factor in the overall volume of required rule makings and the relative priorities and rule-making caseload for the agency. We are not making new recommendations in this report and maintain that SBA should implement our prior reports’ recommendations. We provided a draft of this report to SBA for review and comment. The agency provided technical comments that we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Administrator of the Small Business Administration, and the Director of the Office of Management and Budget. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. This report examined (1) the field-office and reporting structure the Small Business Administration (SBA) uses to implement government contracting and business development and the benefits and challenges posed by these structures; (2) progress SBA has made to strengthen its certification processes; and (3) the timeliness of SBA’s rule-making process. To examine SBA’s field-office and reporting structure for implementing its government contracting and business development programs, we reviewed SBA documentation on its organizational structure. In addition, we obtained and reviewed a March 2015 study on SBA’s organizational structure conducted by a third-party consultant. 
We also reviewed academic literature on organizational theory to provide context for understanding SBA’s organizational structure and leading practices for implementing changes to organizational structure. We conducted this literature search on organizational structure and theory and reviewed the articles to determine the extent to which they were relevant to our engagement and appropriate as evidence for our purposes. We also observed two online webinars hosted by SBA on government contracting for small businesses to better understand SBA’s communications to firms about its government contracting services. Further, we reviewed prior GAO and SBA Office of Inspector General (OIG) reports from 2008 through 2016 for findings related to SBA’s organizational structure and the benefits and challenges posed by its current structure. In addition, we interviewed SBA staff from the following headquarters offices: Office of Government Contracting and Business Development, Office of Certification Eligibility, Office of Field Operations, Office of the Chief Human Capital Officer, and Office of Policy, Planning and Liaison, as well as the Administrator’s Chief of Staff. We interviewed SBA staff to obtain their perspectives on SBA’s current organizational structure with respect to government contracting and business development programming. To examine the progress SBA has made to strengthen its processes for certifying small businesses as eligible to participate in its programs, we reviewed relevant laws, regulations, and agency guidance. We specifically examined SBA’s certification processes for its 8(a) Business Development and Historically Underutilized Business Zone (HUBZone) programs, as well as its self-certification processes for the Women-Owned Small Business (WOSB) and Service-Disabled Veteran-Owned Small Business (SDVOSB) programs. 
We also interviewed SBA headquarters staff to understand these different certification processes and to obtain their perspective on the progress that has been made to strengthen these processes. In addition, we reviewed prior GAO and SBA OIG work related to SBA’s certification processes to identify progress that has been made as well as opportunities to further strengthen these processes. See appendix II for more information on the status of selected prior GAO recommendations to SBA. To examine the timeliness of SBA’s rule making, we reviewed relevant laws, regulations, executive actions, and SBA guidance. We also reviewed four statutorily mandated SBA rules, selected from a possible 47 provisions in the National Defense Authorization Acts of fiscal years 2013, 2014, 2015, and 2016 that potentially required SBA to draft and implement rules. We selected these rules as examples of mandatory rule making. We reviewed the public documentation for each rule, including any proposed or final rules, as well as the timelines associated with each rule. We also interviewed SBA staff and staff from the Federal Acquisition Regulatory (FAR) Council within the Office of Management and Budget (OMB) to understand SBA’s regulatory drafting process, the Federal Acquisition Regulation (FAR) process, and the coordination between SBA and the FAR Council. Finally, we reviewed prior GAO reports on rule making to understand the federal rule-making process and factors affecting the timeliness of agency rule making. We conducted this performance audit from August 2016 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
The following table summarizes the status of our recommendations from HUBZone, 8(a), and WOSB performance audits and investigations as of May 2017. We classify each recommendation as open (the agency has not yet taken steps to implement the recommendation); closed, implemented (the agency has taken steps to implement the recommendation); or closed, not implemented (the agency decided not to take action to implement the recommendation). The recommendations are listed by report. In addition to the individual named above, Marshall Hamlett (Assistant Director), Nathan Gottfried (Analyst in Charge), JoAnna Berry, Tim Bober, Farrah Graham, Jennifer Kamara, Jessica Sandler, Jennifer Schwartz, and Jena Sinkfield made major contributions to this report.
SBA's OGCBD administers a business development program and further promotes small business participation in federal contracting through a variety of other programs. A House Committee Report accompanying the National Defense Authorization Act for Fiscal Year 2017 included a provision for GAO to examine the operations of SBA's OGCBD. GAO examined (1) the field-office and reporting structure OGCBD uses to implement its government contracting and business development programs, (2) progress OGCBD has made to strengthen its processes for certifying small businesses as eligible to participate in its programs, and (3) the timeliness of SBA's rule-making process. GAO reviewed documentation related to SBA's organizational structure and certification processes; relevant laws and regulations; SBA program guidance; and previous GAO reports. GAO interviewed SBA and OMB officials. GAO reviewed four statutorily mandated SBA rules, which were selected from 47 provisions in the National Defense Authorization Acts for fiscal years 2013, 2014, 2015, and 2016 as examples of mandatory rule making. GAO makes no new recommendations in this report, and maintains that SBA should implement prior recommendations. SBA's technical comments on GAO's draft report are incorporated as appropriate. The Office of Government Contracting and Business Development (OGCBD) at Small Business Administration (SBA) headquarters sets policies for SBA's business development and government contracting programs, and SBA field office staff help to implement these programs at the local level. The reporting relationships between field staff and SBA headquarters vary depending on the program. For example, field staff who implement government contracting programs report to OGCBD, while most staff who implement the 8(a) business development program report to the Office of Field Operations (OFO), which oversees SBA's field offices. 
SBA officials told GAO that this reporting structure, in which some field staff implement OGCBD programs but report to OFO, offers some benefits—for example, it allows these staff to support the goals of OGCBD programs as well as those of the individual field offices. However, officials also said the reporting structure can result in inconsistent program delivery. They described recent steps to improve communication between OGCBD and field staff, but it is too soon to tell if these steps will be effective. SBA has taken some steps to address weaknesses GAO and the SBA Office of Inspector General (OIG) have identified in its processes for certifying small businesses as eligible to participate in SBA programs, but some recommendations remain open. For example, GAO found in 2015 that SBA had not required firms seeking recertification for the Historically Underutilized Business Zone (HUBZone) program to submit any information to verify continued eligibility and instead relied on firms' attestations of continued eligibility. GAO recommended that SBA reassess the HUBZone recertification process and implement additional controls; SBA had not yet implemented this recommendation as of May 2017. SBA's OIG also found in 2016 that SBA managers overturned lower-level reviewers' decisions to deny firms admission to the 8(a) program without documenting in the information system how eligibility concerns were resolved. SBA's OIG recommended that SBA clearly document the justification for approving or denying firms. In response, SBA stated that managers are now required to document decisions in the system that differ from those of lower-level reviewers. A number of legal requirements and the volume of required rule makings, among other factors, affect the timeliness of SBA's rule-making process. Certain stages of the rule-making process have mandated time periods, such as the required 90-day interagency review process for certain rules. 
Various approaches exist for measuring the length of time required to develop and issue final rules, but they have limitations. For example, in measuring the period from rule initiation to final publication, agencies may differ on when they mark initiation. For four finalized SBA rules GAO reviewed, the time from publication of the proposed rule to publication of the final rule varied from 7.5 months to 17.5 months. SBA officials noted that an increase in the number of statutorily mandated rules in recent years has contributed to delays in the agency's ability to promulgate rules in a more timely fashion. Office of Management and Budget (OMB) officials GAO spoke with stated that the length of time for SBA's rule makings is not unusual and has not raised any concerns.
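The proposed-to-final interval cited above can be computed directly from Federal Register publication dates. A minimal sketch, using hypothetical dates for illustration (these are not the dates of any of the four SBA rules, and the 30.44-day average month length is our approximation):

```python
from datetime import date

def months_between(proposed: date, final: date) -> float:
    """Approximate months between proposed- and final-rule publication dates."""
    return (final - proposed).days / 30.44  # average month length in days

# Hypothetical publication dates, for illustration only:
elapsed = months_between(date(2015, 2, 5), date(2016, 5, 31))
print(round(elapsed, 1))  # about 15.8 months
```

This captures only the span between the two Federal Register milestones; as noted above, it omits the often-substantial regulatory development time before the proposed rule.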
With the National and Community Service Trust Act of 1993 (P.L. 103-82), the Congress created the largest national and community service program since the Civilian Conservation Corps of the 1930s. AmeriCorps*USA allows participants to earn education awards to help pay for postsecondary education in exchange for performing community service that matches priorities established by the Corporation. Participants earn an education award of $4,725 for full-time service or half of that amount for part-time service. A minimum of 1,700 hours of service within a year is required to earn the full $4,725 award. The Corporation requires that programs devote some portion, but no more than 20 percent, of participants’ service hours to nondirect service activities, such as training or studying for the equivalent of a high school diploma. To earn a part-time award, a participant must perform 900 hours of community service within 2 years (or within 3 years in the case of participants who are full-time college students). Individuals can serve more than two terms; however, they can only receive two education awards. The awards, which are held in trust by the U.S. Treasury, are paid directly to qualified postsecondary institutions or student loan lenders and must be used within 7 years after service is completed. In addition to the education award, AmeriCorps*USA participants receive a living allowance stipend that is at least equal to, but no more than double, the average annual living allowance received by Volunteers in Service to America (VISTA) participants—about $7,640 for full-time participants in fiscal year 1994. Additional benefits include health insurance and child care assistance for participants who need them. Individuals can join a national service program before, during, or after postsecondary education. A participant must be a citizen, a national, or a lawful permanent resident of the United States. 
A participant must also be a high school graduate, agree to earn the equivalent of a high school diploma before receiving an education award, or be granted a waiver by the program. Selection of participants is not based on financial need. The Corporation used about $149 million of its fiscal year 1994 appropriations to make about 300 grants to nonprofit organizations and federal, state, and local government agencies to operate AmeriCorps*USA programs. Grant recipients use grant funds to pay up to 85 percent of the cost of participants’ living allowances and benefits (up to 100 percent of child care expenses) and up to 75 percent of other program costs, including participant training, education, and uniforms; staff salaries, travel, transportation, supplies, and equipment; and program evaluation and administrative costs. Grants are based in part on the number of participants the program estimates it will enroll during the year. If participants leave the program during the year, the Corporation may either allow the program to redirect participant stipend and benefit funds to other program expenses or take back any unused portion of the grant. To ensure that federal Corporation dollars are used to leverage other resources for program support, grantees must also obtain support from non-Corporation sources to help pay for the program. This support, which can be cash or in-kind contributions, may come from other federal sources as well as state and local governments and private sources. In-kind contributions include personnel to manage AmeriCorps*USA programs as well as to supervise and train participants; office facilities and supplies; and materials and equipment needed in the course of conducting national service projects. Consistent with AmeriCorps’s enacting legislation, some federal agencies received grants during the initial 2 program years to support AmeriCorps*USA participants who performed work furthering the agencies’ missions. 
Federal agency grantees could use their own resources in addition to the Corporation grant to integrate national service more fully into their mission work.

The private sector provided the smallest share of resources, amounting to about 12 percent (or about $41 million). Most of the Corporation’s funding for AmeriCorps*USA projects went to providing operating grants and education awards. Of the Corporation’s funding, 61 percent financed operating grants. Slightly over one-quarter supported participants’ education awards, while the remainder went toward Corporation program management and administration.

Most of the matching contributions AmeriCorps*USA programs received came from public as opposed to private sources. About 69 percent of all matching resources came from either a federal or a state or local government source, with the split between cash and in-kind contributions being about 43 percent (about $57 million) and 26 percent (about $34 million), respectively. The remaining 31 percent of matching resources were from private sources, with cash and in-kind contributions accounting for 17 percent (about $23 million) and 14 percent (about $18 million), respectively.

In calculating resources available on a per-participant and per-service-hour basis (see table 1), we found that the average from all sources per AmeriCorps*USA participant was about $26,654 (excluding in-kind contributions from private sources). This amounted to about $16 per service hour or about $20 per direct service hour, assuming 20 percent of the 1,700 hours of total service was nondirect service time. These figures represent resources available for all program expenses and are not the equivalent of annual salaries or hourly wages for participants.

[Running header: National Service Programs: AmeriCorps*USA—First-Year Experience and Recent Program Initiatives, Corporation for National and Community Service]

We calculated available resources per participant on a full-time-equivalent (FTE) basis.
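The per-hour figures above reduce to simple division over the 1,700 required hours; the following is a minimal sketch of that arithmetic (the variable names are ours, and the figures are those reported above):

```python
# Per-participant resource arithmetic, using the figures reported above.
resources_per_fte = 26_654   # average resources per full-time participant ($)
total_hours = 1_700          # service hours required for the full award
nondirect_share = 0.20       # Corporation's cap on nondirect service hours

per_service_hour = resources_per_fte / total_hours
per_direct_hour = resources_per_fte / (total_hours * (1 - nondirect_share))

print(round(per_service_hour))  # about $16 per service hour
print(round(per_direct_hour))   # about $20 per direct service hour
```

As the text notes, these are resources available for all program expenses, not wages paid to participants.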
It is important not to equate our funding information with cost data. Because most AmeriCorps*USA programs were still implementing their first year of operations, actual costs could not be determined. Funding and in-kind contributions from sources other than the Corporation were reported to us in May 1995 as resources already received or those that program directors were certain of receiving by the end of their current operating year. Therefore, actual resource and expenditure levels could be higher or lower than indicated by the estimates reported to us.

Federal agency grantees had about $15,500 in cash and in-kind contributions available per participant from federal sources other than the Corporation. Non-Corporation federal funds accounted for about 50 percent of total resources available to federal grantees. Nonfederal AmeriCorps*USA grantees received resources of less than $800 per participant from non-Corporation federal sources, or about 3 percent of their total resources. The appendix contains more detailed program resource information by sponsoring agency.

In its mission statement, the Corporation had identified several objectives that spanned a wide range of accomplishments, from very tangible results to those much harder to quantify. During our site visits, we observed local programs helping communities. AmeriCorps*USA has also sponsored an evaluation of its own that summarized results at a sample of programs during their first 5 months of operation and identified diverse achievements related to each service area. These achievements included participants renovating inner-city housing, assisting teachers in elementary schools, maintaining and reestablishing native vegetation in a flood control area, analyzing neighborhood crime statistics to better target prevention measures, and developing a program in a community food bank for people with special dietary needs.
AmeriCorps’s legislation identified renewing the spirit of community as an objective, and the program’s mission includes “strengthening the ties that bind us together as a people.” We observed several projects focused on rebuilding communities. For example, a multifamily house being renovated was formerly a congregating spot for drug dealers. Program officials believe that, once completed, it will encourage other neighborhood improvements. Another team built a community farm market and renovated a municipal stadium, both of which a town official said will continue to provide economic and social benefits to the community.

Another way to meet this objective was to have participants with diverse backgrounds working together. Participants of several programs we visited spanned a wide age range, from teenagers to retirees. Teams also showed diversity in educational, economic, and ethnic backgrounds. Participants said that a valuable aspect of the program was working with others from different backgrounds and benefiting from their strengths.

Another of AmeriCorps*USA’s program objectives was to foster civic responsibility. We saw evidence of this at programs such as one where participants devoted half of each Friday to working on community service projects they devised and carried out independently. Participants at another program, in which they organized meetings to establish relationships between at-risk youth and elderly people, commented that this work had taught them how to organize programs, experience they believed would be helpful as they took on roles in their communities. Training periods included conflict resolution techniques and team-building skills.

A further program objective was to expand participants’ educational opportunities, whether through further education or job training. At the sites we visited, participants indicated that the education award was an important part of their decision to participate in AmeriCorps*USA. Programs also supported participants in obtaining high school diplomas or the equivalent.
According to Corporation regulations, a full-time participant who does not have a high school diploma or its equivalent generally must agree to earn one or the other before using the education award. In one program, a general equivalency diploma (GED) candidate was receiving classroom instruction and individual tutoring. She had recently passed the preliminary GED test after failing the GED test five times and, after doing some extra preparation for the math portion, planned to take the actual GED test again. A larger program that recruited at-risk youth, most of whom do not have high school diplomas, provided classroom instruction related to the service that participants performed, such as a construction-based math curriculum. Program officials said most of the participants are enrolled in high school equivalency courses and that at least five have already passed the GED test.

We also saw programs that offer participants the chance to earn postsecondary academic credit. One such program, affiliated with a private college, offered participants the option of pursuing an environmental studies curriculum through which they can earn up to six upper-level credits at a reduced tuition. Half of the participants have chosen to do so. A second program allowed participants to earn 36 credit hours toward an associate’s degree in the natural sciences through their service, which can lead to state certification as an environmental restoration technician.

Since we reported on the program last October, both the Congress and the Corporation have implemented measures aimed at lowering AmeriCorps’s cost. On the legislative side, the Congress mandated new funding restrictions for the Corporation. On the programmatic side, the Corporation, after consulting with Members of Congress, has revised its grant guidelines. These new measures affect only programs receiving grants for the upcoming 1996-97 program year. In the fiscal year 1996 appropriations act, the Congress reduced the program’s appropriations and made federal agencies ineligible to receive AmeriCorps grants.
The law also requires that, to the maximum extent possible, the Corporation (1) increase the amount of matching contributions provided by the private sector and (2) reduce the total federal cost per participant in AmeriCorps programs. As part of the fiscal year 1996 appropriations act, the Congress also mandated that GAO further study the Corporation’s operations. We expect to complete our study by the end of this fiscal year.

In recent months, the Corporation has worked with Members of Congress to identify ways to reduce AmeriCorps’s program costs. Subsequently, the Corporation has revised its grant application guidelines for programs receiving funding in the upcoming 1996-97 program year. For example, in response to congressional concerns over the cost of mandating the purchase and use of uniforms, the AmeriCorps*USA uniform package (t-shirt, sweatshirt, button, and so on) is no longer a program requirement. The Corporation also has directed grantees exceeding a program year 1995-96 cost per participant of $13,800 to reduce their proposed program year 1996-97 per-participant costs by an overall average of 10 percent. The Corporation has also increased the grantee’s share of total program operating costs from 25 to 33 percent for grants awarded for the 1996-97 program year. The Corporation’s revised grant guidelines also seek to reduce costs by encouraging a program requesting increased funding to add participants, thereby reducing its cost per participant. The guidelines also encourage programs to seek additional funding only for education awards.

In summary, our work provides information on the total resources available to help AmeriCorps*USA programs—in the Corporation’s words—“get things done.” Total resources available means many things. It means cash and in-kind contributions that pay participants’ living allowances, social security taxes, health insurance, child care, and the education awards they earn in exchange for their service.
It means resources available to pay local program staff who manage operations and supervise staff; to pay rent for office space and purchase supplies; to pay for travel and transportation for program staff and participants; and to pay for materials needed to conduct national service projects. It means resources available to pay for planning grants used to design and formalize future national service programs. And it means resources available to pay for the staff and operations of the Corporation for National and Community Service.

Our objective was not to draw conclusions about whether AmeriCorps*USA was cost-effective. Rather, it was to gather information on the total amount of resources available to AmeriCorps*USA programs nationwide and to provide this information by resource stream—that is, by federal, state, and local government and private sources. Though not precise cost data, this information illustrated the funding levels that may be needed to support new program endeavors of similar scale in the future. It also indicated the degree of partnership between the public and private sectors.

Since we completed our review, the Congress and the Corporation have undertaken a number of measures that are intended to reduce the costs of AmeriCorps. Because many of these initiatives will not take effect until the upcoming 1996-97 program year, it is too early to determine their impact.

Madam Chairman, that concludes my statement for the record. For more information about this testimony, please call Wayne B. Upshaw at (202) 512-7006 or Carol L. Patey at (617) 565-7575. Other major contributors to this testimony included C. Jeff Appel, Nancy K. Kintner-Meyer, and James W. Spaulding.

[Appendix table fragment: adjusted Corporation awards by sponsoring agency of $12,071,004, $31,881,332, $3,470,008, and $2,333,452; table notes omitted.]
GAO discussed the Corporation for National and Community Service's AmeriCorps*USA service program. GAO noted that: (1) for program year 1994 to 1995, the Corporation provided almost $149 million for grantee projects; (2) about 69 percent of matching project contributions came from public sources; (3) total resources available per participant, exclusive of private in-kind contributions, averaged $26,654, of which federal sources provided 74 percent, state and local governments 14 percent, and the private sector 12 percent; (4) cost data could not be determined because most AmeriCorps programs are too new; (5) total available resources for AmeriCorps*USA grantees averaged about $16 per service hour; (6) grantees' projects are designed to meet unmet human, educational, environmental, and public safety needs, strengthen communities, develop civic responsibility, and expand educational opportunities for program participants and others; and (7) to reduce government costs in the 1996-1997 program year, Congress has reduced program appropriations and prohibited federal agencies from receiving AmeriCorps grants, and the Corporation has required certain grantees to reduce proposed costs by 10 percent and all grantees to pay a higher share of program operating costs.
About 3.8 million borrowers took out mortgages in 1996 for purchasing homes, according to information collected through requirements contained in the Home Mortgage Disclosure Act (HMDA). While most of these mortgages were not insured, about 39 percent, or about 1.5 million, were. FHA’s share of the home purchase mortgage market was 16 percent in fiscal year 1996, the private mortgage insurers’ (PMIs) share was 17 percent, and the Department of Veterans Affairs’ (VA) share was 5 percent.

Lenders usually require mortgage insurance when a home buyer has a down payment of less than 20 percent of the value of the home. In these cases, the loan-to-value (LTV) ratio of the mortgage is higher than 80 percent. Most lenders require mortgage insurance for these loans because they are more likely to default than loans with lower LTV ratios. If a loan with mortgage insurance defaults, the lender may foreclose on the loan and collect all or a portion of the losses from the insurer.

Virtually all single-family mortgage insurance is provided by PMIs, FHA, and VA. In general, PMIs operate standard programs for typical borrowers and special affordable programs for qualified borrowers who have fewer down payment funds and need increased underwriting flexibility. FHA provides most of its single-family mortgage insurance through the Section 203(b) program. The Section 203(b) program has not required any federal funds to operate because FHA has collected enough revenue from insurance premiums and foreclosed property sales to cover claims and other expenses. FHA also operates some smaller, specialized single-family mortgage insurance programs. A primary goal of FHA’s single-family programs is to assist households that may be underserved by the private market. VA provides insurance through its Home Loan Guaranty Program to U.S. veterans and their families. FHA, VA, and PMIs provide lenders with guidelines for deciding whether or not a mortgage is eligible for mortgage insurance.
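The down-payment rule described above can be sketched as a simple check. This is a minimal illustration; the function names are ours, not any insurer's:

```python
def loan_to_value(loan_amount: float, home_value: float) -> float:
    """LTV ratio: the mortgage amount relative to the home's value."""
    return loan_amount / home_value

def insurance_usually_required(loan_amount: float, home_value: float) -> bool:
    """Lenders usually require mortgage insurance when the down payment is
    under 20 percent of the home's value, i.e., the LTV exceeds 80 percent."""
    return loan_to_value(loan_amount, home_value) > 0.80

# A buyer putting 10 percent down on a $100,000 home (90 percent LTV):
print(insurance_usually_required(90_000, 100_000))  # True
```

A buyer with a full 20-percent down payment (80 percent LTV) would typically not need insurance under this rule.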
In addition, the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac) establish their own guidelines for the loans they will purchase in the secondary mortgage market. A borrower’s ability to repay the mortgage is often evaluated by computing the ratios of the borrower’s total debt burden and housing expenses to his/her income (referred to as “qualifying ratios”). The “total-debt-to-income ratio” compares all of the borrower’s long-term debt payments, including housing expenses, with his/her income. The “housing-expense-to-income ratio” compares the borrower’s expected housing expenses with his/her income.

The HMDA database contains information on mortgages insured through FHA’s principal single-family mortgage insurance program—the Section 203(b) program—and loans insured through FHA’s smaller single-family mortgage insurance programs, but does not distinguish between them. Consequently, sections of this testimony on FHA’s market share, the characteristics of FHA borrowers, and the borrowers who may have qualified for private mortgage insurance pertain to all single-family loans insured by FHA.

FHA has been a major player in single-family home financing for over 60 years, and it remains so today—particularly in certain market segments. Between 1986 and 1990, FHA was the largest insurer of single-family mortgages. The factors contributing to FHA’s large market share during these years may include an increase in FHA’s maximum loan limit in 1988 and economic downturns in some areas of the country that decreased the availability of private mortgage insurance. Except for FHA’s loan limit, the terms, such as maximum LTV ratio, under which FHA and VA mortgage insurance are available do not generally vary across different geographic locations, according to program guidelines.
However, PMI companies may change the conditions under which they will provide new insurance in a particular geographic area to reflect the increased risk of losses in an area experiencing economic hardship. By tightening the terms of the insurance they would provide, PMIs may have decreased their share of the market in economically stressed regions of the country. However, throughout the period from 1991 through 1996, the PMIs had a greater share of all insured single-family mortgage originations than FHA or VA. This change may be a result, in part, of increased premiums for FHA insurance implemented as a result of the Omnibus Budget Reconciliation Act of 1990 (P.L. 101-508). By 1996, the PMIs’ share of insured home purchase mortgages was 44 percent, FHA’s was 42 percent, and VA’s was 13 percent.

In our report on FHA’s role, we found that in 1994, FHA-insured home purchase loans were concentrated to a greater extent on low-income and minority borrowers, first-time home buyers, and borrowers with higher LTV ratios than were loans insured by private mortgage insurers. In addition, solely on the basis of our analysis of the LTV and qualifying ratios of borrowers who obtained loans in 1995, 66 percent of FHA’s borrowers might not have qualified for private mortgage insurance for the loans they received. However, it is important to note that, as with home buyers in general, most low-income and minority home buyers who obtained mortgages in fiscal year 1996 did not have insured mortgages. Recent HMDA, Mortgage Insurance Companies of America (MICA), and HUD data show that FHA-insured loans continue to be concentrated to a greater extent on borrowers with these same characteristics than are loans insured by private mortgage insurers.
Specifically, based on HMDA, MICA, and HUD data for loans made in 1996, we estimate that:

- FHA insured 23 percent of the 984,495 home purchase loans made to low-income home buyers, and such home buyers represented about 39 percent of FHA-insured loans. We also estimate that FHA insured more of these loans than the PMIs (14 percent) or VA (5 percent).

- FHA insured 30 percent of all loans made to minority home buyers, and such home buyers represented about 31 percent of FHA-insured loans. FHA insured more loans for minority borrowers in 1996 than the PMIs (14 percent) and substantially more than VA (6 percent).

- About 74 percent of FHA-insured loans in 1996 were made to first-time home buyers. FHA insured a higher percentage of loans for first-time home buyers than its overall share of the insured home purchase market.

- While 63 percent of FHA-insured loans made in 1996 had LTV ratios exceeding 95 percent, only about 7 percent of conventional loans below the maximum FHA loan limit had LTV ratios exceeding 95 percent in 1997.

Another major achievement of FHA’s single-family mortgage insurance program has been to restore the financial health of the Mutual Mortgage Insurance Fund (the Fund)—the insurance fund supporting 91 percent of the dollar value of FHA-insured single-family mortgages outstanding as of the end of fiscal year 1997. According to Price Waterhouse’s 1998 actuarial study, the Fund had an economic value/reserves of about $11.3 billion as of September 30, 1997. Over time, insurance premiums and other income have more than covered costs. The $11.3 billion estimate represents an improvement of about $14 billion from the lowest level reached by the Fund—a negative $2.7 billion economic value/reserves estimated by Price Waterhouse at the end of fiscal year 1990.
Price Waterhouse also reported that the Fund’s capital reserve ratio (economic value/reserves as a percentage of the value of outstanding loans) was 2.81 percent, surpassing the legislative target for reserves (a 2-percent capital ratio by November 2000). In addition, Price Waterhouse reported that the Fund will meet the legislative target for fiscal year 2000, estimating that by then the Fund will have a capital ratio of 3.21 percent and economic value/reserves of about $15.7 billion.

In our 1996 report on FHA’s role, we reported that the FHA, PMI, and VA mortgage insurance programs differed in terms of maximum LTV ratios and mortgage amounts, the financing of closing costs, and the amount that each will pay lenders to cover the losses associated with foreclosed loans, according to the guidance prepared by the insurers for lenders. Specifically, we reported that while both FHA and VA could insure loans with effective LTV ratios that exceed 100 percent (because of the financing of closing costs or other fees), PMIs did not offer insurance for loans with LTV ratios greater than 97 percent. Recently, both Fannie Mae and Freddie Mac announced the introduction of conventional 97-percent LTV mortgage products that offer many of the advantages of FHA’s single-family program. Both programs—Fannie Mae’s “Flexible 97 Mortgage” and Freddie Mac’s “Alt 97 Mortgage”—allow down payments as low as 3 percent that can be funded through gifts, unsecured loans from relatives, or grants from nonprofits or local governments.

With regard to limits on loan size, FHA today may insure loans only up to a maximum of $170,362 in certain areas with high housing costs, while PMIs and VA permit insurance of larger loans. In connection with settlement costs, FHA allows borrowers to finance most closing costs, but PMIs and VA do not. However, both FHA and VA allow borrowers to finance their insurance premiums.
Finally, while FHA protects lenders against nearly 100 percent of the loss associated with a foreclosed mortgage, PMIs and VA limit their coverage to a portion of the mortgage balance. PMIs generally cover only 20 to 35 percent, and VA covers only 25 to 50 percent, of the mortgage balance, even if a loss exceeds that amount.

With regard to the underwriting standards used by FHA, PMIs, and VA, we reported that while there were some differences in qualifying ratios, the guidance provided by the insurers showed few other clear differences in the underwriting standards for borrowers. Each of the insurers permits lenders to consider compensating factors, such as a large down payment, when a borrower does not meet the qualifying ratios. In addition, although lenders must apply established credit standards, each of the insurers relies on the individual judgment and interpretation of the lenders in evaluating the credit history of borrowers. Since the issuance of our 1996 report, automated underwriting systems that evaluate mortgage applications have been developed, which can reduce processing time significantly. Under a joint effort with Freddie Mac, HUD has approved Freddie Mac’s Loan Prospector automated underwriting system to underwrite FHA loans.

Besides FHA’s Section 203(b) and VA’s single-family loan programs, the federal government is involved in many other efforts to make homeownership affordable. In our 1996 report on FHA’s role, we reported that HUD at that time operated three grant programs—the Community Development Block Grant program, the HOME Investment Partnership program, and Housing Opportunities for People Everywhere—that promote affordable homeownership. The Federal Home Loan Bank System (FHLBank System) has its Affordable Housing Program and Community Investment Program, which provide subsidies, subsidized advances, or other advances to member institutions to be used to fund affordable housing projects and loans to home buyers.
The Department of Agriculture’s Rural Housing Service operates a subsidized direct loan program for low- and very-low-income rural Americans and a guaranteed loan program for moderate-income rural Americans. The state housing finance agencies, through the use of federal tax-exempt mortgage revenue bonds, provide financing for affordable homeownership. The Neighborhood Reinvestment Corporation, through its network of local development organizations and its secondary market organization, promotes affordable homeownership primarily through second mortgages and home buyer education. These programs provide assistance in the form of grants, direct loans, guaranties, interest subsidies, and other forms.

We also reported that there are several important distinctions between FHA’s single-family mortgage insurance programs and these other federal programs. First, FHA serves more homeowners than the other programs combined. In 1995, about 570,000 households took out insured loans through FHA’s programs, while about 500,000 homeowners may have been reached by the other programs. In addition, at least half of the other programs require federal funds, while FHA’s Section 203(b) program does not. Furthermore, the other programs are generally targeted at borrowers with low incomes or at borrowers who are otherwise underserved by the private market to a greater extent than FHA’s program. FHA’s Section 203(b) program is not restricted to low-income or otherwise underserved borrowers. In fact, diversifying risk by serving a wide variety of borrowers may have actually helped the program operate without federal funds, according to industry officials.

Several of the other federal programs assist low- and moderate-income home buyers by combining their assistance with FHA mortgage insurance. A substantial portion of the mortgages made through state housing finance agencies and HUD’s Housing Opportunities for People Everywhere program were insured by FHA in 1994.
Similarly, private mortgage insurance may also be combined with assistance from federal housing programs. For example, one private mortgage insurer that we reviewed provided insurance for mortgages assisted through a Neighborhood Reinvestment Corporation program.

The federal government also promotes homeownership by requiring major housing finance players to address housing finance needs. Specifically, Fannie Mae and Freddie Mac have legislatively set goals for affordable homeownership related to their purchase of mortgages made to low- and moderate-income borrowers and in low- and moderate-income areas. In addition, banks and thrifts are encouraged to lend in all areas of the communities they serve, including low- and moderate-income areas, through the Community Reinvestment Act. The federal government also promotes homeownership for the general public through federal tax provisions, such as the home mortgage interest deduction. The Joint Committee on Taxation estimates that, for 1995, the mortgage interest deduction alone was the second largest tax expenditure that the government provides to individuals, totaling an estimated $53.5 billion—exceeding the total tax expenditures given to corporations.

While FHA’s Fund is financially healthy and has surpassed the legislative target for reserves, there are challenges facing FHA today, including reducing the losses it incurs on foreclosed properties, maintaining financial self-sufficiency in the face of economic and other factors that could adversely affect future program costs, and resolving year 2000 computing risks. The greater the extent to which FHA can improve the efficiency of its lending operations, the greater its ability to maintain financial self-sufficiency in an uncertain future and meet the needs of lower-income borrowers, either by increasing the number of borrowers served or by reducing the cost of their mortgage insurance.
Each year, mortgage lenders foreclose on a portion of the FHA-insured mortgages that go into default and file insurance claims with HUD for their losses. Although FHA has always received enough in premiums from borrowers and other revenues to more than cover these losses, losses totaled about $12.8 billion in 1994 dollars, or about $24,400 for each foreclosed and subsequently sold single-family home, over the 19-year period ending in 1993. According to a Price Waterhouse analysis of loans insured between fiscal years 1975 and 1991, cumulative foreclosure rates as of September 30, 1997, ranged from a low of 4 percent of the loans FHA insured in the mid-1970s to 19 percent of the loans insured in fiscal year 1981. Losses sustained by FHA on foreclosures are financed by the Fund, ultimately reducing the Fund’s ability to withstand economic downturns and possibly resulting in higher premiums for FHA borrowers.

The impact that foreclosures can have on the financial health of the Fund was demonstrated during the 1980s. Until that time, the Fund remained relatively healthy. In the 1980s, however, losses were substantial, primarily because foreclosure rates were high in economically stressed regions, particularly the Rocky Mountain and Southwest regions. By the end of fiscal year 1990, the Fund’s economic value/reserves were estimated at about a negative $2.7 billion. If the Fund were unable to finance program and administrative costs, the U.S. Treasury would have to directly cover lenders’ claims and administrative costs.

More recently, claims paid by FHA in fiscal year 1997 were higher than expected. Actual claim payments for single-family insured loans totaled $4.5 billion, much higher than the $2.4 billion projected for fiscal year 1997 in the fiscal year 1998 budget.
Similarly, actual property acquisitions, properties sold, and the end-of-fiscal-year-1997 inventory level of single-family properties owned by HUD were much higher than projected in the fiscal year 1998 budget. Actual property acquisitions were $4.25 billion compared with $1.9 billion projected, properties sold were $3.8 billion compared with $2.5 billion projected, and the September 30, 1997, inventory of properties totaled $2 billion compared with $880 million projected. HUD attributed these problems in part to increasing claims, especially those from adjustable-rate mortgages (ARMs). Notwithstanding these unexpected financial results, the present value of estimated cash inflows to FHA’s single-family mortgage insurance program exceeded the present value of cash outflows by $1.8 billion for fiscal year 1997.

With regard to FHA’s ability to manage risks associated with defaults, annual audits of FHA’s financial statements have identified weaknesses in FHA’s ability to manage risks associated with troubled single-family insured mortgages. The audit report on FHA’s fiscal year 1997 financial statements—the most recent available—identified a material internal control weakness applicable, in varying degrees, to both the single-family and multifamily programs. Specifically, the report stated that FHA must place more emphasis on early warning and loss prevention for insured mortgages by, among other things, focusing its quality assurance enforcement actions on the accuracy of delinquency and default data submitted to FHA. According to the report, FHA does not have adequate systems, processes, or resources to effectively identify and manage risks in its insured portfolios. Timely identification of troubled insured mortgages is a key element of FHA’s efforts to target resources on high-risk insured mortgages. Troubled insured mortgages must be identified before FHA can institute loss mitigation techniques that can reduce eventual claims.
The report notes that although the single-family insured mortgage portfolio is large, automated monitoring using statistical and trend analysis can be applied effectively. As we have reported, the Fund's ability to maintain the target ratio will depend on many economic, program-related, and other factors that will affect the financial health of the Fund in the future. These factors include (1) economic conditions, (2) uncertainty surrounding the projections of the performance of FHA's streamlined refinanced and ARM loans, and (3) risks associated with the demand for FHA's loans. We also reported in May 1997 that reducing FHA's insurance coverage to the level permitted for VA home loans would likely reduce the Fund's exposure to financial losses, thereby improving its financial health. Estimates of the economic value/reserves of the Fund are sensitive to future economic conditions, particularly house price appreciation rates. The Fund will not perform as well if the economic conditions that prevail over the next 30 years replicate those assumed in pessimistic economic scenarios. Price Waterhouse's estimate of the Fund's economic value/reserves under its pessimistic economic scenario is about $2.4 billion, or 21 percent, less than its estimate of $11.3 billion as of September 30, 1997. Also, the substantial refinancing of FHA's loans and the growth in the number of FHA ARMs insured in recent years have created a growing class of FHA borrowers whose future behavior is more difficult to predict than that of the typical FHA borrower. FHA's streamlined refinanced mortgages and ARMs accounted for about 32 percent of the dollar value of FHA's loans outstanding at the end of fiscal year 1997—streamlined refinanced mortgages accounted for about 15 percent of the value of the outstanding loans and ARMs for about 17 percent. FHA has little experience with streamlined refinanced mortgages and ARMs and with the tendency of such loans to be foreclosed upon or prepaid.
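The sensitivity and portfolio-share figures above are straightforward arithmetic. This sketch uses only numbers from the testimony (the $11.3 billion base estimate, the $2.4 billion pessimistic-scenario reduction, and the 15- and 17-percent portfolio shares):

```python
# Sensitivity of the Fund's estimated economic value/reserves as of
# September 30, 1997, under Price Waterhouse's pessimistic scenario.
base_estimate = 11.3e9          # base estimate of economic value/reserves
pessimistic_reduction = 2.4e9   # reduction under the pessimistic scenario

pct_reduction = pessimistic_reduction / base_estimate * 100
pessimistic_estimate = base_estimate - pessimistic_reduction
print(f"Pessimistic estimate: ${pessimistic_estimate / 1e9:.1f} billion "
      f"({pct_reduction:.0f}% below the base estimate)")
# $8.9 billion, about 21% below the $11.3 billion base

# Shares of the portfolio whose behavior is harder to predict (end of FY 1997).
streamlined_share, arm_share = 0.15, 0.17
print(f"Combined harder-to-predict share: {streamlined_share + arm_share:.0%}")
# 32% of the dollar value of outstanding loans
```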
Because properties with FHA-insured mortgages that were streamlined refinanced were not required to be appraised, the initial LTV ratio of these loans—a key predictor of the probability of foreclosure—is unknown. The impact of these loans on the financial health of the Fund is probably positive, since they represent preexisting FHA business whose risk has been reduced through lower interest rates and lower monthly payments. However, the lack of experience with these loans increases the uncertainty associated with their expected foreclosure rates. This refinancing activity also raises questions about the credit quality of the loans that were not refinanced despite the fall in interest rates. Since, under these circumstances, most borrowers who could refinance would find it to their financial advantage to do so, those borrowers who did not refinance may not have been able to qualify for a new loan. This suggests that future foreclosure rates on these loans, which originated in previous years when interest rates were higher, may be greater than forecasted. As additional years of experience with these loans are gained, their effect on the Fund's financial status will become more certain. In addition, new developments in the private mortgage insurance and secondary mortgage markets may increase the average risk of future FHA-insured loans. Home buyers' demand for FHA-insured loans depends, in part, on the alternatives available to them. Some PMIs have begun offering mortgage insurance coverage on conventional mortgages with a 97-percent LTV ratio, which brings their terms closer to FHA's 97.75-percent LTV ratio on loans for properties exceeding $50,000 in appraised value. In addition, as discussed previously, Fannie Mae and Freddie Mac recently announced the introduction of conventional 97-percent LTV mortgage products that offer many of the advantages of FHA's single-family loans.
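A loan-to-value (LTV) ratio is simply the loan amount divided by the property's appraised value, so each LTV ceiling implies a minimum down payment. The sketch below uses the 97.75-percent and 97-percent ceilings cited in the testimony; the $100,000 property price is a hypothetical assumption for illustration only:

```python
# Illustrative down-payment comparison implied by the LTV ceilings cited
# in the testimony. The $100,000 appraised value is hypothetical.
def ltv(loan_amount: float, appraised_value: float) -> float:
    """Return the loan-to-value ratio as a fraction."""
    return loan_amount / appraised_value

price = 100_000                       # hypothetical appraised value
fha_max_loan = price * 0.9775         # FHA ceiling for homes over $50,000
conventional_max_loan = price * 0.97  # new 97-percent conventional products

print(f"FHA minimum down payment:          ${price - fha_max_loan:,.0f}")
print(f"Conventional minimum down payment: ${price - conventional_max_loan:,.0f}")
# On a $100,000 home: $2,250 down under FHA's ceiling versus $3,000 down
# under the 97-percent conventional products
```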
While potential home buyers may consider many other factors when financing their mortgages, such as the fact that FHA will finance the up-front premium as part of the mortgage loan, these actions by PMIs, Fannie Mae, and Freddie Mac could reduce the demand for FHA-insured mortgage loans. In particular, by lowering the required down payment, PMIs and others might attract some borrowers who would otherwise have insured their mortgages with FHA. If, by selectively offering these low down payment loans, the conventional market is able to attract FHA's lower-risk borrowers, such as borrowers with better-than-average credit histories or payment-to-income ratios, new FHA loans may become more risky on average. If this effect is substantial, the economic value/reserves of the Fund may be adversely affected, and it may be more difficult for the Fund to maintain a 2-percent capital ratio. Lastly, FHA protects private lenders against nearly all losses resulting from foreclosures on the single-family homes it insures. By contrast, VA's single-family mortgage guaranty program covers only 25 to 50 percent of the original loan amount against losses incurred when borrowers default on loans, leaving lenders responsible for any remaining losses. In our May 1997 report, we concluded that reducing FHA's insurance coverage to the level permitted for VA home loans would likely reduce the Fund's exposure to financial losses, thereby improving its financial health. As a result, the Fund's ability to maintain financial self-sufficiency in an uncertain future would be enhanced. However, reducing FHA's insurance coverage does pose trade-offs affecting lenders, borrowers, and FHA's role, such as diminishing the federal role in stabilizing markets. The borrowers most likely to be affected would be low-income, first-time, and minority home buyers and those individuals purchasing older homes.
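The difference between FHA's near-full coverage and a VA-style partial guaranty can be illustrated with a small sketch. The 25-to-50-percent guaranty range is from the testimony; the loan amount and the loss at foreclosure are hypothetical assumptions:

```python
# Illustrative comparison of insurer exposure under near-full (FHA-style)
# coverage versus a partial (VA-style) guaranty. The $100,000 loan and
# $30,000 loss are hypothetical; the 25-50% guaranty range is from the
# testimony.
original_loan = 100_000
loss_at_foreclosure = 30_000   # hypothetical loss after the property is sold

# FHA-style coverage: the insurer absorbs nearly all of the loss.
fha_insurer_pays = loss_at_foreclosure

# VA-style guaranty: the payout is capped at a fraction of the original
# loan amount; the lender absorbs anything beyond the cap.
guaranty_fraction = 0.25       # low end of the 25-50 percent range
guaranty_cap = guaranty_fraction * original_loan
va_insurer_pays = min(loss_at_foreclosure, guaranty_cap)
lender_absorbs = loss_at_foreclosure - va_insurer_pays

print(f"FHA-style insurer payout: ${fha_insurer_pays:,}")
print(f"VA-style insurer payout:  ${va_insurer_pays:,.0f} "
      f"(lender absorbs ${lender_absorbs:,.0f})")
# With a $30,000 loss and a 25% guaranty on $100,000: insurer pays $25,000
# and the lender absorbs $5,000
```

This cap on the insurer's payout is why reducing coverage toward VA's levels would shift some losses from the Fund to lenders.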
To illustrate the financial impact of reducing FHA's insurance coverage, our report pointed out that if insurance coverage on FHA's 1995 loans were reduced to VA's levels and a reduction in FHA lending volume were assumed, we estimate that the economic value of the loans would be $52 million to $79 million greater than our estimate assuming no reductions in coverage and volume. Reducing FHA's insurance coverage would likely improve the financial health of the Fund because the reduction in claim payments resulting from lowered insurance coverage would more than offset the decrease in premium income resulting from reduced lending volume. The amount of savings that would be realized by reducing FHA's insurance coverage would depend on future economic conditions, the volume of loans made, the mix of higher-risk and lower-risk borrowers that would leave the program, and whether some losses might be shifted from FHA to the Government National Mortgage Association. The financial health of FHA's Fund could also be adversely affected by Year 2000 computing risks. In March 1998, we testified on the nation's Year 2000 computing crisis as well as our initial assessment of HUD's Year 2000 program. The upcoming change of century is a sweeping and urgent challenge for public- and private-sector organizations. We reported that, among other things, HUD is behind schedule on a number of its mission-critical systems. While the delays on some of these systems are of only a few days, some are experiencing delays of 2 months or more. This is significant because HUD is reporting that 5 of its mission-critical systems have “failure dates”—the first date on which a system will fail to recognize and process dates correctly—between August 1, 1998, and January 1, 1999. In this regard, we reported that HUD's system for processing claims made by lenders on defaulted single-family-home loans is 75 days behind schedule for renovation.
The system is now scheduled to be implemented on November 4—only 58 days shy of January 1, 1999, the date that HUD has determined the current system will fail. In fiscal year 1997, this system processed, on average, a reported $354 million of lenders’ claims each month for defaulted insured loans. If this system fails, these lenders will not be paid on a timely basis; the economic repercussions could be widespread. To better ensure completion of work on mission-critical systems, HUD officials have recently decided to halt routine maintenance on five of its largest systems. Further, according to Year 2000 project officials, if more delays threaten key implementation deadlines for mission-critical systems, they will stop work on nonmission-critical systems in order to focus all resources on the most important ones. We concurred with HUD’s plans to devote additional attention to its mission-critical systems. Before closing, Mr. Chairman, I will discuss two other FHA issues that I understand are of interest to the Subcommittee. In April 1998, we reported on our review of two risk-demonstration programs aimed at facilitating the financing of affordable multifamily housing and HUD’s administration of them. The two risk-sharing demonstration programs established by the Housing and Community Development Act of 1992 offer incentives to financial institutions to facilitate the financing of affordable multifamily housing and to make that financing available in a timely manner. One program provides credit enhancement to state and local housing finance agencies, while the other provides reinsurance to qualified financial institutions. We reported that the credit enhancement program is meeting these goals. As of September 1997, the 32 participating state and local housing finance agencies had reserved about 84 percent of the risk-sharing units allocated to these agencies through March 1996. 
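As a brief aside on the Year 2000 schedule discussed above, the stated margin between the claims system's November 4, 1998, implementation date and the January 1, 1999, failure date can be confirmed with simple date arithmetic:

```python
from datetime import date

# Checking the Year 2000 schedule margin cited in the testimony: the
# renovated claims system was scheduled for November 4, 1998, and HUD
# determined the existing system would fail on January 1, 1999.
implementation = date(1998, 11, 4)
failure_date = date(1999, 1, 1)

margin = (failure_date - implementation).days
print(f"Days between implementation and failure date: {margin}")
# 58 days, matching the "58 days shy" figure in the testimony
```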
Most of the insured loans are financing properties that serve more low-income households than required, apparently because the credit enhancement is being used with other subsidies, particularly low-income housing tax credits. While it is still too soon to evaluate the financial performance of the insured loans, the available financial indicators reflect sound underwriting standards. Participation in the credit enhancement program has enabled the housing finance agencies to leverage their reserves and insure loans more quickly. According to the participating agencies, the program would be improved if it were made permanent and the current limits on the number of available risk-sharing units were lifted. These changes, they said, would enable them to market the program and manage their resources for multifamily programs more effectively. Activity in the reinsurance program has been so limited that the program remains largely untested. Only one institution—Fannie Mae—has participated extensively in the program, and one lender—Banc One Capital Funding Corporation—has originated over half of the loans that Fannie Mae has reinsured. Banc One’s activity has demonstrated that the risk-sharing reinsurance program can expand participation in mortgage lending, including lending for smaller properties in rural areas—an unmet capital need, according to HUD’s studies. However, for a variety of reasons, HUD’s other risk-sharing partners have reserved few or none of their risk-sharing units. Opportunities to expand participation include reallocating unused units to Fannie Mae and allowing the use of risk-sharing reinsurance (1) with 18-year balloon mortgages—an option that is currently available only to Fannie Mae—and (2) with loan pools as well as individual loans. Participation in the demonstration programs has enabled HUD to facilitate the financing of affordable multifamily housing while limiting its loss exposure through risk sharing. 
Participation has also allowed HUD to increase the efficiency and reduce the costs of its operations through delegation, compared with FHA's traditional multifamily programs. HUD has retained responsibility for monitoring its risk-sharing partners' performance, but its data system for monitoring the progress of credit enhancement projects is unreliable. HUD is aware of the system's problems and plans to resolve them in the course of overhauling all of its management information systems. HUD has also retained responsibility for overseeing its risk-sharing partners' compliance with the demonstration programs' requirements; however, our review identified one default that was not reported to HUD headquarters for over a year. HUD recognizes that effective oversight is critical, particularly if one or both of the demonstration programs are made permanent and lenders' activity increases. Our report makes recommendations designed to encourage greater activity in the reinsurance program and to improve HUD's monitoring and oversight of the federal government's risk-sharing partners. HUD agreed with our recommendations and said that it was taking or planned to take steps to implement them. We also testified recently on the preliminary results of our assessment of certain aspects of HUD's management and oversight of its loan insurance program for home improvements under Title I of the National Housing Act. We reported that our preliminary analysis shows that HUD is not collecting the information needed to manage the program and provides limited oversight of lenders' compliance with program regulations. We reported that, when loans are made, HUD collects little information on program borrowers, properties, and loan terms, such as the borrower's income and the address of the property being improved. Moreover, HUD does not maintain information on why it denies loan claims or why it subsequently approves some for payment.
HUD also provides limited oversight of lenders' compliance with program regulations, conducting only four on-site reviews of the approximately 3,700 program lenders in fiscal year 1997. Regarding the need for oversight of lenders' compliance, we reported that loan claim files submitted by lenders to HUD following loan defaults often do not contain required loan documents, including the certifications signed by the borrower that the property improvement work has been completed. In addition, some claims were paid by HUD even though there were indications that lenders did not comply with required underwriting standards when insuring the loans. As a result of the management and oversight weaknesses we observed, we reported that our preliminary work indicates that HUD does not know whom the program is serving, whether lenders are complying with program regulations, and whether certain potential program abuses are occurring, such as violations of the $25,000 limitation on the amount of Title I loan indebtedness for each property. HUD officials attributed these weaknesses to the program's being lender-operated, limited staff resources, and HUD's assignment of monitoring priorities. We plan to report on the results of our assessment this summer. In closing, Mr. Chairman, FHA is a prominent player in the home mortgage loan market, particularly for low-income and minority borrowers, first-time home buyers, and borrowers with high LTV ratios. The mortgage loan terms offered by FHA, as well as by VA, still differ in important ways from those offered by PMIs. Solely on the basis of the LTV and qualifying ratios of borrowers, many FHA borrowers in 1995 may not have been able to obtain, or could have been delayed in obtaining, a home mortgage without the more lenient terms offered by FHA. Also, FHA has been able to serve such borrowers without the need for any federal funds.
While FHA’s Mutual Mortgage Insurance Fund, which supports nearly all of FHA’s single-family mortgages, is financially healthy and is projected to continue to improve at least in the near term, improving FHA’s efficiency over its single-family mortgage insurance operation would enhance the Fund’s ability to maintain financial self-sufficiency in an uncertain future and meet the needs of lower-income borrowers through either increasing the number of borrowers served or reducing the cost of insurance for those FHA serves. This is important because forecasts to determine whether FHA will have the funds it needs to cover its losses over the 30-year life of an FHA mortgage are uncertain. Loan performance will depend on a number of economic and other factors over that period, such as uncertainty surrounding the projections of the performance of FHA’s streamlined refinanced and ARM loans. Mr. Chairman, this concludes my statement. We would be pleased to respond to any questions that you or members of the Subcommittee may have. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO discussed: (1) the achievements of the Federal Housing Administration's (FHA) home mortgage insurance program, including the extent to which home buyers use FHA insurance, the characteristics of these home buyers--including whether they were first-time home buyers--and how many of them might also qualify for private mortgage insurance; (2) how the insurance terms available through FHA's principal single-family mortgage insurance program compare with private mortgage insurance and guaranties from the Department of Veterans Affairs (VA); (3) other federal activities that promote affordable homeownership; and (4) challenges faced by FHA in ensuring the financial health of its Mutual Mortgage Insurance Fund--the insurance fund supporting most FHA-insured single-family mortgages. GAO noted that: (1) FHA is a major participant in the single-family housing market; (2) of the approximately 3.8 million home purchase loans made in fiscal year 1996, FHA insured 16 percent; (3) while most of these mortgages were not insured, about 39 percent were; (4) FHA insured 42 percent of all insured home purchase loans in 1996 and fulfilled a larger role in some specific market segments, particularly for low-income home buyers and minorities; (5) most borrowers were able to obtain a home purchase mortgage without insurance from FHA, the private mortgage insurers, or VA; (6) while a third of the loans FHA insured in 1995 might have qualified for private mortgage insurance, the other two-thirds probably would not have qualified, on the basis of the loan-to-value and qualifying ratios of the loans FHA insured; (7) FHA and VA programs permit borrowers to make smaller down payments and have higher total-debt-to-income ratios than allowed by private mortgage insurers; (8) FHA's program differs from private mortgage insurers' and VA's programs in that it allows closing costs to be financed in the mortgage; (9) in addition to FHA and VA, the federal government promotes affordable homeownership through
programs run by the Department of Housing and Urban Development, the Department of Agriculture's Rural Housing Service, the Federal Home Loan Bank System, state housing finance agencies, and the Neighborhood Reinvestment Corporation; (10) although these other federal programs share FHA's mission to assist households who may be underserved by the private mortgage market, none reach as many households as FHA; (11) several of these other programs assist home buyers by combining their assistance with FHA mortgage insurance; (12) the federal government promotes homeownership among buyers who might otherwise be underserved through requirements placed upon the Federal National Mortgage Association, the Federal Home Loan Mortgage Corporation, and certain lenders; and (13) although FHA's single-family program is financially self-sufficient, there are challenges facing FHA today, including reducing the losses it incurs on foreclosed properties, maintaining financial self-sufficiency in the face of economic and other factors that could adversely affect future program costs, and resolving Year 2000 computing risks.